903 results for Geo-statistical model
Abstract:
An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general, nonlinear, mathematical programming problem. Primary contaminant emission levels are the independent variables. The objective function is the cost of attaining various emission levels and is to be minimized subject to constraints that given air quality levels be attained.
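In symbols, the program just described has the generic form below (the notation is illustrative, not the paper's own):

$$
\min_{E \ge 0} \; C(E) \quad \text{subject to} \quad Q_j(E) \le S_j, \qquad j = 1, \dots, m,
$$

where $E$ is the vector of primary contaminant emission levels, $C(E)$ the cost of attaining them, $Q_j(E)$ the resulting air quality measures, and $S_j$ the corresponding standards.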
The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975, with emissions specified by a two-dimensional vector: total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.
The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 with the present new car control program and with the degree of stationary source control existing in 1971. Controls, basically "add-on devices", are considered here for used cars, aircraft, and existing stationary sources. It is found that with these added controls, Los Angeles County emission levels [(1300 tons/day RHC, 1000 tons/day NOx) in 1969 and (670 tons/day RHC, 790 tons/day NOx) at the base 1975 level] can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).
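As a hedged sketch of how such a least-cost emission control problem can be posed as a linear program, the toy example below uses made-up control options, costs, and reductions (none of these numbers come from the study); only the base 1975 emission levels are taken from the abstract.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical add-on controls, applied at fractions x in [0, 1].
# Annualized costs ($/yr) and emission reductions (tons/day) are illustrative.
cost = np.array([20e6, 35e6, 15e6])        # cost of full application of each control
rhc_cut = np.array([150.0, 200.0, 60.0])   # RHC reduction at full application
nox_cut = np.array([40.0, 120.0, 180.0])   # NOx reduction at full application

base_rhc, base_nox = 670.0, 790.0          # base 1975 emissions (tons/day)
target_rhc, target_nox = 450.0, 550.0      # desired emission levels (illustrative)

# Minimize total cost subject to (base - reductions) <= target, 0 <= x <= 1.
A_ub = np.vstack([-rhc_cut, -nox_cut])
b_ub = np.array([target_rhc - base_rhc, target_nox - base_nox])
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)] * 3)
print(res.x, res.fun)                       # optimal control levels and least cost
```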
"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and by using certain simple physical assumptions, (e. g., that emissions are reduced proportionately at all points in space and time). For NO2, (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles, (55 in 1969), can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons /day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively, (at the 1969 NOx emission level).
The control cost-emission level and air quality-emission level relationships are combined in a graphical solution of the complete model to find the cost of various air quality levels. Best possible air quality levels with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program) with an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new car control program for Los Angeles County motor vehicles in 1975).
Abstract:
Steady-state procedures, by their very nature, cannot deal with dynamic situations. Statistical models require extensive calibration, and predictions often have to be made for environmental conditions outside the original calibration range. In addition, the calibration requirement makes them difficult to transfer to other lakes. To date, no computer programs have been developed which will successfully predict changes in species of algae. The obvious solution to these limitations is to apply our limnological knowledge to the problem and develop functional models, thus reducing the requirement for such rigorous calibration. Reynolds has proposed a model, based on fundamental principles of algal response to environmental events, which has successfully recreated the maximum observed biomass, the timing of events, and a fair simulation of the species succession in several lakes. A forerunner of this model was developed jointly with Welsh Water under contract to Messrs. Wallace Evans and Partners, for use in the Cardiff Bay Barrage study. In this paper the authors test a much-developed form of this original model against a more complex data-set and, using a simple example, show how it can be applied as an aid in the choice of management strategy for the reduction of problems caused by eutrophication. Some further developments of the model are indicated.
Abstract:
We describe an age-structured statistical catch-at-length analysis (A-SCALA) based on the MULTIFAN-CL model of Fournier et al. (1998). The analysis is applied independently to both the yellowfin and the bigeye tuna populations of the eastern Pacific Ocean (EPO). We model the populations from 1975 to 1999, based on quarterly time steps. A single stock is assumed for each species in each analysis, but multiple spatially separate fisheries are modeled to allow for spatial differences in catchability and selectivity. The analysis allows for error in the effort-fishing mortality relationship, temporal trends in catchability, temporal variation in recruitment, relationships between the environment and recruitment and between the environment and catchability, and differences in selectivity and catchability among fisheries. The model is fit to total catch data and proportional catch-at-length data conditioned on effort. The A-SCALA method is a statistical approach, and therefore recognizes that the data collected from the fishery do not perfectly represent the population. Also, there is uncertainty in our knowledge about the dynamics of the system and uncertainty about how the observed data relate to the real population. The use of likelihood functions allows us to model the uncertainty in the data collected from the population, and the inclusion of estimable process error allows us to model the uncertainties in the dynamics of the system. The statistical approach allows for the calculation of confidence intervals and the testing of hypotheses. We use a Bayesian version of the maximum likelihood framework that includes distributional constraints on temporal variation in recruitment, the effort-fishing mortality relationship, and catchability. Curvature penalties for selectivity parameters and penalties on extreme fishing mortality rates are also included in the objective function. The mode of the joint posterior distribution is used as an estimate of the model parameters. Confidence intervals are calculated using the normal approximation method. It should be noted that the estimation method includes constraints and priors and therefore the confidence intervals are different from traditionally calculated confidence intervals. Management reference points are calculated, and forward projections are carried out to provide advice for making management decisions for the yellowfin and bigeye populations.
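The following toy illustrates, under invented data and penalty weights, the kind of penalized (Bayesian-style) objective described above: a catch likelihood plus distributional and curvature penalties, minimized to obtain the posterior mode, with normal-approximation standard errors from the inverse Hessian. It is a sketch of the general approach, not the A-SCALA implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_years = 25
true_rec = np.exp(rng.normal(0.0, 0.4, n_years))   # "true" recruitment multipliers
obs_catch = rng.poisson(100 * true_rec)            # observed catches (toy data)

def objective(log_rec_dev):
    rec = 100 * np.exp(log_rec_dev)
    nll = np.sum(rec - obs_catch * np.log(rec))               # Poisson negative log-likelihood
    prior = 0.5 * np.sum(log_rec_dev ** 2) / 0.4 ** 2         # lognormal constraint on recruitment
    curvature = 10.0 * np.sum(np.diff(log_rec_dev, 2) ** 2)   # smoothness (curvature) penalty
    return nll + prior + curvature

fit = minimize(objective, np.zeros(n_years), method="L-BFGS-B")
mode = fit.x                     # mode of the joint posterior
cov = fit.hess_inv.todense()     # normal-approximation covariance
se = np.sqrt(np.diag(cov))       # approximate standard errors
print(mode[:5], se[:5])
```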
Abstract:
In experiments, we have found an anomalous relationship between the probability of laser-induced damage and the number density of surface inclusions. From the results of X-ray diffraction (XRD) and laser-induced damage testing, we conclude that bulk inclusions play a key role in the damage process. Combining the thermo-mechanical damage process with the statistics of the inclusion density distribution, we have derived an equation relating the probability of laser-induced damage to the number density of inclusions, the power density of the laser pulse, and the thickness of the films. This model reveals the relationship between the critical size of the dangerous inclusions (those which can initiate film damage), the embedded depth of the inclusions, the thermal diffusion length, and the tensile strength of the films. It extends the earlier work, which treated the statistics of surface inclusions only. (c) 2006 Elsevier B.V. All rights reserved.
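The paper's equation itself is not reproduced in the abstract; purely as a hedged illustration of the general form such inclusion-statistics damage models take (our notation, not the authors'), the damage probability is often written as a Poisson-type expression

$$
P(F, d) \;=\; 1 - \exp\!\bigl[-\,n_v\, V_{\mathrm{eff}}(F, d)\bigr],
$$

where $n_v$ is the number density of dangerous inclusions and $V_{\mathrm{eff}}(F, d)$ is the effective volume, depending on the laser power density $F$ and the film thickness $d$, within which an inclusion of at least the critical size can initiate damage.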
Abstract:
Paired-tow calibration studies provide information on changes in survey catchability that may occur because of some necessary change in protocols (e.g., a change in vessel or gear) in a fish stock survey. This information is important to ensure the continuity of annual time-series of survey indices of stock size that provide the basis for fish stock assessments. There are several statistical models used to analyze the paired-catch data from calibration studies. Our main contributions are results from simulation experiments designed to measure the accuracy of statistical inferences derived from some of these models. Our results show that a model commonly used to analyze calibration data can provide unreliable statistical results when there is between-tow spatial variation in the stock densities at each paired-tow site. However, a generalized linear mixed-effects model gave very reliable results over a wide range of spatial variations in densities, and we recommend it for the analysis of paired-tow survey calibration data. This conclusion also applies if there is between-tow variation in catchability.
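A hedged sketch of a mixed-effects analysis of paired-tow data: a Gaussian model on log-transformed catches stands in for the generalized linear mixed-effects model recommended above, and the data are simulated rather than taken from any survey.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_sites = 60
site_density = rng.lognormal(3.0, 1.0, n_sites)        # between-site variation in density
tow_effect = rng.lognormal(0.0, 0.3, (n_sites, 2))     # between-tow spatial variation
rho = 0.7                                              # true relative catchability (new/old)
mu = np.column_stack([site_density, rho * site_density]) * tow_effect
catch = rng.poisson(mu)

df = pd.DataFrame({
    "log_catch": np.log(catch.ravel() + 1.0),
    "vessel": ["old", "new"] * n_sites,
    "site": np.repeat(np.arange(n_sites), 2),
})
# Random intercept per paired-tow site; the vessel coefficient estimates
# the log of the relative catchability between the two survey protocols.
fit = smf.mixedlm("log_catch ~ vessel", df, groups=df["site"]).fit()
print(fit.summary())
```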
Abstract:
The effects of damping on energy sharing in coupled systems are investigated. The approach taken is to compute the forced response patterns of various idealised systems, and from these to calculate the parameters of a Statistical Energy Analysis (SEA) model for the systems using the matrix inversion approach [1]. It is shown that when SEA models are fitted by this procedure, the values of the coupling loss factors depend significantly on damping except when the damping is sufficiently high. For very lightly damped coupled systems, varying the damping causes the values of the coupling loss factor to vary in direct proportion to the internal loss factor. In the limit of zero damping, the coupling loss factors tend to zero. This view contrasts strongly with 'classical' SEA, in which coupling loss factors are determined by the nature of the coupling between subsystems, independent of subsystem damping. One implication of the strong damping dependency is that equipartition of modal energy under low damping does not in general occur. This is contrary to the classical SEA prediction that equipartition of modal energy always occurs if the damping can be reduced to a sufficiently small value. It is demonstrated that the use of this classical assumption can lead to gross overestimates of subsystem energy ratios, especially in multi-subsystem structures. © 1996 Academic Press Limited.
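A minimal numerical sketch of the matrix inversion (power injection) approach referred to above, for a two-subsystem case with made-up energy values: the loss-factor matrix is recovered from the subsystem energies computed under unit power input to each subsystem in turn.

```python
import numpy as np

omega = 2 * np.pi * 500.0       # band centre frequency (rad/s)

# E[i, j] = time-averaged energy of subsystem i when unit power is injected
# into subsystem j (values are illustrative only).
E = np.array([[2.0e-2, 3.0e-3],
              [4.0e-3, 1.5e-2]])
P = np.eye(2)                    # unit power input to each subsystem in turn

# SEA power balance P = omega * L @ E, with L built from internal and
# coupling loss factors; invert the energy matrix to recover L.
L = P @ np.linalg.inv(E) / omega
eta_21 = -L[0, 1]                # coupling loss factor, subsystem 2 -> 1
eta_12 = -L[1, 0]                # coupling loss factor, subsystem 1 -> 2
eta_1 = L[0, 0] + L[1, 0]        # internal loss factor of subsystem 1
eta_2 = L[1, 1] + L[0, 1]        # internal loss factor of subsystem 2
print(eta_12, eta_21, eta_1, eta_2)
```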
Abstract:
We present a method to integrate environmental time series into stock assessment models and to test the significance of correlations between population processes and the environmental time series. Parameters that relate the environmental time series to population processes are included in the stock assessment model, and likelihood ratio tests are used to determine whether these parameters significantly improve the fit to the data. Two approaches are considered to integrate the environmental relationship. In the environmental model, the population dynamics process (e.g. recruitment) is proportional to the environmental variable, whereas in the environmental model with process error it is also proportional to the environmental variable but with additional temporal variation (process error) constrained by a log-normal distribution. The methods are tested by using simulation analysis and compared to the traditional method of correlating model estimates with environmental variables outside the estimation procedure. In the traditional method, the estimates of recruitment were provided by a model that allowed recruitment only a temporal variation constrained by a log-normal distribution. We illustrate the methods by applying them to test the statistical significance of the correlation between sea-surface temperature (SST) and recruitment to the snapper (Pagrus auratus) stock in the Hauraki Gulf–Bay of Plenty, New Zealand. Simulation analyses indicated that the integrated approach with additional process error is superior to the traditional method of correlating model estimates with environmental variables outside the estimation procedure. The results suggest that, for the snapper stock, recruitment is positively correlated with SST at the time of spawning.
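A toy version of the likelihood-ratio test described above, asking whether an SST effect on (log) recruitment significantly improves the fit; all data are simulated and the model is deliberately simplified to a regression rather than a full stock assessment.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(3)
n = 30
sst = rng.normal(0.0, 1.0, n)
log_rec = 5.0 + 0.4 * sst + rng.normal(0.0, 0.3, n)   # simulated "true" SST effect

def nll(params, include_env):
    mu0, beta, log_sigma = params
    mu = mu0 + (beta * sst if include_env else 0.0)
    sigma = np.exp(log_sigma)
    return 0.5 * np.sum(((log_rec - mu) / sigma) ** 2) + n * log_sigma

fit_env = minimize(nll, [5.0, 0.0, 0.0], args=(True,))
fit_base = minimize(nll, [5.0, 0.0, 0.0], args=(False,))
lr = 2.0 * (fit_base.fun - fit_env.fun)    # likelihood-ratio statistic
p_value = chi2.sf(lr, df=1)                # one extra parameter in the env. model
print(lr, p_value)
```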
Abstract:
Recreational fisheries in the waters off the northeast U.S. target a variety of pelagic and demersal fish species, and catch and effort data sampled from recreational fisheries are a critical component of the information used in resource evaluation and management. Standardized indices of stock abundance developed from recreational fishery catch rates are routinely used in stock assessments. The statistical properties of both simulated and empirical recreational fishery catch-rate data such as those collected by the National Marine Fisheries Service (NMFS) Marine Recreational Fishery Statistics Survey (MRFSS) are examined, and the potential effects of different assumptions about the error structure of the catch-rate frequency distributions in computing indices of stock abundance are evaluated. Recreational fishery catch distributions sampled by the MRFSS are highly contagious and overdispersed in relation to the normal distribution and are generally best characterized by the Poisson or negative binomial distributions. The modeling of both the simulated and empirical MRFSS catch rates indicates that one may draw erroneous conclusions about stock trends by assuming the wrong error distribution in procedures used to develop standardized indices of stock abundance. The results demonstrate the importance of considering not only the overall model fit and significance of classification effects, but also the possible effects of model misspecification, when determining the most appropriate model construction.
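A hedged sketch of fitting alternative error structures to catch-rate data of the kind discussed above: the simulated catches are overdispersed, and a Poisson and a negative binomial GLM (with year treated as the standardization factor) are compared. None of this reflects the actual MRFSS data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_trips, n_years = 400, 5
year = rng.integers(0, n_years, n_trips)
mu = np.exp(0.5 + 0.2 * year)                        # mean catch per trip by year
catch = rng.negative_binomial(n=2, p=2 / (2 + mu))   # overdispersed counts

X = sm.add_constant(np.eye(n_years)[year][:, 1:])    # year as dummy variables
pois = sm.GLM(catch, X, family=sm.families.Poisson()).fit()
nb = sm.GLM(catch, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(pois.aic, nb.aic)    # the assumed error structure changes the fit and the index
```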
Abstract:
The distribution of fish caught by experimental gill nets has been found to follow the Poisson or negative binomial form. Using this information, the application of the chi-square test suggested by Mood et al. (1974) is illustrated for comparing the efficiencies of gill nets. This test provides an alternative to the ANOVA F-test, especially when assessing the significance of non-additivity in the two-way model. Based on the present work and the findings of Nair (1982) and Nair & Alagaraja (1982, 1984), an outline approach for the statistical comparison of the efficiencies of fishing gear is presented.
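An illustrative chi-square comparison of catches by two gill nets (not necessarily the exact procedure of Mood et al. 1974 cited above); the counts are invented.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: two gill nets; columns: fishing occasions.
catches = np.array([[23, 31, 18, 27],
                    [41, 38, 29, 35]])
stat, p_value, dof, expected = chi2_contingency(catches)
print(stat, p_value)   # a small p-value suggests the nets differ in efficiency
```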
Abstract:
Condition-based maintenance is concerned with the collection and interpretation of data to support maintenance decisions. The non-intrusive nature of vibration data enables the monitoring of enclosed systems such as gearboxes. It remains a significant challenge to analyze vibration data that are generated under fluctuating operating conditions. This is especially true for situations where relatively little prior knowledge regarding the specific gearbox is available. It is therefore investigated how an adaptive time series model, which is based on Bayesian model selection, may be used to remove the non-fault related components in the structural response of a gear assembly to obtain a residual signal which is robust to fluctuating operating conditions. A statistical framework is subsequently proposed which may be used to interpret the structure of the residual signal in order to facilitate an intuitive understanding of the condition of the gear system. The proposed methodology is investigated on both simulated and experimental data from a single stage gearbox. © 2011 Elsevier Ltd. All rights reserved.
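A hedged sketch of the residual-signal idea described above: a simple autoregressive model (fitted by least squares, not the paper's adaptive Bayesian model selection) absorbs the regular, non-fault-related content of a synthetic vibration signal, so that an impulsive event stands out in the residual.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(5000) / 5000.0
signal = np.sin(2 * np.pi * 37 * t) + 0.1 * rng.standard_normal(t.size)
signal[2500] += 2.0                              # small impulsive "fault" event

order = 20
X = np.column_stack([signal[order - k - 1: -k - 1] for k in range(order)])
y = signal[order:]
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares AR(20) fit
residual = y - X @ coeffs                        # what the model cannot explain
print(np.argmax(np.abs(residual)) + order)       # the impulse dominates the residual
```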
Abstract:
Reinforcement learning techniques have been successfully used to maximise the expected cumulative reward of statistical dialogue systems. Typically, reinforcement learning is used to estimate the parameters of a dialogue policy which selects the system's responses based on the inferred dialogue state. However, the inference of the dialogue state itself depends on a dialogue model which describes the expected behaviour of a user when interacting with the system. Ideally the parameters of this dialogue model should also be optimised to maximise the expected cumulative reward. This article presents two novel reinforcement learning algorithms for learning the parameters of a dialogue model. First, the Natural Belief Critic algorithm is designed to optimise the model parameters while the policy is kept fixed. This algorithm is suitable, for example, in systems using a handcrafted policy, perhaps prescribed by other design considerations. Second, the Natural Actor and Belief Critic algorithm jointly optimises both the model and the policy parameters. The algorithms are evaluated on a statistical dialogue system modelled as a Partially Observable Markov Decision Process in a tourist information domain. The evaluation is performed with a user simulator and with real users. The experiments indicate that model parameters estimated to maximise the expected reward function provide improved performance compared to the baseline handcrafted parameters. © 2011 Elsevier Ltd. All rights reserved.
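A generic policy-gradient (REINFORCE) sketch of the underlying idea, adjusting parameters to maximise expected reward on a toy two-state task; this is only a stand-in and is not the Natural Belief Critic or Natural Actor and Belief Critic algorithm from the article.

```python
import numpy as np

rng = np.random.default_rng(7)
theta = np.zeros((2, 2))                     # state-action preferences

def policy(state):
    logits = theta[state]
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

reward_table = np.array([[1.0, 0.0],         # action 0 is rewarded in state 0
                         [0.0, 1.0]])        # action 1 is rewarded in state 1

alpha = 0.1
for _ in range(2000):
    state = rng.integers(0, 2)
    probs = policy(state)
    action = rng.choice(2, p=probs)
    reward = reward_table[state, action]
    grad = -probs
    grad[action] += 1.0                      # gradient of log pi(action | state)
    theta[state] += alpha * reward * grad    # REINFORCE update
print(policy(0), policy(1))
```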
Abstract:
Statistical dependencies among wavelet coefficients are commonly represented by graphical models such as hidden Markov trees (HMTs). However, in linear inverse problems such as deconvolution, tomography, and compressed sensing, the presence of a sensing or observation matrix produces a linear mixing of the simple Markovian dependency structure. This leads to reconstruction problems that are non-convex optimizations. Past work has dealt with this issue by resorting to greedy or suboptimal iterative reconstruction methods. In this paper, we propose new modeling approaches based on group-sparsity penalties that lead to convex optimizations that can be solved exactly and efficiently. We show that the methods we develop perform significantly better in deconvolution and compressed sensing applications, while being as computationally efficient as standard coefficient-wise approaches such as the lasso. © 2011 IEEE.
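A hedged sketch of the group-sparsity idea: a generic proximal-gradient (ISTA) group-lasso solver applied to simulated linear measurements. It is not the paper's wavelet-HMT-specific formulation, only an illustration of how group penalties keep the reconstruction problem convex.

```python
import numpy as np

rng = np.random.default_rng(6)
n, p, group_size = 80, 120, 4
groups = [np.arange(g, g + group_size) for g in range(0, p, group_size)]

x_true = np.zeros(p)
for g in rng.choice(len(groups), size=3, replace=False):
    x_true[groups[g]] = rng.standard_normal(group_size)
A = rng.standard_normal((n, p)) / np.sqrt(n)
y = A @ x_true + 0.01 * rng.standard_normal(n)

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / Lipschitz constant of the gradient
x = np.zeros(p)
for _ in range(500):
    z = x - step * A.T @ (A @ x - y)              # gradient step on the data-fit term
    for idx in groups:                            # block soft-threshold, group by group
        norm = np.linalg.norm(z[idx])
        z[idx] = max(0.0, 1.0 - step * lam / norm) * z[idx] if norm > 0 else 0.0
    x = z
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))   # relative recovery error
```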