933 results for rainfall-runoff empirical statistical model
Abstract:
Estuaries are areas which, by their structure, functioning and location, receive significant inputs of nutrients. One of the objectives of the RNO, the French network for coastal water quality monitoring, is to assess the levels and trends of nutrient concentrations in estuaries. A linear model was used to describe and explain the evolution of total dissolved nitrogen concentration in the three most important estuaries on the Channel-Atlantic front (Seine, Loire and Gironde). As a first step, a reliable data set was selected. Total dissolved nitrogen evolution patterns in the estuarine environment were then studied graphically, which allowed a reasonable choice of covariables. Salinity played a major role in explaining nitrogen concentration variability in the estuaries, and dilution lines proved to be a useful tool both to detect outlying observations and to model the nitrogen/salinity relation. Increasing trends were detected by the model, with a high magnitude in the Seine, intermediate in the Loire, and lower in the Gironde. The non-linear trends estimated in the Loire and Seine estuaries could be due to important interannual variations, as the graphs suggest. With a view to valorising the QUADRIGE database, a discussion of the statistical model and of the RNO hydrological data sampling strategy led to suggestions for a better exploitation of the nutrient data.
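A dilution line of the kind used above can be sketched as an ordinary least-squares fit of nitrogen against salinity, with outliers flagged by their residuals. The following minimal illustration uses synthetic data; the slope, noise level and 3-sigma cut-off are assumptions, not the RNO's values:

```python
import numpy as np

def fit_dilution_line(salinity, nitrogen, z=3.0):
    """Fit a linear 'dilution line' (nitrogen vs. salinity) by least squares
    and flag observations whose residual exceeds z standard deviations."""
    X = np.column_stack([np.ones_like(salinity), salinity])
    beta, *_ = np.linalg.lstsq(X, nitrogen, rcond=None)
    resid = nitrogen - X @ beta
    outliers = np.abs(resid) > z * resid.std(ddof=2)
    return beta, outliers

# Synthetic conservative-mixing data: high nitrogen at low salinity,
# plus three deliberately contaminated observations.
rng = np.random.default_rng(0)
sal = rng.uniform(0, 35, 200)
nit = 120 - 3.0 * sal + rng.normal(0, 2, 200)
nit[:3] += 40
beta, out = fit_dilution_line(sal, nit)
print(int(out.sum()))
```

The fitted intercept approximates the freshwater end-member concentration, and points far above the line are candidates for anthropogenic inputs or measurement error.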
Abstract:
In this thesis, wind wave prediction and analysis in the Southern Caspian Sea are surveyed. This subject has attracted attention from marine stakeholders because of its great importance for reducing loss of life and financial damage in marine activities such as monitoring marine pollution, designing marine structures, shipping, fishing, the offshore industry and tourism. For wave prediction, this study uses Caspian Sea topography data extracted from the Caspian Sea hydrography map of the Iran Armed Forces Geographical Organization, together with 10-meter wind field data extracted from the GTS synoptic data transmitted by regional centers to the Forecasting Center of the Iran Meteorological Organization; for wave analysis, it uses the 20012 wave records collected by the oil company's buoy located 28 kilometers off the Neka shore. The results of this research are as follows. Because of the disagreement between the predictions of the SMB method in the Caspian Sea and the wave data of the Anzali and Neka buoys, the SMB method is not able to predict wave characteristics in the Southern Caspian Sea. Because of the relatively good agreement between the WAM model output in the Caspian Sea and the wave data of the Anzali buoy, the WAM model is able to predict wave characteristics in the Southern Caspian Sea with relatively high accuracy. The extreme wave height distribution function fitted to the Southern Caspian Sea wave data was obtained by determining the free parameters of the Poisson-Gumbel function through the method of moments; these parameters are A = 2.41 and B = 0.33. The maximum relative error between the 4-year return value of Southern Caspian Sea significant wave height estimated by this function and the wave data of the Neka buoy is about 35%. The 100-year return value of Southern Caspian Sea significant wave height is about 4.97 meters.
The maximum relative error between the 4-year return value of Southern Caspian Sea significant wave height estimated by the peaks-over-threshold statistical model and the wave data of the Neka buoy is about 2.28%. The parametric relation fitted to the Southern Caspian Sea frequency spectra was obtained by determining the free parameters of the Strekalov, Massel and Krylov et al. multipeak spectra through a mathematical method; these parameters are A = 2.9, B = 26.26, C = 0.0016, m = 0.19 and n = 3.69. The maximum relative error between the calculated free parameters of the Southern Caspian Sea multipeak spectrum and the free parameters of the double-peaked spectrum proposed by Massel and Strekalov from experimental Caspian Sea data is about 36.1% in the energetic part of the spectrum and about 74% in its high-frequency part. The peaks-over-threshold wave rose of the Southern Caspian Sea shows that the maximum occurrence probability corresponds to waves 2-2.5 meters in height. The error sources in the statistical analysis are mainly due to: 1) the wave data missing over a 2-year period owing to battery discharge of the Neka buoy; and 2) the 15% deviation of the annual mean significant wave height in a single year from the long-period average, caused by the lack of adequate measurements of oceanic waves. The error sources in the spectral analysis are mainly due to the above items and to the low accuracy of the proposed free parameters of the double-peaked spectrum from the experimental Caspian Sea data.
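The return-value estimates above follow the general extreme-value logic; a minimal sketch of an N-year return level under an annual-maximum Gumbel distribution is shown below. The location and scale used here are illustrative only, since the thesis' A and B parametrize a Poisson-Gumbel variant that need not match this convention:

```python
import math

def gumbel_return_level(mu, beta, T):
    """N-year return level of an annual-maximum Gumbel distribution:
    solves F(h) = 1 - 1/T, with F(h) = exp(-exp(-(h - mu)/beta))."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# Illustrative location/scale, not the thesis' fitted parameters.
h100 = gumbel_return_level(mu=2.4, beta=0.33, T=100)
print(round(h100, 2))
```

Longer return periods enter only through the double logarithm, which is why return levels grow slowly with T once the scale parameter is fixed.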
Abstract:
Spent hydroprocessing catalysts (HPCs) are solid wastes generated in refinery industries and typically contain various hazardous metals, such as Co, Ni, and Mo. These wastes cannot be discharged into the environment due to strict regulations and require proper treatment to remove the hazardous substances. Various options have been proposed and developed for spent catalyst treatment; however, hydrometallurgical processes are considered efficient, cost-effective and environmentally friendly methods of metal extraction, and have been widely employed for the uptake of different metals from aqueous leachates of secondary materials. Although there are a large number of studies on hazardous metal extraction from aqueous solutions of various spent catalysts, little information is available on Co, Ni, and Mo removal from spent NiMo hydroprocessing catalysts. In the current study, a solvent extraction process was applied to the spent HPC to specifically remove Co, Ni, and Mo. The spent HPC is dissolved in an acid solution and the metals are then extracted using three different extractants, two of which were amine-based and one of which was a quaternary ammonium salt. The main aim of this study was to develop a hydrometallurgical method to remove, and ultimately be able to recover, Co, Ni, and Mo from the spent HPCs produced at the petrochemical plant in Come By Chance, Newfoundland and Labrador.
The specific objectives of the study were: (1) characterization of the spent catalyst and the acidic leachate; (2) identification of the most efficient leaching agent to dissolve the metals from the spent catalyst; (3) development of a solvent extraction procedure using the amine-based extractants Alamine308 and Alamine336 and the quaternary ammonium salt Aliquat336 in toluene to remove Co, Ni, and Mo from the spent catalyst; (4) selection of the best reagent for Co, Ni, and Mo extraction based on the required contact time, required extractant concentration, and organic:aqueous ratio; and (5) evaluation of the extraction conditions and optimization of the metal extraction process using the Design Expert® software. For the present study, a Central Composite Design (CCD) method was applied as the main method to design the experiments, evaluate the effect of each parameter, provide a statistical model, and optimize the extraction process. Three parameters were considered the most significant factors affecting process efficiency: (i) extractant concentration, (ii) organic:aqueous ratio, and (iii) contact time. Metal extraction efficiencies were calculated based on ICP analysis of the pre- and post-extraction leachates, and the process optimization was conducted with the aid of the Design Expert® software. The obtained results showed that Alamine308 can be considered the most effective and suitable extractant for the spent HPC examined in the study, being capable of removing all three metals to the maximum amounts. Aliquat336 was found to be less effective, especially for Ni extraction; however, it is able to separate all of these metals within the first 10 min, unlike Alamine336, which required more than 35 min to do so.
Based on the results of this study, a cost-effective and environmentally friendly solvent extraction process was achieved that removes Co, Ni, and Mo from the spent HPCs in a short amount of time and with a low extractant concentration. This method can be tested and implemented for other hazardous metals from other secondary materials as well. Further investigation may be required; however, the results of this study can serve as a guide for future research on similar metal extraction processes.
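A Central Composite Design of the kind used above can be generated without specialised software. The sketch below builds the coded design matrix for a standard rotatable CCD; the actual run counts, centre-point replication and axial distance used with Design Expert® may differ:

```python
import itertools
import numpy as np

def central_composite(k, alpha=None):
    """Coded design matrix for a rotatable CCD with k factors:
    2^k factorial points, 2k axial (star) points, and one centre point."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25   # rotatability criterion
    factorial = np.array(list(itertools.product([-1, 1], repeat=k)), float)
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = -alpha
        axial[2 * i + 1, i] = alpha
    centre = np.zeros((1, k))
    return np.vstack([factorial, axial, centre])

# Three factors: extractant concentration, organic:aqueous ratio, contact time.
design = central_composite(3)
print(design.shape)
```

For three factors this yields 8 factorial, 6 axial and 1 centre run in coded units; each coded level is then mapped back to a physical setting of the corresponding factor.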
Abstract:
The poultry sector currently faces two very stimulating challenges. The first stems from the increase, expected to continue, in demand for poultry meat on the domestic and international markets; the second stems from the fact that poultry farming has adopted more intensive production methods (kg live weight/m2/year) on a larger scale, i.e. with a greater concentration of animals on the same farm. This markedly "industrial" character has naturally drawn the attention of society and of the livestock authorities, so that this economy of scale is counterbalanced by a set of legal and technical instruments safeguarding the birds as living beings. The starting point of the present work is Council Directive 2007/43/EC of 28 June, laying down minimum rules for the protection of chickens kept for meat production. Since there is not yet sufficient information on how the quality of animal handling can be monitored at slaughter, by official veterinarians and auxiliaries, in specially reared chickens produced under the models defined in Regulation (EC) No 543/2008, studies in this field are urgently needed. The main objective of this field work was to study the occurrence of plantar contact dermatitis (pododermatitis) and of lesions of the pre-sternal synovial bursa in chickens produced in systems considered "protective" of animal welfare, namely: i) free-range; and ii) extensive indoor. The study was carried out at a free-range chicken slaughterhouse in Oliveira de Frades, between May 2012 and March 2013. The slaughtered animals were reared on farms with integration contracts located in the Viseu District. Data were collected from 39 different flocks of the species Gallus domesticus, of which 1021 carcasses were evaluated after evisceration, corresponding to the examination of one in every fifteen birds on the slaughter line.
Pododermatitis was assessed using the method adapted by the DGAV, while sternal bursitis was assessed according to the model applied to turkeys by Berk in 2002. Although the statistical model developed for the analysis of the results would require a larger number of observations, it was possible to identify with good precision some risk factors that deserve emphasis for their relevance to the production systems under scrutiny or to the pathophysiological mechanism of contact dermatitis, namely: (i) the age of the birds: although no direct relationship with pododermatitis and bursitis scores was identified, the advanced age that animals typically reach in extensive production systems was found to be associated with a higher rate of rejections at sanitary inspection; (ii) pre-slaughter weight: notwithstanding the inconsistency reported by several authors regarding the influence of live weight on contact dermatitis in industrial broilers, in animals produced under extensive systems this variable may be a key factor in the occurrence of this lesion. Indeed, the weight of these animals is of central importance in shaping the bird's biomechanics, including the pressure exerted on the plantar surface; (iii) the type of drinking system: the choice of drinker was shown to have a particular influence on the occurrence of pododermatitis in free-range chickens, probably related to its effect on the moisture content of the litter. Overall, the frequencies of pododermatitis and bursitis found in this work must be considered worrying.
This concern grows when one realizes that the birds came from systems considered "friendly" and "sustainable"; it is therefore urgent to monitor those production systems adequately, improve their conditions and reassess their benefits in terms of animal welfare.
Abstract:
Five years of SMOS L-band brightness temperature data intercepting a large number of tropical cyclones (TCs) are analyzed. The storm-induced half-power radio-brightness contrast (ΔI) is defined as the difference between the brightness observed at a specific wind force and that for a smooth water surface with the same physical parameters. ΔI can be related to surface wind speed and has been estimated for ~300 TCs intercepted by SMOS measurements. ΔI, expressed in a common storm-centric coordinate system, shows that the mean brightness contrast increases monotonically with storm intensity, ranging from ~5 K for strong storms to ~24 K for the most intense Category 5 TCs. A remarkable feature of the 2D mean ΔI fields and their variability is that maxima are systematically found in the right quadrants of the storms in the storm-centered coordinate frame, consistent with the reported asymmetric structure of the wind and wave fields in hurricanes. These results highlight the strong potential of SMOS measurements to improve monitoring of TC intensification and evolution. An improved empirical geophysical model function (GMF) was derived using a large ensemble of co-located SMOS ΔI, aircraft and H*WIND (a multi-measurement analysis) surface wind speed data. The GMF reveals a quadratic relationship between ΔI and the surface wind speed at a height of 10 m (U10). ECMWF and NCEP analysis products and SMOS-derived wind speed estimates are compared to a large ensemble of H*WIND 2D fields. This analysis confirms that the surface wind speed in TCs can effectively be retrieved from SMOS data with an RMS error on the order of 10 kt up to 100 kt. SMOS wind speed products above hurricane force (64 kt) are found to be more accurate than those derived from NWP analysis products, which systematically underestimate the surface wind speed in these extreme conditions.
Using co-located estimates of rain rate, we show that the L-band radio-brightness contrasts could be weakly affected by rain or ice-phase clouds and further work is required to refine the GMF in this context.
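Wind retrieval from such a quadratic GMF amounts to inverting ΔI = f(U10). The sketch below assumes a pure-quadratic form with an illustrative coefficient; the paper's fitted GMF coefficients are not reproduced here:

```python
import math

def u10_from_contrast(delta_i, a=0.0045):
    """Invert an assumed quadratic GMF, delta_i = a * U10**2, to recover the
    10-m wind speed U10 (m/s) from a brightness contrast delta_i (K).
    The coefficient `a` is an illustrative placeholder."""
    if delta_i < 0:
        raise ValueError("brightness contrast must be non-negative")
    return math.sqrt(delta_i / a)

# With this placeholder coefficient, a ~24 K contrast maps to ~73 m/s.
print(round(u10_from_contrast(24.0), 1))
```

Because the relation is quadratic, the retrieval's sensitivity dU10/dΔI shrinks at high winds, which is one reason L-band contrasts remain usable up to extreme wind speeds.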
Abstract:
The prognosis of tooth loss is one of the main problems in clinical dental practice. One of the main prognostic factors is the amount of bony support of the tooth, defined by the intraosseous root surface area. This quantity has been estimated with different research methodologies, with heterogeneous results. In this work we used planimetry with microtomography to calculate the root surface area (RSA) of a sample of five mandibular second premolars obtained from the Portuguese population, with the final objective of creating a statistical model to estimate the intraosseous root surface area from clinical indicators of bone loss. Finally, we propose a method for applying the results in practice. The data on root surface area, total tooth length (CT) and maximum mesio-distal crown dimension (MDeq) were used to establish the statistical relations between the variables and to define a multivariate normal distribution. A sample of 37 simulated observations was then generated from the defined multivariate normal distribution, statistically identical to the data of the five-tooth sample. Five generalized linear models were fitted to the simulated data. The statistical model was selected according to criteria of goodness of fit, predictability, statistical power, parameter accuracy and information loss, and was validated by graphical residual analysis. Based on the results, we propose a three-stage method for estimating lost/remaining root surface area.
In the first stage the statistical model is used to estimate the root surface area; in the second, the proportion (in deciles) of intraosseous root is estimated using an adapted Schei ruler; and in the third, the value obtained in the first stage is multiplied by a coefficient representing the proportion of root lost (ASRp) or remaining (ASRr) for the decile estimated in the second stage. The strength of this study was the application of validated statistical methodology to operationalize clinical data in the estimation of lost bony support. Its weaknesses are that the results apply only to mandibular second premolars and that clinical validation is lacking.
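The simulate-then-fit procedure described above can be sketched as follows. The mean vector and covariance matrix are hypothetical stand-ins for the estimates derived from the five-tooth sample, and the Gaussian GLM with identity link reduces to ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical moments for (RSA mm^2, tooth length CT mm, mesio-distal
# crown width MDeq mm) -- illustrative values, not the study's estimates.
mean = np.array([230.0, 22.0, 7.0])
cov = np.array([[400.0, 28.0, 5.0],
                [28.0, 4.0, 0.3],
                [5.0, 0.3, 0.25]])

# Draw 37 simulated observations from the multivariate normal.
sample = rng.multivariate_normal(mean, cov, size=37)

# Gaussian GLM with identity link (i.e. OLS): RSA ~ CT + MDeq.
y = sample[:, 0]
X = np.column_stack([np.ones(37), sample[:, 1:]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta.shape)
```

In practice the fitted coefficients would then be applied to a patient's measured CT and MDeq to yield the stage-one RSA estimate.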
Abstract:
Master's degree in Economics and Management of Science, Technology and Innovation
Abstract:
Coprime and nested sampling are well known deterministic sampling techniques that operate at rates significantly lower than the Nyquist rate, and yet allow perfect reconstruction of the spectra of wide sense stationary signals. However, theoretical guarantees for these samplers assume ideal conditions such as synchronous sampling and the ability to compute statistical expectations perfectly. This thesis studies the performance of coprime and nested samplers in the spatial and temporal domains when these assumptions are violated. In the spatial domain, the robustness of these samplers is studied by considering arrays with perturbed sensor locations (with unknown perturbations). Simplified expressions for the Fisher Information matrix for perturbed coprime and nested arrays are derived, which explicitly highlight the role of the co-array. It is shown that even in the presence of perturbations, it is possible to resolve $O(M^2)$ sources under appropriate conditions on the size of the grid. The assumption of small perturbations leads to a novel "bi-affine" model in terms of source powers and perturbations. The redundancies in the co-array are then exploited to eliminate the nuisance perturbation variable and reduce the bi-affine problem to a linear underdetermined (sparse) problem in source powers. This thesis also studies the robustness of coprime sampling to a finite number of samples and to sampling jitter, by analyzing their effects on the quality of the estimated autocorrelation sequence. A variety of bounds on the error introduced by such non-ideal sampling schemes are computed by considering a statistical model for the perturbation. They indicate that coprime sampling leads to stable estimation of the autocorrelation sequence in the presence of small perturbations. Under appropriate assumptions on the distribution of WSS signals, sharp bounds on the estimation error are established which indicate that the error decays exponentially with the number of samples.
The theoretical claims are supported by extensive numerical experiments.
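The co-array idea underlying these guarantees can be illustrated numerically: for coprime (M, N), the differences between the two sparse sampling grids cover every lag up to MN, which is what makes dense autocorrelation estimation possible from sub-Nyquist samples. The sketch follows the extended coprime construction of Pal and Vaidyanathan:

```python
def coprime_lag_set(M, N):
    """Lags reachable by an extended coprime sampler: differences
    M*n - N*m with n in [0, 2N-1] and m in [0, M-1], taken in magnitude."""
    diffs = {M * n - N * m for n in range(2 * N) for m in range(M)}
    return {abs(d) for d in diffs}

# With coprime (M, N) = (3, 5), every lag 0..M*N = 15 is obtained even
# though each individual grid is sparse.
lags = coprime_lag_set(3, 5)
print(set(range(16)) <= lags)
```

Each lag in the difference set corresponds to a pair of samples whose product contributes to the autocorrelation estimate at that lag; redundant pairs are averaged, which is what the finite-sample bounds in the thesis quantify.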
Abstract:
Deep bed filtration occurs in several industrial and environmental processes, such as water filtration and soil contamination. In the petroleum industry, deep bed filtration occurs near injection wells during water injection, causing injectivity reduction. It also takes place during well drilling, sand production control, produced water disposal in aquifers, etc. Particle capture in porous media can be caused by different physical mechanisms (size exclusion, electrical forces, bridging, gravity, etc.). A statistical model for filtration in porous media is proposed and analytical solutions for suspended and retained particles are derived. The model, which incorporates particle retention probability, is compared with the classical deep bed filtration model, allowing a physical interpretation of the filtration coefficients. Comparison of the obtained analytical solutions for the proposed model with the classical model solutions leads to the conclusion that the larger the particle capture probability, the larger the discrepancy between the proposed and the classical models.
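For reference, the classical deep bed filtration model against which the proposed model is compared predicts an exponential decay of suspended particle concentration with depth, governed by the filtration coefficient. A one-line numerical sketch (the values of c0 and the filtration coefficient are illustrative):

```python
import math

def suspended_profile(c0, lam, x):
    """Classical deep bed filtration: steady-state suspended particle
    concentration decays exponentially with depth, c(x) = c0 * exp(-lam*x),
    where lam is the filtration coefficient (1/length)."""
    return c0 * math.exp(-lam * x)

# With lam = 0.1 per meter, the concentration halves every ln(2)/0.1 ≈ 6.93 m.
half_depth = math.log(2) / 0.1
print(round(suspended_profile(1.0, 0.1, half_depth), 3))
```

The statistical model's capture probability effectively modulates this coefficient, which is why the two models diverge as the capture probability grows.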
Abstract:
Several deterministic and probabilistic methods are used to evaluate the probability of seismically induced liquefaction of a soil. Probabilistic models usually carry uncertainty in the model itself and in the parameters used to develop it, and these model uncertainties vary from one statistical model to another. Most of the model uncertainties are epistemic and can be addressed through appropriate knowledge of the statistical model. One such epistemic model uncertainty in evaluating liquefaction potential using a probabilistic model such as logistic regression is sampling bias: the difference between the class distribution in the sample used for developing the statistical model and the true population distribution of liquefaction and non-liquefaction instances. Recent studies have shown that sampling bias can significantly affect the probability predicted by a statistical model. To address this epistemic uncertainty, a new approach was developed for evaluating the probability of seismically induced soil liquefaction, in which a logistic regression model was used in combination with the Hosmer-Lemeshow statistic. This approach was used to estimate the population (true) distribution of liquefaction to non-liquefaction instances in the most up-to-date case histories based on the standard penetration test (SPT) and the cone penetration test (CPT). In addition, other model uncertainties, such as the distribution of explanatory variables and the significance of explanatory variables, were addressed using the KS test and the Wald statistic, respectively. Moreover, based on the estimated population distribution, logistic regression equations were proposed to calculate the probability of liquefaction for both SPT- and CPT-based case histories. Finally, the proposed probability curves were compared with existing probability curves based on SPT and CPT case histories.
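The sampling-bias issue described above is often handled by a prior (intercept) correction to the fitted logit, in the style of King and Zeng's correction for case-control sampling. A minimal sketch follows; the 50/50 sample and 20% population rate are illustrative, not the study's estimates:

```python
import math

def corrected_probability(logit, sample_rate, population_rate):
    """Prior-correction for sampling bias in logistic regression: shift the
    fitted logit by ln[(r_s/(1-r_s)) / (r_p/(1-r_p))], where r_s is the
    liquefaction fraction in the training sample and r_p the estimated
    population fraction, then map back through the logistic function."""
    offset = math.log(sample_rate / (1.0 - sample_rate)) \
           - math.log(population_rate / (1.0 - population_rate))
    z = logit - offset
    return 1.0 / (1.0 + math.exp(-z))

# A model trained on a balanced 50/50 sample, applied where the true
# liquefaction rate is ~20%: a raw 50% prediction deflates to 20%.
p = corrected_probability(logit=0.0, sample_rate=0.5, population_rate=0.2)
print(round(p, 3))
```

Only the intercept changes under this correction; the slope coefficients on SPT/CPT resistance and demand variables are unaffected.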
Abstract:
The K-means algorithm is one of the most popular clustering algorithms in current use as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm which we call MAP-DP (maximum a-posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a-priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example, binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism.
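For contrast with MAP-DP, the baseline K-means (Lloyd's) algorithm can be written in a few lines; note that K must be supplied in advance, which is precisely the assumption MAP-DP removes. The initialisation and synthetic data below are illustrative:

```python
import numpy as np

def kmeans(X, K, iters=100):
    """Plain Lloyd's algorithm: alternate nearest-centre assignment and
    centre recomputation. K is fixed a priori, and clusters are implicitly
    assumed spherical -- the restrictions MAP-DP relaxes."""
    centres = X[:: len(X) // K][:K]  # simple deterministic initialisation
    for _ in range(iters):
        d = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        new = np.array([X[labels == k].mean(0) for k in range(K)])
        if np.allclose(new, centres):
            break
        centres = new
    return labels, centres

# Two well-separated Gaussian blobs; K = 2 must be chosen by the user.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(3, 0.1, (50, 2))])
labels, centres = kmeans(X, 2)
print(sorted(centres[:, 0].round(1).tolist()))
```

On data with unequal cluster spreads, non-spherical shapes, or outliers, this same loop misassigns points; MAP-DP replaces the hard squared-distance rule with a Dirichlet-process posterior criterion that also infers K.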
Abstract:
The Dirichlet process mixture model (DPMM) is a ubiquitous, flexible Bayesian nonparametric statistical model. However, full probabilistic inference in this model is analytically intractable, so that computationally intensive techniques such as Gibbs sampling are required. As a result, DPMM-based methods, which have considerable potential, are restricted to applications in which computational resources and time for inference are plentiful. For example, they would not be practical for digital signal processing on embedded hardware, where computational resources are at a serious premium. Here, we develop a simplified yet statistically rigorous approximate maximum a-posteriori (MAP) inference algorithm for DPMMs. This algorithm is as simple as DP-means clustering, solves the MAP problem as well as Gibbs sampling, while requiring only a fraction of the computational effort. (For freely available code that implements the MAP-DP algorithm for Gaussian mixtures see http://www.maxlittle.net/.) Unlike related small variance asymptotics (SVA), our method is non-degenerate and so inherits the “rich get richer” property of the Dirichlet process. It also retains a non-degenerate closed-form likelihood which enables out-of-sample calculations and the use of standard tools such as cross-validation. We illustrate the benefits of our algorithm on a range of examples and contrast it to variational, SVA and sampling approaches from both a computational complexity perspective as well as in terms of clustering performance. We demonstrate the wide applicability of our approach by presenting an approximate MAP inference method for the infinite hidden Markov model whose performance contrasts favorably with a recently proposed hybrid SVA approach.
Similarly, we show how our algorithm can be applied to a semiparametric mixed-effects regression model where the random effects distribution is modelled using an infinite mixture model, as used in longitudinal progression modelling in population health science. Finally, we propose directions for future research on approximate MAP inference in Bayesian nonparametrics.
Abstract:
The paper discusses the observed and projected warming in the Caucasus region and its implications for glacier melt and runoff. A strong positive trend in summer air temperatures of 0.05 degrees C per year is observed in the high-altitude areas, driving strong glacier melt and a continuous decline in glacier mass balance. A warming of 4-7 degrees C and 3-5 degrees C is projected for the summer months in 2071-2100 under the A2 and B2 emission scenarios, respectively, suggesting that enhanced glacier melt can be expected. The expected changes in winter precipitation will not compensate for the summer melt, and glacier retreat is likely to continue. However, a projected small increase in both winter and summer precipitation, combined with the enhanced glacier melt, will result in summer runoff in the currently glaciated region of the Caucasus increasing by more than 50% compared with the baseline period (independent of whether the region is still glaciated at the end of the twenty-first century).
Abstract:
In the scope of the European project Hydroptimet, INTERREG IIIB-MEDOCC programme, a limited area model (LAM) intercomparison of intense events that caused severe damage to people and property is performed. As the comparison is limited to single case studies, the work is not meant to provide a measure of the different models' skill, but to identify the key model factors useful for giving a good forecast of this kind of meteorological phenomenon. This work focuses on the Spanish flash-flood event also known as the "Montserrat-2000" event. The study is performed using forecast data from seven operational LAMs, placed at the partners' disposal via the Hydroptimet ftp site, and observed data from the Catalonia rain gauge network. To improve the event analysis, satellite rainfall estimates have also been considered. For statistical evaluation of quantitative precipitation forecasts (QPFs), several non-parametric skill scores based on contingency tables have been used. Furthermore, for each model run it has been possible to identify the Catalonia regions affected by misses and false alarms using the contingency table elements. Moreover, the standard "eyeball" analysis of forecast and observed precipitation fields has been supported by the use of a state-of-the-art diagnostic method, the contiguous rain area (CRA) analysis. This method allows the spatial shift of the forecast error to be quantified and the error sources affecting each model's forecasts to be identified. High-resolution modelling and domain size seem to play a key role in providing a skillful forecast. Further work is needed to support this statement, including verification using a wider observational data set.
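The contingency-table skill scores mentioned above (e.g. probability of detection, false alarm ratio, critical success index, equitable threat score) can be computed directly from the 2x2 table of hits, misses, false alarms and correct negatives; the counts below are illustrative, not the Hydroptimet verification results:

```python
def skill_scores(hits, misses, false_alarms, correct_negatives):
    """Common non-parametric QPF skill scores from a 2x2 contingency table."""
    n = hits + misses + false_alarms + correct_negatives
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    csi = hits / (hits + misses + false_alarms)  # critical success index
    hits_random = (hits + misses) * (hits + false_alarms) / n
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    return pod, far, csi, ets

# Illustrative verification of a rain/no-rain threshold over 200 stations.
pod, far, csi, ets = skill_scores(40, 10, 20, 130)
print(round(pod, 2), round(far, 3), round(csi, 3), round(ets, 3))
```

The equitable threat score discounts the hits expected by chance, which makes it the usual headline score when comparing models over events with very different rain coverage.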
Abstract:
Methods used to analyze one type of nonstationary stochastic process, the periodically correlated process, are considered. Two methods of one-step-forward prediction of periodically correlated time series are examined. One-step-forward predictions made with an autoregression model and with an artificial neural network model with one hidden neuron layer and a moving-window adaptation mechanism for the network parameters were compared in terms of efficiency. The comparison showed that, for one-step-ahead prediction of time series of mean monthly water discharge, the simpler autoregression model is more efficient.
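The autoregressive one-step forecast referred to above can be sketched as a least-squares AR(1) fit; the model order and the synthetic series here are illustrative, and the paper's models (with periodic correlation and window adaptation) are richer:

```python
import numpy as np

def ar1_one_step(x):
    """One-step-forward forecast from an AR(1) model fitted by least
    squares: estimate a, b in x[t+1] ≈ a*x[t] + b, then forecast from the
    last observation."""
    X = np.column_stack([x[:-1], np.ones(len(x) - 1)])
    (a, b), *_ = np.linalg.lstsq(X, x[1:], rcond=None)
    return a * x[-1] + b

# A noiseless AR(1) series x[t+1] = 0.8*x[t] + 1 is recovered exactly,
# so the forecast matches the true next value.
x = [10.0]
for _ in range(30):
    x.append(0.8 * x[-1] + 1.0)
pred = ar1_one_step(np.array(x))
print(round(pred, 4))
```

For periodically correlated series such as monthly discharge, one AR model per calendar month (or periodic AR coefficients) is the natural extension of this fit.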