259 results for Minimisation


Relevance:

10.00%

Publisher:

Abstract:

The use of a sodium hypochlorite solution as a cleaning reagent is common practice among many laboratories for contamination minimisation purposes. Whilst its effectiveness in the decontamination of tools and surfaces has been verified at specific concentrations, it has not yet been established whether any residual sodium hypochlorite potentially remaining on tools/surfaces following cleaning has a detrimental effect if direct contact is made with an exhibit containing DNA. To investigate the effect of residual hypochlorite, surfaces were treated with 10% hypochlorite (air-dried or wiped dry), 1% hypochlorite (air-dried or wiped dry), or 1% hypochlorite (wiped dry) followed by the application of water (wiped dry). Treated surfaces came into contact with surfaces carrying 200 ng of DNA within 100 μL, or 20 ng within 20 μL. To observe the potential degrading effects of sodium hypochlorite, the quantity and quality of DNA within DNA deposits following contact with treated and untreated surfaces were compared. Overall, no degrading effect on DNA quantity/quality was observed, with the exception of DNA deposits that came into contact with surfaces treated with 10% hypochlorite and air-dried. It is therefore recommended that surfaces cleaned with high concentrations of hypochlorite be wiped dry or rinsed with an appropriate agent (water) following application.

Relevance:

10.00%

Publisher:

Abstract:

Magnetic Resonance Imaging (MRI) is a widely used technique for acquiring images of human organs and tissues. Due to its complex imaging process, producing a high-quality image is time-consuming. Compressive Sensing (CS) has been used by researchers for rapid MRI; it uses a global sparsity constraint with variable-density random sampling and L1 minimisation. This work intends to speed up the imaging process by exploiting the non-uniform sparsity in MR images. Locally Sparsified CS suggests that the image can be sparsified even further by applying local sparsity constraints, and the image produced by local CS can further reduce the required sample set. This paper establishes the basis for a methodology that exploits the non-uniform nature of sparsity and makes the MRI process time-efficient by using local sparsity constraints.
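To make the idea concrete, here is a minimal sketch (not the paper's method) of L1-minimisation recovery from subsampled Fourier measurements via iterative soft thresholding, with a per-region threshold standing in for the local sparsity constraints. It works on a 1-D sparse signal, whereas real CS-MRI operates on 2-D k-space data with a wavelet sparsifying transform; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 80                                  # signal length, number of k-space samples
x_true = np.zeros(n)
x_true[rng.choice(n, 10, replace=False)] = rng.standard_normal(10)

mask = rng.choice(n, m, replace=False)          # random sampling pattern

def A(x):                                       # forward operator: subsampled FFT
    return np.fft.fft(x, norm="ortho")[mask]

def At(y):                                      # adjoint: zero-filled inverse FFT
    z = np.zeros(n, dtype=complex)
    z[mask] = y
    return np.fft.ifft(z, norm="ortho")

y = A(x_true)                                   # simulated k-space measurements

# Local sparsity constraint as a per-region threshold: a larger value
# where the signal is assumed to be sparser (the split is illustrative).
lam = np.where(np.arange(n) < n // 2, 0.08, 0.02)

x = np.zeros(n, dtype=complex)
for _ in range(300):                            # ISTA: gradient step + soft threshold
    g = x - At(A(x) - y)                        # step size 1 is safe (orthonormal FFT rows)
    mag = np.abs(g)
    x = g / np.maximum(mag, 1e-12) * np.maximum(mag - lam, 0.0)

print("relative error:", np.linalg.norm(x.real - x_true) / np.linalg.norm(x_true))
```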

Relevance:

10.00%

Publisher:

Abstract:

We discuss geometric properties related to the minimisation of a portfolio's kurtosis given its first two odd moments, considering a risk-less asset and allowing for short sales. The findings are generalised to the minimisation of any given even portfolio moment with fixed excess return and skewness, and then to the case in which only the excess return is constrained. An example with two risky assets provides better insight into the problems related to the solutions. The importance of the geometric properties and their use in the higher-moments portfolio choice context is highlighted.
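As a hedged sketch (our notation, not necessarily the paper's), with R the vector of risky excess returns and w the portfolio weights, the base problem can be written as:

```latex
\min_{w \in \mathbb{R}^n} \;
  \mathbb{E}\!\left[\left(w^\top R - \mathbb{E}[w^\top R]\right)^{4}\right]
\quad \text{s.t.} \quad
  \mathbb{E}[w^\top R] = \bar{\mu},
\qquad
  \mathbb{E}\!\left[\left(w^\top R - \mathbb{E}[w^\top R]\right)^{3}\right] = \bar{s}.
```

The generalisation replaces the fourth central moment by any even moment \(\mathbb{E}[(w^\top R - \mathbb{E}[w^\top R])^{2k}]\), and the second variant drops the skewness constraint, keeping only the excess-return target.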

Relevance:

10.00%

Publisher:

Abstract:

In the first part of this paper (Part I), conditions were presented for the gas cleaning technological route for the environomic optimisation of a cogeneration system based on a thermal cycle with municipal solid waste incineration. In this second part, an environomic analysis is presented of a cogeneration system comprising a combined cycle, composed of a gas cycle burning natural gas with a heat recovery steam generator without supplementary firing, and a steam cycle burning municipal solid waste (MSW), to which a pure back-pressure steam turbine and another of pure condensation are added. This analysis aims to select, for a number of scenarios, the best atmospheric pollutant emission control routes (rc) according to criteria of minimisation of investment cost, operating cost and social damage. A comparison is also performed with the results obtained in the case study presented in Part I. (c) 2007 Elsevier Ltd. All rights reserved.

Relevance:

10.00%

Publisher:

Abstract:

Research on advanced technologies for energy generation encompasses a range of alternatives, introduced both in the investigation of new energy sources and in the improvement and/or development of new components and systems. Even though significant reductions in emissions are observed, the proposed alternatives require the use of exhaust-gas cleaning systems. The results of environmental analyses based on two configurations proposed for urban waste incineration are presented in this paper; the introduction of integer (Boolean) variables into the environomic model makes it possible to define the best gas cleaning routes based on exergetic cost minimisation criteria. In this first part, the results for the analysis of a steam cogeneration system associated with the incineration of municipal solid waste (MSW) are presented. (c) 2007 Elsevier Ltd. All rights reserved.
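As a toy illustration of how Boolean route variables work (the actual environomic model is far richer), the sketch below enumerates binary on/off choices over a hypothetical set of cleaning devices and keeps the cheapest combination that meets an emission-abatement target; all device data and the series-abatement assumption are invented for illustration.

```python
from itertools import product

devices = [  # (name, exergetic cost, fractional emission reduction) -- invented data
    ("cyclone",   50.0, 0.30),
    ("scrubber", 120.0, 0.55),
    ("baghouse",  90.0, 0.45),
]
required_reduction = 0.70   # hypothetical abatement target

best = None
for route in product([0, 1], repeat=len(devices)):   # every Boolean route vector
    remaining = 1.0
    for on, (_, _, r) in zip(route, devices):
        if on:
            remaining *= (1.0 - r)                   # devices assumed to act in series
    reduction = 1.0 - remaining
    cost = sum(on * c for on, (_, c, _) in zip(route, devices))
    if reduction >= required_reduction and (best is None or cost < best[0]):
        best = (cost, route)

print(best)   # cheapest feasible route, e.g. (210.0, (0, 1, 1))
```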

Relevance:

10.00%

Publisher:

Abstract:

The history-matching procedure in an oil reservoir is of paramount importance for characterising the reservoir parameters (static and dynamic), which leads to more reliable production forecasts. Throughout this process one seeks reservoir model parameters that are able to reproduce the behaviour of the real reservoir. This reservoir model may then be used to predict production and to aid oil field management. During history matching the reservoir model parameters are modified and, for every new set of parameters found, a fluid-flow simulation is performed to evaluate whether or not the new set reproduces the observations from the actual reservoir. The reservoir is said to be matched when the discrepancies between the model predictions and the observations of the real reservoir are below a certain tolerance. The determination of the model parameters via history matching requires the minimisation of an objective function (the difference between observed and simulated production according to a chosen norm) in a parameter space populated by many local minima; in other words, more than one set of reservoir model parameters fits the observations. Because of this non-uniqueness of the solution, the inverse problem associated with history matching is ill-posed. To reduce this ambiguity, it is necessary to incorporate a priori information and constraints on the reservoir model parameters to be determined. In this dissertation, the regularisation of the inverse problem associated with history matching was performed via the introduction of a smoothness constraint on the following parameters: permeability and porosity. This constraint carries the geological bias that these two properties vary smoothly in space. In this sense, it is necessary to find the relative weight of this constraint in the objective function that stabilises the inversion while introducing minimum bias. A sequential search method called COMPLEX was used to find the reservoir model parameters that best reproduce the observations of a semi-synthetic model; this method does not require derivatives when searching for the minimum of the objective function. It is shown that the judicious introduction of the smoothness constraint into the objective function reduces the associated ambiguity and introduces minimum bias in the estimates of permeability and porosity of the semi-synthetic reservoir model.
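The structure of the regularised objective can be sketched as follows. Nelder-Mead is used here purely as an illustrative derivative-free stand-in for the COMPLEX search, `simulate` is a placeholder for the fluid-flow simulator, and the weight `mu` is the quantity that must be tuned to stabilise the inversion with minimum bias.

```python
import numpy as np
from scipy.optimize import minimize

def simulate(perm):                          # placeholder forward model
    return perm.cumsum()                     # hypothetical "production" response

obs = simulate(np.linspace(1.0, 2.0, 10))    # synthetic observations

def objective(perm, mu=0.1):
    misfit = np.sum((simulate(perm) - obs) ** 2)   # data misfit (L2 norm)
    smooth = np.sum(np.diff(perm) ** 2)            # smoothness constraint on the field
    return misfit + mu * smooth

# Derivative-free minimisation (stand-in for the COMPLEX method)
res = minimize(objective, x0=np.ones(10), method="Nelder-Mead")
print(res.x)
```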

Relevance:

10.00%

Publisher:

Abstract:

Reactive-optimisation procedures are responsible for the minimisation of online power losses in interconnected systems. These procedures are performed separately at each control centre and involve external network representations. If total losses can be minimised by the implementation of calculated local control actions, the entire system benefits economically, but such control actions generally result in a certain degree of inaccuracy, owing to errors in the modelling of the external system. Since these errors are inevitable, they must at least be maintained within tolerable limits by external-modelling approaches. Care must be taken to avoid unrealistic loss minimisation, as the local-control actions adopted can lead the system to points of operation which will be less economical for the interconnected system as a whole. The evaluation of the economic impact of the external modelling during reactive-optimisation procedures in interconnected systems, in terms of both the amount of losses and constraint violations, becomes important in this context. In the paper, an analytical approach is proposed for such an evaluation. Case studies using data from the Brazilian South-Southeast system (810 buses) have been carried out to compare two different external-modelling approaches, both derived from the equivalent-optimal-power-flow (EOPF) model. Results obtained show that, depending on the external-model representation adopted, the loss representation can be flawed. Results also suggest some modelling features that should be adopted in the EOPF model to enhance the economy of the overall system.

Relevance:

10.00%

Publisher:

Abstract:

This work concerns the proposition of a so-called regular or convex solver potential to be used in numerical simulations involving a certain class of constitutive elastic-damage models. All the mathematical aspects involved are based on convex analysis, which is employed aiming at a consistent variational formulation of the potential and its conjugate. It is shown that the constitutive relations for the class of damage models considered here can be derived from the solver potentials by means of sub-differential sets. The optimality conditions of the resulting minimisation problem represent, in particular, a linear complementarity problem. Finally, a simple example is presented to illustrate the integration errors that can be generated when a finite-step analysis is performed. (C) 2003 Elsevier Ltd. All rights reserved.
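For illustration, a linear complementarity problem LCP(q, M) asks for z >= 0 with w = Mz + q >= 0 and z'w = 0. The sketch below solves a tiny instance with projected Gauss-Seidel; the data are arbitrary example values, not drawn from a particular damage model.

```python
import numpy as np

def projected_gauss_seidel(M, q, iters=200):
    # Iterate each component to satisfy its row, projecting onto z >= 0
    z = np.zeros_like(q)
    for _ in range(iters):
        for i in range(len(q)):
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            z[i] = max(0.0, -r / M[i, i])
    return z

M = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite example
q = np.array([-2.0, -6.0])
z = projected_gauss_seidel(M, q)
w = M @ z + q                             # check: z >= 0, w >= 0, z.w ~ 0
print(z, w, z @ w)
```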

Relevance:

10.00%

Publisher:

Abstract:

Piecewise-Linear Programming (PLP) is an important area of Mathematical Programming and concerns the minimisation of a convex separable piecewise-linear objective function, subject to linear constraints. In this paper a subarea of PLP called Network Piecewise-Linear Programming (NPLP) is explored. The paper presents four specialised algorithms for NPLP: (Strongly Feasible) Primal Simplex, Dual Method, Out-of-Kilter and (Strongly Polynomial) Cost-Scaling, and studies their relative efficiency. A statistically designed experiment is used to perform a computational comparison of the algorithms. The response variable observed in the experiment is the CPU time to solve randomly generated network piecewise-linear problems, classified according to problem class (Transportation, Transshipment and Circulation), problem size, extent of capacitation, and number of breakpoints per arc. Results and conclusions on the performance of the algorithms are reported.
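The standard reduction that underlies such problems can be shown on a toy instance: each arc with a convex piecewise-linear cost is split into one variable per linear segment (the segment's slope as cost, its width as capacity), leaving an ordinary linear program; convexity guarantees the cheaper segments fill first. The two-node instance below is illustrative only.

```python
from scipy.optimize import linprog

# Send 4 units from s to t.
# Arc 1 (s->t): cost 1/unit for the first 2 units, 3/unit for the next 3.
# Arc 2 (s->t): cost 2/unit, capacity 3.
c = [1.0, 3.0, 2.0]                 # segment slopes (increasing, by convexity)
A_eq = [[1.0, 1.0, 1.0]]            # the segment flows jointly carry the demand
b_eq = [4.0]
bounds = [(0, 2), (0, 3), (0, 3)]   # segment widths / arc capacity

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x, res.fun)
# Optimal: 2 units on the cheap segment of arc 1, 2 units on arc 2 (total cost 6).
```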

Relevance:

10.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

10.00%

Publisher:

Abstract:

Resistivity measurements are of fundamental importance for calculating oil saturation in potentially productive reservoirs. Combining shallow and deep resistivity measurements makes it possible to obtain the parameters Rt, Rxo and di. In complex reservoirs, however, it is difficult to obtain reliable Rt readings, owing to the low vertical resolution of deep-investigation tools. In laminated reservoirs, for example, readings from the deep induction tool (ILD) can be misinterpreted, suggesting that the log measurements refer to a single layer. This problem can be partly solved by a methodology that improves the vertical resolution of deep-investigation logs using information from a log with high vertical resolution, i.e. the shallow resistivity curve. One approach in this direction is to use a high-resolution log that correlates well with the deep-investigation log. This correlation can be better assessed by applying a filter to the high-resolution log, such that the resulting log theoretically has the same vertical resolution as the low-resolution log. Obtaining this filter, however, rests on the premise that the vertical response functions of the high- and low-resolution tools are available, which is not the case in practice. This work proposes a new approach in which the filter is obtained through processing in the frequency domain. This processing aims to equalise the spectral energy of the high-resolution log with that of the low-resolution log, based on Parseval's theorem. It is shown that vertical resolution depends fundamentally on the spectral energy of the log in question. Next, a linear regression is applied to the filtered high-resolution and low-resolution logs. For each sampled point of the logs, a minimisation routine is applied to choose the best correlation interval between them. Finally, a correction factor is applied to each point of the low-resolution log. The results obtained with induction logs are promising, demonstrating the effectiveness of the approach; even when applied to logs with different petrophysical properties, the methodology performed satisfactorily without degrading the original logs.
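A minimal sketch of one plausible reading of the frequency-domain step (the function name and the cutoff search are our own illustrative choices, not the dissertation's exact procedure): low-pass the high-resolution log until, per Parseval's theorem, its spectral energy matches that of the low-resolution log.

```python
import numpy as np

def match_resolution(hi, lo):
    # Parseval's theorem: the energy of a log is the same in the depth
    # and frequency domains, so matching spectral energies is a proxy
    # for matching vertical resolutions.
    H = np.fft.fft(hi)
    target = np.sum(np.abs(np.fft.fft(lo)) ** 2)
    freqs = np.abs(np.fft.fftfreq(hi.size))
    Hc = H
    for cutoff in np.sort(np.unique(freqs))[::-1]:   # highest cutoff first
        Hc = np.where(freqs <= cutoff, H, 0)         # discard frequencies above cutoff
        if np.sum(np.abs(Hc) ** 2) <= target:        # energies now match (approximately)
            break
    return np.fft.ifft(Hc).real
```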

Relevance:

10.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

10.00%

Publisher:

Abstract:

Traditional procedures for rainfall-runoff model calibration are generally based on fitting the individual values of simulated and observed hydrographs. An alternative option is used here, carried out by matching, in the optimisation process, a set of statistics of the river flow. Such an approach has the additional, significant advantage of also allowing a straightforward regional calibration of the model parameters, based on the regionalisation of the selected statistics. The minimisation of the set of objective functions is carried out using the AMALGAM algorithm, leading to the identification of behavioural parameter sets. The procedure is applied to a set of river basins located in central Italy: the basins are treated alternately as gauged and ungauged and, as a term of comparison, the results obtained with a traditional time-domain calibration are also presented. The results show that a suitable choice of the statistics to be optimised leads to interesting results in real-world case studies as far as the reproduction of the different flow regimes is concerned.
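A minimal sketch of the statistics-based objective: instead of matching the hydrograph point by point, the simulated flow series is compared to a set of observed (or regionalised) flow statistics. The statistics chosen here (mean, standard deviation, 5th/95th percentiles) are illustrative; the paper optimises its own selection with AMALGAM.

```python
import numpy as np

def flow_statistics(q):
    # One entry per selected statistic of the flow series
    return np.array([q.mean(), q.std(),
                     np.percentile(q, 5), np.percentile(q, 95)])

def objective_vector(q_sim, target_stats):
    # One relative-error objective per statistic, minimised jointly
    # by the multi-objective optimiser
    return np.abs(flow_statistics(q_sim) - target_stats) / np.abs(target_stats)
```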

Relevance:

10.00%

Publisher:

Abstract:

Verification assesses the quality of quantitative precipitation forecasts (QPFs) against observations and provides indications of systematic model errors. With the feature-based technique SAL, simulated precipitation distributions are analysed with respect to (S)tructure, (A)mplitude and (L)ocation. For some years, numerical weather prediction models have been run with grid spacings that allow deep convection to be simulated without parameterisation, and the question now arises whether these models deliver better forecasts. The high-resolution hourly observational data set used in this work is a combination of radar and station measurements. On the one hand, using the German COSMO models as an example, it is shown that the latest-generation models simulate the mean diurnal cycle better, albeit with too weak a maximum that occurs somewhat too late; in contrast, the old-generation models produce too strong a maximum that occurs considerably too early. On the other hand, the new type of model achieves a better simulation of the spatial distribution of precipitation through a clear minimisation of the windward/leeward problem. To quantify these subjective assessments, daily QPFs from four models for Germany over an eight-year period were examined with SAL as well as with classical measures. The higher-resolution models simulate more realistic precipitation distributions (better in S), but hardly any difference appears in the other components. A further aspect is that the model with the coarsest resolution (ECMWF) is rated clearly best by the RMSE, which illustrates the 'double penalty' problem. Combining the three SAL components yields the result that, especially in summer, the most finely resolved model (COSMO-DE) performs best, mainly owing to a more realistic structure; SAL thus provides helpful information and confirms the subjective assessment.

In 2007 the COPS and MAP D-PHASE projects took place and offered the opportunity to compare 19 models from three model categories with respect to their forecast performance in south-west Germany, for accumulation periods of 6 and 12 hours. Results particularly worth highlighting are that (i) the smaller the grid spacing of a model, the more realistic the simulated precipitation distributions; (ii) regarding precipitation amount, the high-resolution models simulate less precipitation, i.e. usually too little; and (iii) the location component is simulated worst by all models. The analysis of the forecast performance of these model types for convective situations shows clear differences. In high-pressure situations, the models without convection parameterisation are unable to simulate the convection, whereas the models with convection parameterisation produce the right amount but structures that are too widespread. For convective events associated with fronts, both model types are able to simulate the precipitation distribution, with the high-resolution models delivering more realistic fields. This weather-regime-based investigation is made more systematic by using the convective time scale. A climatology compiled for the first time for Germany shows that the frequency of this time scale decays towards larger values following a power law. The SAL results are dramatically different for the two regimes: for small values of the convective time scale they are good, whereas for large values both structure and amplitude are clearly overestimated.

For precipitation forecasts with very high temporal resolution, the influence of timing errors becomes increasingly important. These errors can be determined by optimising/minimising the L component of SAL within a time window (+/- 3 h) centred on the observation time. It is shown that, at the optimal time shift, the structure and amplitude of the COSMO-DE QPFs improve, which better demonstrates the model's fundamental ability to simulate the precipitation distribution realistically.
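A minimal sketch of the +/-3 h time-shift search: shift the forecast in time and keep the shift that minimises the location error. The centre-of-mass distance used below is only a crude stand-in for the full SAL L component, and the field data and names are illustrative.

```python
import numpy as np

def centre_of_mass(field):
    # Rain-weighted centre of mass of a 2-D precipitation field
    iy, ix = np.indices(field.shape)
    total = field.sum()
    return np.array([(iy * field).sum(), (ix * field).sum()]) / total

def best_time_shift(obs, forecasts, max_shift=3):
    # obs: 2-D rain field at the verification time
    # forecasts: dict mapping shift in hours -> 2-D forecast field
    com_obs = centre_of_mass(obs)
    errors = {s: np.linalg.norm(centre_of_mass(forecasts[s]) - com_obs)
              for s in range(-max_shift, max_shift + 1)}
    return min(errors, key=errors.get)   # shift with the smallest location error
```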

Relevance:

10.00%

Publisher:

Abstract:

Due to its practical importance and inherent complexity, the optimisation of distribution networks for supplying drinking water has been the subject of extensive study for the past 30 years. The optimisation typically concerns sizing the pipes in the water distribution network (WDN), optimising specific parts of the network such as pumps and tanks, or analysing and optimising the reliability of a WDN. In this thesis, the author has analysed two different WDNs (the Anytown and Cabrera city networks), solving a multi-objective optimisation problem (MOOP) in each case. The two main objectives in both cases were the minimisation of energy cost (€) or energy consumption (kWh), along with the total number of pump switches (TNps) during a day. For this purpose, a decision-support system generator for multi-objective optimisation, GANetXL, was used; it was developed by the Centre for Water Systems at the University of Exeter. GANetXL works by calling the EPANET hydraulic solver each time a hydraulic analysis is required. The main algorithm used was NSGA-II, a second-generation multi-objective optimisation algorithm, which produced the Pareto fronts of each configuration. The first experiment carried out was on the Anytown network, a large network whose pumping station of four fixed-speed parallel pumps drives the water dynamics. The main intervention was to replace these pumps with variable-speed-driven pumps (VSDPs), by installing inverters capable of varying their speed during the day. This achieved substantial energy and cost savings, along with a reduction in the number of pump switches. The results are illustrated thoroughly in chapter 7, with comments and a variety of graphs for the different configurations. The second experiment concerned the network of Cabrera city, a smaller WDN with a single fixed-speed (FS) pump. The optimisation problem was the same, namely the minimisation of energy consumption together with the minimisation of TNps, and the same optimisation tool (GANetXL) was used. The main scope was to carry out several different experiments over a wide variety of configurations, using different pumps (this time keeping the FS mode), different tank levels, different pipe diameters and different emitter coefficients. All these different modes produced a large number of results, which are compared in chapter 8. In conclusion, the optimisation of WDNs is a very interesting field with a vast space of options: a large number of algorithms to choose from, different techniques and configurations, and different decision-support system generators. The researcher has to be ready to "roam" among these choices until a satisfactory result shows that a good optimisation point has been reached.
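A minimal sketch of the two objectives, assuming a binary 24-hour pump schedule (1 = pump on). The pump power and tariff values are invented, not taken from the Anytown or Cabrera networks, and a real evaluation would obtain the hydraulics from EPANET via GANetXL rather than from a fixed schedule.

```python
import numpy as np

def objectives(schedule, pump_kw=75.0, tariff=None):
    # schedule: 24 binary values, one per hour (1 = pump on)
    schedule = np.asarray(schedule)
    if tariff is None:
        hours = np.arange(24)
        tariff = np.where((hours >= 8) & (hours < 20), 0.20, 0.10)  # peak/off-peak EUR per kWh (illustrative)
    energy_kwh = pump_kw * schedule.sum()                  # 1-hour time steps
    cost = float(np.sum(pump_kw * schedule * tariff))      # objective 1: energy cost
    switches = int(np.abs(np.diff(schedule)).sum())        # objective 2: TNps (on/off transitions)
    return cost, energy_kwh, switches

# Example: pump runs overnight and briefly at midday
cost, energy, tnps = objectives([1] * 6 + [0] * 6 + [1] * 2 + [0] * 10)
print(cost, energy, tnps)
```

In a multi-objective setting such as NSGA-II, the cost (or energy) and TNps values returned here would form the objective vector of one candidate schedule on the Pareto front.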