877 results for Non-stationary iterative method
Abstract:
2000 Mathematics Subject Classification: Primary 60J80, Secondary 62F12, 60G99.
Abstract:
Firms worldwide are taking major initiatives to reduce the carbon footprint of their supply chains in response to growing governmental and consumer pressure. In real life, these supply chains face stochastic and non-stationary demand, but most studies of the inventory lot-sizing problem with emission concerns assume deterministic demand. In this paper, we study the inventory lot-sizing problem under non-stationary stochastic demand with emission and cycle-service-level constraints, considering a carbon cap-and-trade regulatory mechanism. Using a mixed-integer linear programming model, this paper investigates the effects of emission parameters and of product- and system-related features on supply chain performance through extensive computational experiments covering general business settings rather than a specific scenario. Results show that the cycle service level and the demand coefficient of variation have significant impacts on total cost and total emissions irrespective of the level of demand variability, while the impact of the product's demand pattern is significant only at lower levels of demand variability. Finally, results also show that an increasing carbon price reduces total cost, total emissions and total inventory, and that the scope for emission reduction through a higher carbon price is greater at higher cycle service levels and demand coefficients of variation. The analysis of the results helps supply chain managers make the right decisions in different demand and service-level situations.
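The cap-and-trade mechanism described above can be illustrated with a toy cost function: emissions above the cap are bought at the carbon price and unused allowance is sold at the same price. This is a hedged sketch with illustrative names and a linear trading term, not the paper's MILP formulation.

```python
def total_cost(order_costs, holding_costs, emissions, carbon_cap, carbon_price):
    """Operational cost plus carbon trading under cap-and-trade.
    Emissions above the cap are bought at carbon_price; unused
    allowance is sold at the same price (a negative trading cost).
    All names and the linear trading term are illustrative."""
    operational = sum(order_costs) + sum(holding_costs)
    trading = carbon_price * (sum(emissions) - carbon_cap)
    return operational + trading
```

Note how a higher carbon price magnifies the trading term, which is the mechanism behind the sensitivity results reported in the abstract.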
Abstract:
The atmospheric seasonal cycle of the North Atlantic region is dominated by meridional movements of the circulation systems: from the tropics, where the West African Monsoon and extreme tropical weather events take place, to the extratropics, where the circulation is dominated by seasonal changes in the jet stream and extratropical cyclones. Climate variability over the North Atlantic is controlled by various mechanisms. Atmospheric internal variability plays a crucial role in the mid-latitudes. However, El Niño-Southern Oscillation (ENSO) is still the main source of predictability in this region situated far away from the Pacific. Although the ENSO influence over tropical and extra-tropical areas is related to different physical mechanisms, in both regions this teleconnection seems to be non-stationary in time and modulated by multidecadal changes of the mean flow. Nowadays, long observational records (greater than 100 years) and modeling projects (e.g., CMIP) make it possible to detect non-stationarities in the influence of ENSO over the Atlantic basin and to further analyze their potential mechanisms. The present article reviews the ENSO influence over the Atlantic region, paying special attention to the stability of this teleconnection over time and to its possible modulators. Evidence is given that the ENSO-Atlantic teleconnection is weak over the North Atlantic. In this regard, multidecadal ocean variability seems to modulate the presence of teleconnections, which can lead to important ENSO impacts and open windows of opportunity for seasonal predictability.
Abstract:
Recent developments have led researchers to reconsider Lagrangian measurement techniques as an alternative to their Eulerian counterparts when investigating non-stationary flows. This thesis advances the state-of-the-art of Lagrangian measurement techniques by pursuing three different objectives: (i) developing new Lagrangian measurement techniques for difficult-to-measure, in situ flow environments; (ii) developing new post-processing strategies designed for unstructured Lagrangian data, as well as providing guidelines for their use; and (iii) demonstrating the advantages that the Lagrangian framework has over the Eulerian one in various non-stationary flow problems. Towards the first objective, a large-scale particle tracking velocimetry apparatus is designed for atmospheric surface layer measurements. Towards the second objective, two techniques are developed, one for identifying Lagrangian Coherent Structures (LCS) and the other for characterizing entrainment directly from unstructured Lagrangian data. Finally, towards the third objective, the advantages of Lagrangian-based measurements are showcased in two unsteady flow problems: the atmospheric surface layer, and entrainment in a non-stationary turbulent flow. Through developing new experimental and post-processing strategies for Lagrangian data, and through showcasing the advantages of Lagrangian data in various non-stationary flows, the thesis helps investigators adopt Lagrangian-based measurement techniques more easily.
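The thesis's LCS technique works directly on unstructured Lagrangian data; as background, a common gridded baseline for LCS identification is the finite-time Lyapunov exponent (FTLE) field, whose ridges mark candidate LCS. A minimal sketch assuming trajectories seeded on a regular grid (all names are illustrative, and this is not the thesis's unstructured method):

```python
import numpy as np

def ftle(x0, y0, x1, y1, T):
    """FTLE on a regular grid of initial conditions (x0, y0)
    advected to final positions (x1, y1) over a time horizon T."""
    # Deformation gradient via finite differences on the initial grid
    dx1_dx0 = np.gradient(x1, x0[0, :], axis=1)
    dx1_dy0 = np.gradient(x1, y0[:, 0], axis=0)
    dy1_dx0 = np.gradient(y1, x0[0, :], axis=1)
    dy1_dy0 = np.gradient(y1, y0[:, 0], axis=0)
    out = np.zeros_like(x0)
    for i in range(x0.shape[0]):
        for j in range(x0.shape[1]):
            F = np.array([[dx1_dx0[i, j], dx1_dy0[i, j]],
                          [dy1_dx0[i, j], dy1_dy0[i, j]]])
            C = F.T @ F                      # Cauchy-Green strain tensor
            lam = np.linalg.eigvalsh(C)[-1]  # largest eigenvalue
            out[i, j] = np.log(lam) / (2.0 * T)
    return out
```

For the linear saddle flow x' = x, y' = -y the field is uniform and equal to 1, which makes a convenient sanity check.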
Abstract:
Min/max autocorrelation factor analysis (MAFA) and dynamic factor analysis (DFA) are complementary techniques for analysing short (> 15-25 y), non-stationary, multivariate data sets. We illustrate the two techniques using catch rate (cpue) time-series (1982-2001) for 17 species caught during trawl surveys off Mauritania, with the NAO index, an upwelling index (UPW), sea surface temperature (SST), and an index of fishing effort as explanatory variables. Both techniques gave coherent results, the most important common trend being a decrease in cpue during the latter half of the time-series, and the next most important being an increase during the first half. A DFA model with SST and UPW as explanatory variables and two common trends gave good fits to most of the cpue time-series. (c) 2004 International Council for the Exploration of the Sea. Published by Elsevier Ltd. All rights reserved.
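As a rough illustration of the MAFA idea (not the authors' implementation): min/max autocorrelation factors can be obtained from a generalized eigenproblem pitting the covariance of the first differences against the covariance of the series, so that the leading factor is the smoothest, most autocorrelated linear combination of the series. A hedged numpy sketch:

```python
import numpy as np

def maf(Z):
    """Min/max autocorrelation factor loadings of a (time x variables)
    matrix Z, ordered from most to least autocorrelated factor.
    Sketch only; assumes many more time steps than variables."""
    Zc = Z - Z.mean(axis=0)
    S = np.cov(Zc, rowvar=False)                     # covariance of the series
    Sd = np.cov(np.diff(Zc, axis=0), rowvar=False)   # covariance of differences
    # Small eigenvalues of S^-1 Sd correspond to smooth factors
    evals, evecs = np.linalg.eig(np.linalg.solve(S, Sd))
    order = np.argsort(evals.real)
    return evecs.real[:, order]
```

On synthetic data with a smooth shared trend plus noise, the first factor recovers the trend direction.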
Abstract:
Conselho Nacional do Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Let us consider a large set of candidate parameter fields, such as hydraulic conductivity maps, on which we can run an accurate forward flow and transport simulation. We address the issue of rapidly identifying a subset of candidates whose response best matches a reference response curve. In order to keep the number of calls to the accurate flow simulator computationally tractable, a recent distance-based approach relying on fast proxy simulations is revisited and turned into a non-stationary kriging method where the covariance kernel is obtained by combining a classical kernel with the proxy. Once the accurate simulator has been run for an initial subset of parameter fields and a kriging metamodel has been inferred, the predictive distributions of misfits for the remaining parameter fields can be used as a guide to select candidate parameter fields sequentially. The proposed algorithm, Proxy-based Kriging for Sequential Inversion (ProKSI), relies on a variant of the Expected Improvement, a popular criterion for kriging-based global optimization. A statistical benchmark of ProKSI's performance illustrates the efficiency and robustness of the approach when using different kinds of proxies.
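ProKSI relies on a variant of the Expected Improvement; the classical criterion it builds on can be written down directly from the kriging predictive mean and standard deviation. A minimal sketch for a minimization problem (function and argument names are illustrative, and this is the standard EI, not the paper's exact variant):

```python
import math

def expected_improvement(mu, sigma, best):
    """Expected Improvement of a Gaussian prediction N(mu, sigma^2)
    over the current best (smallest) observed misfit `best`."""
    if sigma <= 0.0:
        return max(best - mu, 0.0)   # deterministic prediction
    z = (best - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))       # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    return (best - mu) * cdf + sigma * pdf
```

A candidate with mean misfit equal to the current best but high predictive uncertainty still gets positive EI, which is what drives the sequential exploration.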
Abstract:
2002 Mathematics Subject Classification: 65C05
Abstract:
Kozlov & Maz'ya (1989, Algebra Anal., 1, 144–170) proposed an alternating iterative method for solving Cauchy problems for general strongly elliptic and formally self-adjoint systems. However, in many applied problems, operators appear that do not satisfy these requirements, e.g. Helmholtz-type operators. Therefore, in this study, an alternating procedure for solving Cauchy problems for self-adjoint non-coercive elliptic operators of second order is presented. A convergence proof of this procedure is given.
Abstract:
The code STATFLUX, implementing a new and simple statistical procedure for the calculation of transfer coefficients in radionuclide transport to animals and plants, is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. Flow parameters were estimated by employing two different least-squares procedures, the derivative and Gauss-Marquardt methods, with the available experimental data of radionuclide concentrations as the input functions of time. The solution of the inverse problem, which relates a given set of flow parameters to the time evolution of the concentration functions, is achieved via a Monte Carlo simulation procedure.
Program summary
Title of program: STATFLUX
Catalogue identifier: ADYS_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYS_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Computer for which the program is designed and others on which it has been tested: Micro-computer with Intel Pentium III, 3.0 GHz
Installation: Laboratory of Linear Accelerator, Department of Experimental Physics, University of São Paulo, Brazil
Operating system: Windows 2000 and Windows XP
Programming language used: Fortran-77 as implemented in Microsoft Fortran 4.0. NOTE: Microsoft Fortran includes non-standard features which are used in this program. Standard Fortran compilers such as g77, f77, ifort and NAG95 are not able to compile the code, and therefore it has not been possible for the CPC Program Library to test the program.
Memory required to execute with typical data: 8 Mbytes of RAM and 100 MB of hard disk
No. of bits in a word: 16
No. of lines in distributed program, including test data, etc.: 6912
No. of bytes in distributed program, including test data, etc.: 229 541
Distribution format: tar.gz
Nature of the physical problem: The investigation of transport mechanisms for radioactive substances through environmental pathways is very important for the radiological protection of populations. One such pathway, associated with the food chain, is the grass-animal-man sequence. The distribution of trace elements in humans and laboratory animals has been intensively studied over the past 60 years [R.C. Pendlenton, C.W. Mays, R.D. Lloyd, A.L. Brooks, Differential accumulation of iodine-131 from local fallout in people and milk, Health Phys. 9 (1963) 1253-1262]. In addition, investigations of the incidence of cancer in humans, and of a possible causal relationship to radioactive fallout, have been undertaken [E.S. Weiss, M.L. Rallison, W.T. London, W.T. Carlyle Thompson, Thyroid nodularity in southwestern Utah school children exposed to fallout radiation, Amer. J. Public Health 61 (1971) 241-249; M.L. Rallison, B.M. Dobyns, F.R. Keating, J.E. Rall, F.H. Tyler, Thyroid diseases in children, Amer. J. Med. 56 (1974) 457-463; J.L. Lyon, M.R. Klauber, J.W. Gardner, K.S. Udall, Childhood leukemia associated with fallout from nuclear testing, N. Engl. J. Med. 300 (1979) 397-402]. Of the pathways of entry of radionuclides into the human (or animal) body, ingestion is the most important because it is closely related to life-long dietary habits. Those radionuclides which are able to enter living cells by metabolic or other processes give rise to localized doses which can be very high. The evaluation of these internally localized doses is of paramount importance for the assessment of radiobiological risks and for radiological protection. The time behavior of trace concentrations in organs is the principal input for the prediction of internal doses after acute or chronic exposure. The general multiple-compartment model (GMCM) is the most powerful and widely accepted method for biokinetic studies; it allows the calculation of the concentration of trace elements in organs as a function of time when the flow parameters of the model are known. However, few biokinetic data exist in the literature, and the determination of flow and transfer parameters by statistical fitting for each system is an open problem.
Restrictions on the complexity of the problem: This version of the code works with the constant-volume approximation, which is valid for many situations where the biological half-life of a trace element is shorter than the volume rise time. Another restriction is related to the central-flux model: the model considered in the code assumes that there is one central compartment (e.g., blood) that connects the flow with all other compartments, and flow between the other compartments is not included.
Typical running time: Depends on the choice of calculations. Using the derivative method the time is very short (a few minutes) for any number of compartments. When the Gauss-Marquardt iterative method is used, the calculation can take approximately 5-6 hours when about 15 compartments are considered. (C) 2006 Elsevier B.V. All rights reserved.
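The constant-volume, central-flux restriction described above can be illustrated with a toy forward simulation (a hedged sketch in Python rather than the program's Fortran-77; rate names and values are illustrative): compartment 0 plays the role of blood and exchanges linearly with each peripheral compartment.

```python
import numpy as np

def simulate(k_out, k_in, c0, t_end, dt=1e-3):
    """Central-flux compartment model: compartment 0 (e.g. blood)
    exchanges with each peripheral compartment i at rates
    k_out[i] (0 -> i) and k_in[i] (i -> 0); no elimination, so total
    amount is conserved. Linear ODEs dc/dt = K c integrated with RK4
    under the constant-volume approximation."""
    n = len(k_out) + 1
    K = np.zeros((n, n))
    for i, (ko, ki) in enumerate(zip(k_out, k_in), start=1):
        K[i, 0] += ko   # gain of compartment i from the centre
        K[0, 0] -= ko   # corresponding loss from the centre
        K[0, i] += ki   # return flow to the centre
        K[i, i] -= ki
    c = np.array(c0, dtype=float)
    for _ in range(int(round(t_end / dt))):
        k1 = K @ c
        k2 = K @ (c + 0.5 * dt * k1)
        k3 = K @ (c + 0.5 * dt * k2)
        k4 = K @ (c + dt * k3)
        c = c + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return c
```

With one peripheral compartment and rates k_out = 0.5, k_in = 0.3, the system relaxes to the equilibrium split 0.375/0.625, while the total amount stays constant.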
Abstract:
In this work, the problem of load transport (on platforms or suspended by cables) is considered. The system in question consists of a monorail system and was modeled as an inverted pendulum, cart and motor, with the equations of motion obtained through the Lagrange equations. The model takes into account the interaction between the motor and the system dynamics for several motor power levels; that is, the case studied is a so-called non-ideal periodic problem. The dynamics of the non-ideal periodic problem were analyzed qualitatively by comparing numerically obtained stability diagrams for several motor torque constants. Furthermore, a quantitative analysis of the problem was carried out through the analysis of the Floquet multipliers. Finally, the non-ideal problem was controlled. The method used for the analysis and control of non-ideal periodic systems is based on the Chebyshev polynomial expansion, the Picard iterative method and the Lyapunov-Floquet transformation (L-F transformation). This method was presented recently in [3-9].
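The quantitative step above rests on the Floquet multipliers, i.e. the eigenvalues of the monodromy matrix of the linearized periodic system. A minimal sketch (plain RK4 integration of the fundamental matrix, not the Chebyshev/Picard machinery of [3-9]; the Mathieu equation stands in for a linearized parametrically excited pendulum):

```python
import numpy as np

def floquet_multipliers(A, period, steps=2000):
    """Floquet multipliers of x' = A(t) x with A(t) periodic:
    eigenvalues of the monodromy matrix, obtained by integrating
    the fundamental matrix over one period with RK4."""
    Phi = np.eye(A(0.0).shape[0])
    dt = period / steps
    for k in range(steps):
        t = k * dt
        k1 = A(t) @ Phi
        k2 = A(t + 0.5 * dt) @ (Phi + 0.5 * dt * k1)
        k3 = A(t + 0.5 * dt) @ (Phi + 0.5 * dt * k2)
        k4 = A(t + dt) @ (Phi + dt * k3)
        Phi = Phi + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return np.linalg.eigvals(Phi)

def mathieu(delta, eps):
    """Mathieu equation x'' + (delta + eps*cos(t)) x = 0
    written as a first-order periodic system."""
    return lambda t: np.array([[0.0, 1.0],
                               [-(delta + eps * np.cos(t)), 0.0]])
```

Since the Mathieu system is traceless, Liouville's formula says the multipliers must multiply to 1, which gives a built-in consistency check; stability then depends on whether any multiplier leaves the unit circle.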
Abstract:
The FitzHugh-Nagumo (FN) mathematical model characterizes the action potential of the membrane. The dynamics of the FitzHugh-Nagumo model have been extensively studied, both with a view to their biological implications and as a test bed for numerical methods that can be applied to more complex models. This paper deals with the dynamics of the FN model. Here, the dynamics are analyzed qualitatively through stability diagrams for the action potential of the membrane. Furthermore, we also analyze the problem quantitatively through the evaluation of Floquet multipliers. Finally, the nonlinear periodic problem is controlled based on the Chebyshev polynomial expansion, the Picard iterative method and the Lyapunov-Floquet transformation (L-F transformation).
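The FN dynamics themselves are easy to reproduce; a minimal sketch of the model equations follows (standard textbook parameter values are assumed, not necessarily those of the paper):

```python
import numpy as np

def fitzhugh_nagumo(I, a=0.7, b=0.8, eps=0.08, t_end=300.0, dt=0.01):
    """Integrate the FitzHugh-Nagumo membrane model
        v' = v - v^3/3 - w + I,   w' = eps * (v + a - b * w)
    with RK4 and return the trace of the membrane potential v."""
    def f(y):
        v, w = y
        return np.array([v - v**3 / 3.0 - w + I,
                         eps * (v + a - b * w)])
    y = np.array([-1.0, 1.0])
    out = []
    for _ in range(int(t_end / dt)):
        k1 = f(y)
        k2 = f(y + 0.5 * dt * k1)
        k3 = f(y + 0.5 * dt * k2)
        k4 = f(y + dt * k3)
        y = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(y[0])
    return np.array(out)
```

For an applied current I = 0.5 the model sits in its oscillatory regime and produces a sustained train of action potentials, the periodic orbit whose Floquet multipliers the paper evaluates.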
Abstract:
In this thesis, the τ-estimation method for estimating the truncation error is extended from low-order to spectral methods. While most works in the literature rely on fully time-converged solutions on grids with different spacing to perform the estimation, only one grid with different polynomial orders is used in this work. Furthermore, a non-time-converged solution is used, resulting in the quasi-a priori τ-estimation method. The quasi-a priori approach estimates the error while the residual of the time-iterative method is not negligible. It is shown in this work that some of the fundamental assumptions about the behavior of the error, well established for low-order methods, are no longer valid in high-order schemes, making a complete revision of the error behavior necessary before redefining the algorithm. To facilitate this task, the Chebyshev Collocation Method is considered as a first step, limiting the application to simple geometries. The extension to the Discontinuous Galerkin Spectral Element Method introduces additional difficulties for the accurate definition and estimation of the error, due to the weak formulation, the multidomain discretization and the discontinuous formulation. First, the analysis focuses on scalar conservation laws to examine the accuracy of the estimation of the truncation error. Then, the validity of the analysis is shown for the incompressible and compressible Euler and Navier-Stokes equations. The developed quasi-a priori τ-estimation method makes it possible to decouple the interfacial and interior contributions of the truncation error in the Discontinuous Galerkin Spectral Element Method, and provides information about the anisotropy of the solution as well as its rate of convergence in polynomial order. It is demonstrated here that this quasi-a priori approach yields a spectrally accurate estimate of the truncation error.
Abstract:
This paper presents a new selective, non-directional protection method to detect ground faults in isolated-neutral power systems. The proposed method is based on the comparison of the rms values of the residual currents of all the lines connected to a bus, and it is able to determine the line with the ground defect. Additionally, this method can be used for the protection of secondary substations. This protection method avoids the unwanted trips produced by wrong settings or wiring errors, which sometimes occur in existing directional ground fault protections. The new method has been validated through computer simulations and experimental laboratory tests.
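The selection rule described above can be sketched as follows (a hedged illustration: the paper's relay criterion may weight or threshold the comparison differently, and all names here are illustrative):

```python
import math

def rms(samples):
    """Root-mean-square value of a sampled current waveform."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def faulted_line(residual_currents):
    """Sketch of the selective criterion: compare the rms residual
    current of every line connected to the bus and flag the one that
    stands out (largest rms). `residual_currents` maps each line
    name to its sampled residual-current waveform."""
    return max(residual_currents, key=lambda name: rms(residual_currents[name]))
```

Because the comparison is relative between lines on the same bus, no directional element (and hence no voltage polarization) is needed, which is the property the abstract highlights.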