969 results for Pelczynski's decomposition method
Abstract:
Owing to the wide interest in using bio-economic models to gain insight into the scientific management of renewable resources such as fisheries and forestry, the variational iteration method (VIM) is employed to approximate the solution of a ratio-dependent predator-prey system with constant-effort prey harvesting. The results are compared with those obtained by the Adomian decomposition method and show that VIM is effective and convenient for solving nonlinear differential equations.
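With the Lagrange multiplier chosen as -1, the VIM correction functional reduces to a Picard-type successive approximation, u_{k+1}(t) = u_0 + ∫₀ᵗ f(u_k(s)) ds. The sketch below applies that iteration numerically to a harvested ratio-dependent system; the specific parameter values, initial conditions and grid are illustrative, not taken from the paper.

```python
def rhs(x, y, a=1.0, b=1.0, d=0.5, E=0.2):
    """Ratio-dependent predator-prey RHS with constant-effort prey harvesting.
    Parameter values are illustrative."""
    ratio = x / (x + y) if (x + y) > 0 else 0.0
    dx = x * (1.0 - x) - a * y * ratio - E * x   # logistic prey - predation - harvest
    dy = y * (-d + b * ratio)                    # ratio-dependent predator growth
    return dx, dy

def picard_vim(x0, y0, t_end=1.0, n=100, sweeps=8):
    """Successive approximations u_{k+1}(t) = u_0 + int_0^t f(u_k) ds,
    i.e. VIM with multiplier -1, using trapezoidal quadrature on a grid."""
    h = t_end / n
    xs, ys = [x0] * (n + 1), [y0] * (n + 1)
    for _ in range(sweeps):
        fx = [rhs(xs[i], ys[i])[0] for i in range(n + 1)]
        fy = [rhs(xs[i], ys[i])[1] for i in range(n + 1)]
        nxs, nys = [x0], [y0]
        for i in range(n):
            nxs.append(nxs[-1] + 0.5 * h * (fx[i] + fx[i + 1]))
            nys.append(nys[-1] + 0.5 * h * (fy[i] + fy[i + 1]))
        xs, ys = nxs, nys
    return xs, ys
```

Each sweep reuses the entire previous approximation over the whole interval, rather than marching point by point as a standard ODE stepper would.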
Abstract:
We present a detailed analysis of the application of a multi-scale Hierarchical Reconstruction method for solving a family of ill-posed linear inverse problems. When the observations on the unknown quantity of interest and the observation operators are known, these inverse problems are concerned with the recovery of the unknown from its observations. Although the observation operators we consider are linear, they are inevitably ill-posed in various ways. We recall in this context the classical Tikhonov regularization method with a stabilizing function that targets the specific ill-posedness of the observation operators and preserves desired features of the unknown. Having studied the mechanism of Tikhonov regularization, we propose a multi-scale generalization of it, which we call the Hierarchical Reconstruction (HR) method. The HR method can be traced back to the Hierarchical Decomposition method in image processing. It successively extracts information from the previous hierarchical residual into the current hierarchical term at a finer hierarchical scale. The hierarchical sum of all these terms provides a reasonable approximate solution to the unknown when the observation matrix satisfies certain conditions with specific stabilizing functions. Compared with Tikhonov regularization on the same inverse problems, the HR method is shown to decrease the total number of iterations, reduce the approximation error, and offer control of the approximation distance between the hierarchical sum and the unknown, thanks to its ladder of finitely many hierarchical scales. We report numerical experiments supporting these advantages of the HR method over the Tikhonov regularization method.
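The mechanism can be sketched on a tiny ill-conditioned system: a single Tikhonov solve at a fixed λ, versus an HR-style ladder that re-solves on the residual with a geometrically refined λ and sums the hierarchical terms. The 2×2 data and the refinement factor below are illustrative, not taken from the paper.

```python
def solve2(M, v):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(v[0] * M[1][1] - v[1] * M[0][1]) / det,
            (M[0][0] * v[1] - M[1][0] * v[0]) / det]

def tikhonov(A, b, lam):
    """Minimise ||Ax - b||^2 + lam * ||x||^2 via the normal equations."""
    AtA = [[sum(A[k][i] * A[k][j] for k in range(2)) + (lam if i == j else 0.0)
            for j in range(2)] for i in range(2)]
    Atb = [sum(A[k][i] * b[k] for k in range(2)) for i in range(2)]
    return solve2(AtA, Atb)

def hierarchical(A, b, lam0=1.0, levels=6):
    """HR ladder: each level extracts a hierarchical term from the current
    residual at a finer scale (smaller lam); the hierarchical sum is returned."""
    x_sum, r = [0.0, 0.0], list(b)
    for k in range(levels):
        xk = tikhonov(A, r, lam0 * 4.0 ** (-k))
        x_sum = [x_sum[i] + xk[i] for i in range(2)]
        r = [r[i] - sum(A[i][j] * xk[j] for j in range(2)) for i in range(2)]
    return x_sum
```

On an ill-conditioned A with b generated from a known solution, the hierarchical sum leaves a far smaller data residual than a single Tikhonov solve at λ0, which is the error-reduction behaviour the abstract describes.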
Abstract:
Process systems design, operation and synthesis problems under uncertainty can readily be formulated as two-stage stochastic mixed-integer linear and nonlinear (nonconvex) programming (MILP and MINLP) problems. With a scenario-based formulation, these problems lead to large-scale MILPs/MINLPs that are well structured. The first part of the thesis proposes a new finitely convergent cross decomposition method (CD), in which Benders decomposition (BD) and Dantzig-Wolfe decomposition (DWD) are combined in a unified framework to improve the solution of scenario-based two-stage stochastic MILPs. The method alternates between DWD iterations and BD iterations, where the DWD restricted master problems and BD primal problems yield a sequence of upper bounds, and the BD relaxed master problems yield a sequence of lower bounds. A variant of CD that adds multiple columns per iteration of the DWD restricted master problem and multiple cuts per iteration of the BD relaxed master problem, called multicolumn-multicut CD, is then developed to improve solution time. Finally, an extended cross decomposition method (ECD) for solving two-stage stochastic programs with risk constraints is proposed. In this approach, CD is used at the first level and DWD at a second level to solve the original problem to optimality. ECD has a computational advantage over a bilevel decomposition strategy or solving the monolithic problem with an MILP solver. The second part of the thesis develops a joint decomposition approach combining Lagrangian decomposition (LD) and generalized Benders decomposition (GBD) to efficiently solve stochastic mixed-integer nonlinear nonconvex programming problems to global optimality, without the need for explicit branch-and-bound search. In this approach, LD subproblems and GBD subproblems are solved systematically in a single framework, and the relaxed master problem, obtained from a reformulation of the original problem, is solved only when necessary. A convexification of the relaxed master problem and a domain reduction procedure are integrated into the decomposition framework to improve solution efficiency. Case studies taken from renewable-resource and fossil-fuel-based applications in process systems engineering show that these novel decomposition approaches have significant benefits over classical decomposition methods and state-of-the-art MILP/MINLP global optimization solvers.
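The Benders bounding loop described above can be illustrated on a toy two-stage problem: choose a capacity x at unit cost c, then pay a per-unit penalty q on unmet demand in each scenario. Subproblems return a value and a subgradient, which become optimality cuts in the relaxed master; here the master LP is replaced by a grid search purely to keep the sketch dependency-free, and all data are illustrative.

```python
def expected_recourse(x, demands, probs, q):
    """BD subproblem: expected second-stage cost and a subgradient at x."""
    val = sum(p * q * max(0.0, d - x) for d, p in zip(demands, probs))
    grad = -q * sum(p for d, p in zip(demands, probs) if d > x)
    return val, grad

def benders(c=1.0, q=3.0, demands=(2.0, 5.0, 8.0), probs=(0.3, 0.4, 0.3),
            x_max=10.0, tol=1e-6, max_iter=50):
    """Benders loop: subproblem solves tighten the upper bound, the relaxed
    master (a cutting-plane model, here minimised by grid search) the lower."""
    cuts, x = [], 0.0
    ub = float("inf")
    for _ in range(max_iter):
        val, grad = expected_recourse(x, demands, probs, q)
        ub = min(ub, c * x + val)
        cuts.append((val - grad * x, grad))   # cut: theta >= a + g * x
        grid = [x_max * i / 1000.0 for i in range(1001)]
        x, lb = min(((xx, c * xx + max(a + g * xx for a, g in cuts))
                     for xx in grid), key=lambda t: t[1])
        if ub - lb < tol:
            break
    return x, ub
```

The gap between the bound sequences closes finitely because each cut is exact at the point that generated it; the CD and ECD schemes in the thesis interleave such BD iterations with Dantzig-Wolfe iterations.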
Abstract:
Compounds of the Y3-x Ba3+x Cu6O14+δ system, of which YBa2Cu3O7-δ (x = 1) is a member, have been prepared. A relatively low-temperature nitrate decomposition method gives almost single-phase compounds with a tetragonal structure. The phases are metastable and show superconducting transitions (zero resistance) around 50 K.
Abstract:
LiNi1/3Mn1/3Co1/3O2, a high-voltage, high-capacity cathode material for Li-ion batteries, has been synthesized by three different rapid synthetic methods, viz. nitrate-melt decomposition, combustion and sol-gel. The first two methods are ultra-rapid: a period as short as 15 min is sufficient to prepare nano-crystalline LiNi1/3Mn1/3Co1/3O2. The processing parameters for obtaining the best-performing materials are optimized for each process, and the electrochemical performance of the products is evaluated in Li-ion cells. The combustion-derived sample exhibits a large extent of cation mixing (10%), while the other two methods yield LiNi1/3Mn1/3Co1/3O2 with cation mixing below 5%. LiNi1/3Mn1/3Co1/3O2 prepared by the nitrate-melt decomposition method exhibits superior performance as a Li-ion battery cathode material.
Abstract:
The study measures the development of productivity in piglet production on ProAgria pig-farm accounting farms over the years 2003-2008. Productivity is measured with a Fisher productivity index, which is decomposed into technical, allocative and scale efficiency as well as technological change and a price effect. Measured with a productivity index aggregated over the whole data set, productivity grew by a total of 14.3% over five years, an annual growth of 2.7%. The producers' average productivity index gives almost the same result: by that measure, productivity grows by a total of 14.7%, or 2.8% per year. Improved scale efficiency is found to be the most significant source of productivity growth: scale efficiency improves by 1.6% per year in aggregate and by 2.1% per year on average across farms. Improved technical efficiency is the second factor promoting productivity growth over the study period; by both measures the increase averages 1.4% per year. Allocative efficiency declines slightly, by 0.1% per year in aggregate and 0.4% per year on average. Technological change over the study period is slightly negative, on average -0.1% per year, although annual fluctuations are strong. Price changes have had little effect on the level of productivity, since the annual changes in the price effect remain below half a percent in every year and the average annual change is -0.1%. A key driver of productivity growth appears to have been growth in farm size, which has improved structural efficiency. The fact that technological change remained negative, however, means that the best observed productivity level did not rise at all.
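The Fisher quantity index underlying such a decomposition is the geometric mean of the Laspeyres and Paasche indices, and a Fisher productivity index is the ratio of the output quantity index to the input quantity index. A minimal sketch with made-up prices and quantities:

```python
import math

def fisher_quantity_index(p0, p1, q0, q1):
    """Fisher ideal quantity index between period 0 and period 1."""
    laspeyres = (sum(p * q for p, q in zip(p0, q1))
                 / sum(p * q for p, q in zip(p0, q0)))
    paasche = (sum(p * q for p, q in zip(p1, q1))
               / sum(p * q for p, q in zip(p1, q0)))
    return math.sqrt(laspeyres * paasche)

def fisher_productivity_index(out_p0, out_p1, out_q0, out_q1,
                              in_p0, in_p1, in_q0, in_q1):
    """Productivity change = output quantity index / input quantity index."""
    return (fisher_quantity_index(out_p0, out_p1, out_q0, out_q1)
            / fisher_quantity_index(in_p0, in_p1, in_q0, in_q1))
```

With a single output whose quantity rises 10% and inputs unchanged, the index is 1.10, i.e. 10% productivity growth; the study's further decomposition into efficiency and technology components requires an estimated frontier and is not reproduced here.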
Abstract:
The compounds YBa2−xLaxCu3Oy, with compositions (0
Abstract:
This work aims to monitor the pesticide Mancozeb in soil under different tomato cropping systems using the dithiocarbamate (DTC) decomposition methodology with generation of carbon disulfide (CS2). This method is widely used to determine DTC residues in food and was adapted here to work with artificially contaminated and real soil samples. The method was evaluated using samples spiked from a control soil sample from the Amazon region; contamination was carried out with a field-strength solution (2 g.L-1 in water) of the pesticide Manzate 800 (Mancozeb). Once the optimal operating conditions of the DTC decomposition method had been determined, the Mancozeb content was analyzed in real samples from a tomato-growing area in the municipality of São José de Ubá (RJ), under conventional, minimum and no-tillage systems. Mancozeb was detected in the real soil samples, collected at depths of 0-5 cm, 5-10 cm, 10-20 cm and 20-40 cm. The results showed that, in the surface layer, soils from the conventional and minimum tillage systems had Mancozeb contents (7.44 mg.kg-1 and 5.70 mg.kg-1) higher than those obtained under the no-tillage system (1.14 mg.kg-1 and 1.95 mg.kg-1).
Abstract:
The effects of multiple scattering on the acoustic manipulation of spherical particles using helicoidal Bessel beams are discussed. A closed-form analytical solution is developed to calculate the acoustic radiation force exerted by a Bessel beam on an acoustically reflective sphere in the presence of an adjacent spherical particle, immersed in an unbounded fluid medium. The solution is based on the standard Fourier decomposition method, and the effect of multiple scattering is taken into account using the addition theorem for spherical coordinates. Of particular interest here is the effect of multiple scattering on the emergence of negative axial forces. To investigate this, the radiation force applied on the target particle by a helicoidal Bessel beam of different azimuthal indices (m = 1 to 4), at different conical angles, is computed. Results are presented for soft and rigid spheres of various sizes, separated by a finite distance. They show that the emergence of negative-force regions is very sensitive to the level of cross-scattering between the particles. It is also shown that in multiple-scattering media the negative axial force may occur at much smaller conical angles than previously reported for single particles, and that acoustic manipulation of soft spheres in such media may also become possible.
Abstract:
Novel one-dimensional europium benzene-1,3,5-tricarboxylate compressed nanorods have been synthesized on a large scale through direct precipitation in solution phase under moderate conditions, without the assistance of any surfactant, catalyst, or template. The obtained nanorods have widths of about 50-100 nm, thicknesses of 10-20 nm, and lengths ranging from a few hundred nanometers to several micrometers. X-ray powder diffraction, elemental analysis, Fourier transform infrared studies, and thermogravimetric and differential thermal analysis show that the nanorods have the structural formula Eu(1,3,5-BTC)·6H2O. Upon UV excitation, these nanorods exhibit a highly efficient luminescence, which comes from the Eu3+ ions. Moreover, Eu2O3 nanorods could also be obtained via a thermal decomposition method using the corresponding complex as a precursor. This synthetic route is promising for the preparation of other one-dimensional crystalline nanomaterials because of its simplicity and the low cost of the starting reagents.
Abstract:
Overexpression of cyclin A in human tumors has been linked to cancer by various experimental lines of evidence. However, the physical and spectral characterization of the human cyclin A gene and its interactions with anticancer drugs have not been reported. Our gene sequence analysis, singular value decomposition method and melting studies in the presence of the antitumor agents daunomycin, doxorubicin and Hoechst 33258 showed that the cyclin A gene has both AT-rich and GC-rich domains. For a ligand with unknown DNA-binding specificity, this gene sequence can therefore be used to differentiate its DNA-binding preference.
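The AT-rich/GC-rich domain structure referred to above can be illustrated with a simple sliding-window base-composition scan. The abstract's full analysis also uses SVD and melting curves, which are not reproduced here; the sequence, window size and cutoffs below are made up.

```python
def at_fraction_windows(seq, window=10):
    """Fraction of A/T bases in each sliding window of a DNA sequence."""
    seq = seq.upper()
    return [sum(base in "AT" for base in seq[i:i + window]) / window
            for i in range(len(seq) - window + 1)]

def classify_domains(fractions, at_cutoff=0.7, gc_cutoff=0.3):
    """Label each window AT-rich, GC-rich or mixed (cutoffs illustrative)."""
    return ["AT-rich" if f >= at_cutoff else
            "GC-rich" if f <= gc_cutoff else "mixed"
            for f in fractions]
```

A ligand that prefers AT tracts (like Hoechst 33258) would be expected to stabilise the melting of the AT-rich windows preferentially, which is what the comparative melting studies exploit.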
Abstract:
In this paper, based on the E&P situation in the oilfield and the theory of geophysical exploration, a series of studies is conducted on fracture reservoir prediction technology in general, with particular focus on several difficult points. A technological suite integrating amplitude-preserved data processing, interpretation and comprehensive application research was developed, and this new approach can be applied to the exploration and development of other similar oilfields. The contents and results of this paper are as follows. 1. An overview of the status and development of fracture reservoir estimation techniques is given, and the geophysical prediction methods are compared and analyzed; this is helpful for similar reservoir studies. 2. The geological characteristics and well-logging responses of buried-hill fracture reservoirs are analyzed and summarized; these conclusions are used to steer the geophysical research and yield satisfactory results. 3. The anisotropic seismic response of fracture reservoirs is forward modeled, and the azimuthal amplitude variation is described quantitatively; the amplitude ellipse at each incidence angle is used to identify the fracture orientation. 4. Numerical simulation of structural stress based on the finite difference method is carried out, and the direction and intensity of fracturing are quantitatively described and analyzed. 5. Conventional attribute extraction from amplitude-preserved seismic data, together with attributes at different azimuthal angles and different offsets, is used to determine the relationship between the results and the fracture distribution. 6. With a spectral decomposition method based on the wavelet transform, the reservoir distribution in space is revealed; this is a powerful tool to display its anisotropy. 7. Seismic wave impedance, elastic impedance, spectral decomposition, attribute extraction and fracture analysis results are integrated to identify and evaluate the fracture reservoir, and an optimum workflow is constructed. The workflow has been used in practical oil and gas production with good results, indicating the broad prospects of this suite of techniques.
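As a simplified stand-in for the wavelet-based spectral decomposition in point 6 (a single-frequency DFT rather than a wavelet transform, purely for illustration), the amplitude of one frequency component of a sampled trace can be extracted as follows:

```python
import math

def spectral_amplitude(trace, freq_hz, dt):
    """Amplitude of one frequency component of a sampled trace (naive DFT)."""
    n = len(trace)
    re = sum(s * math.cos(2.0 * math.pi * freq_hz * i * dt)
             for i, s in enumerate(trace))
    im = sum(s * math.sin(2.0 * math.pi * freq_hz * i * dt)
             for i, s in enumerate(trace))
    return 2.0 * math.hypot(re, im) / n
```

Mapping such amplitudes trace by trace at a few target frequencies is the basic idea behind using spectral decomposition to image reservoir distribution; wavelet-based decomposition additionally localises the estimate in time.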
Abstract:
Offshore seismic exploration involves high investment and high risk, and faces many problems, such as multiples; technology for high-resolution, high-S/N-ratio marine seismic data processing is therefore an important project. In this paper, based on an analysis of marine seismic exploration, a survey of the literature, and an integration of current mainstream and emerging technologies, a multi-scale decomposition technology for both prestack and poststack seismic data, based on the wavelet and Hilbert-Huang transforms and on the theory of phase deconvolution, is proposed, and the related algorithms are studied. The pyramid algorithm of decomposition and reconstruction is given by the Mallat algorithm of the discrete wavelet transform; in this paper it is introduced into seismic data processing, and its validity is shown by tests with field data. The main idea of the Hilbert-Huang transform is empirical mode decomposition, with which any complicated data set can be decomposed into a finite and often small number of intrinsic mode functions that admit a well-behaved Hilbert transform. After the decomposition, an analytic signal is constructed by the Hilbert transform, from which the instantaneous frequency and amplitude, and then the Hilbert spectrum, can be obtained. This decomposition method is adaptive and highly efficient; since the decomposition is based on the local characteristic time scales of the data, it is applicable to nonlinear and non-stationary processes. The phenomena of fitting overshoot and undershoot and of end swings in the Hilbert-Huang transform are analyzed, and effective methods for eliminating them are studied in this paper. The multi-scale decomposition of prestack and poststack seismic data achieves amplitude-preserved processing, greatly enhances seismic resolution, and overcomes the problem that conventional methods cannot restore the amplitudes of different frequency components uniformly. The phase deconvolution method, which overcomes the minimum-phase limitation of traditional deconvolution, better matches the fact that in practice the seismic wavelet is mixed-phase, and gives a more reliable result. In the applied research, high-resolution, relative-amplitude-preserved processing results were obtained by careful analysis and application of the above methods to seismic data from four different target areas of the China Sea. Finally, a set of processing flows and methods was established, which has been applied in actual production and has yielded good progress and considerable economic benefit.
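The Mallat pyramid mentioned above can be sketched with the simplest wavelet, the (unnormalised) Haar pair of running averages and differences. Real seismic processing would use longer filters, so this is only a structural illustration of the decomposition-reconstruction pyramid.

```python
def haar_split(signal):
    """One pyramid level: approximation (pairwise means) and detail
    (pairwise half-differences). Length is assumed even."""
    approx = [(signal[2 * i] + signal[2 * i + 1]) / 2.0
              for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / 2.0
              for i in range(len(signal) // 2)]
    return approx, detail

def haar_merge(approx, detail):
    """Inverse of haar_split: perfect reconstruction of the finer level."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def pyramid(signal, levels):
    """Mallat-style decomposition: coarse approximation plus one detail
    sequence per scale, finest first."""
    details, approx = [], list(signal)
    for _ in range(levels):
        approx, d = haar_split(approx)
        details.append(d)
    return approx, details
```

Because each level is exactly invertible, the per-scale components can be processed (filtered, scaled) and recombined without losing amplitude information, which is the property the amplitude-preserved processing relies on.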
Abstract:
Numerical modeling of groundwater is very important for understanding groundwater flow and solving hydrogeological problems. Today, groundwater studies require massive numbers of model cells and high calculation accuracy, which are beyond a single-CPU computer's capabilities. With the development of high-performance parallel computing technologies, applying parallel computing to numerical modeling of groundwater flow becomes necessary and important, and improves the ability to resolve various hydrogeological and environmental problems. In this study, parallel computing methods for the two main types of modern parallel computer architecture, shared-memory and distributed-memory systems, are discussed. OpenMP and MPI (via PETSc) are both used to parallelize the most widely used groundwater simulator, MODFLOW, and two parallel solvers, P-PCG and P-MODFLOW, were developed. The parallelized MODFLOW was used to simulate regional groundwater flow in Beishan, Gansu Province, a potential high-level radioactive waste geological disposal area in China. 1. The OpenMP programming paradigm was used to parallelize the PCG (preconditioned conjugate-gradient) solver, one of the main solvers for MODFLOW. The parallel PCG solver, P-PCG, was verified on an 8-processor computer. Both the impact of compilers and different model domain sizes were considered in the numerical experiments; the largest test model has 1000 columns, 1000 rows and 1000 layers. Based on the timing results, execution times using the P-PCG solver are typically about 1.40 to 5.31 times faster than those of the serial solver. In addition, the simulation results are exactly the same as those of the original PCG solver, because the majority of the serial code was not changed. It is worth noting that this parallelization approach reduces software maintenance cost, because only a single source PCG solver code needs to be maintained in the MODFLOW source tree. 2. P-MODFLOW, a domain-decomposition-based model implemented in a parallel computing environment, was developed to allow efficient simulation of regional-scale groundwater flow. The basic approach partitions a large model domain into any number of sub-domains, and parallel processors solve the model equations within each sub-domain. Using domain decomposition to port MODFLOW to distributed-memory parallel computing systems extends its application to the most widely used cluster systems, so that large-scale simulations can take full advantage of hundreds or even thousands of parallel processors. P-MODFLOW shows good parallel performance, with a maximum speedup of 18.32 (14 processors); super-linear speedups were achieved in the parallel tests, indicating the efficiency and scalability of the code. Parallel program design, load balancing and full use of PETSc were considered in order to achieve a highly efficient parallel program. 3. The characterization of a regional groundwater flow system is very important for high-level radioactive waste geological disposal. The Beishan area, located in northwestern Gansu Province, China, has been selected as a potential site for a disposal repository. The area covers about 80000 km2 and has complicated hydrogeological conditions, which greatly increase the computational effort of regional groundwater flow models. To reduce computing time, the parallel computing scheme was applied to regional groundwater flow modeling; models with over 10 million cells were used to simulate how faults and different recharge conditions affect the regional groundwater flow pattern. The results of this study provide regional groundwater flow information for the site characterization of the potential high-level radioactive waste disposal area.
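The serial kernel being parallelized here is conjugate gradients with a preconditioner. A minimal Jacobi-preconditioned version on a small dense matrix is sketched below; MODFLOW's actual PCG solver works on the sparse seven-point groundwater-flow stencil, and the OpenMP/MPI versions distribute exactly the dot products and matrix-vector products visible in this loop.

```python
def pcg(A, b, tol=1e-10, max_iter=200):
    """Jacobi-preconditioned conjugate gradients for a dense SPD matrix A
    (list of lists). Returns the approximate solution of A x = b."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                                   # r = b - A x with x = 0
    z = [r[i] / A[i][i] for i in range(n)]        # Jacobi preconditioner
    p = list(z)
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x
```

In the domain-decomposition setting of P-MODFLOW, each sub-domain owns a block of the vectors, the matrix-vector product requires only halo exchange with neighbouring sub-domains, and the global dot products become reductions, which is what PETSc provides.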
Abstract:
In modern signal processing, the objects of analysis are usually nonlinear, non-Gaussian and non-stationary signals, especially non-stationary signals. Conventional methods for analyzing and processing non-stationary signals include the short-time Fourier transform, the Wigner-Ville distribution, the wavelet transform and so on. But these algorithms are all based on the Fourier transform, so they share the shortcomings of Fourier analysis and cannot escape its limitations on localization. The Hilbert-Huang Transform (HHT) is a new non-stationary signal processing technology proposed by N. E. Huang in 1998. It is composed of Empirical Mode Decomposition (EMD) and Hilbert Spectral Analysis (HSA). After EMD processing, any non-stationary signal is decomposed into a series of data sequences with different scales, each called an Intrinsic Mode Function (IMF). The energy distribution of the original non-stationary signal can then be found by summing the Hilbert spectra of all the IMFs. In essence, this algorithm renders non-stationary signals stationary, decomposes fluctuations and trends of different scales step by step, and describes the frequency content with instantaneous frequency and energy instead of the global frequency and energy of Fourier spectral analysis. In this way, the shortcoming of using many spurious harmonics to describe nonlinear and non-stationary signals in the Fourier transform is avoided. This paper covers the following parts. First, it introduces the history and development of the HHT, and then its characteristics and main issues; it briefly presents the basic principles and algorithms of the HHT and confirms its validity by simulations. Second, it discusses some shortcomings of the HHT. Using FFT interpolation, we solve the problems of IMF instability and instantaneous-frequency fluctuation caused by an insufficient sampling rate. For the boundary effect caused by the limitations of the envelope algorithm of the HHT, we use a wave-characteristic matching method, with good results. Third, the paper studies in depth the application of the HHT to electromagnetic signal processing. Based on the analysis of real data examples, we discuss its application to electromagnetic signal processing and noise suppression. Using the empirical mode decomposition method and its multi-scale filter characteristics, the noise distribution of electromagnetic signals can be analyzed effectively, interference can be suppressed, and the interpretability of the information improved. It was found that selecting electromagnetic signal sessions using the Hilbert time-frequency energy spectrum helps to improve signal quality and enhance data quality.