887 results for Methods engineering.


Relevance:

30.00%

Publisher:

Abstract:

Hulun Lake, China's fifth-largest inland lake, experienced a severe decline in water level between 2000 and 2010, prompting concern that the lake is gradually drying up. A multi-million US dollar engineering project, completed in August 2010, constructed a channel to transfer part of the flow of a nearby river into the lake to maintain its water level. This study aimed to advance understanding of the key processes controlling lake water level variation over the last five decades and to investigate the impact of the river transfer project on the water level. A water balance model was developed to investigate the lake water level variations over the last five decades, using hydrological and climatic data as well as satellite-based measurements and results from land surface modelling. The investigation reveals that the severe reduction of river discharge into the lake (-364 ± 64 mm/yr, ~70% of the five-decade average) was the key factor behind the decline in lake water level between 2000 and 2010. The decline in river discharge was due to reduced total runoff from the lake watershed, itself a result of reduced soil moisture caused by the decrease in precipitation (-49 ± 45 mm/yr) over this period. The water budget calculation suggests that groundwater from the surrounding lake area, together with surface runoff from the ungauged area surrounding the lake, contributed a net inflow of ~210 Mm³/yr (equivalent to ~100 mm/yr) into the lake. The results also show that the water diversion project prevented a further water level decline of over 0.5 m by the end of 2012. Overall, the monthly water balance model gave an excellent prediction of lake water level fluctuations over the last five decades and can be a useful tool for managing lake water resources in the future.
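
As a rough illustration of the kind of monthly water balance model described above, the sketch below advances a lake level from a simple storage budget. The function, the fixed lake area, and all numerical values are assumptions for demonstration, not the study's calibrated model; a real model would also use a level-area-volume (hypsometric) relationship.

```python
# Minimal monthly lake water balance sketch (illustrative only).

def step_water_level(level_m, area_m2, inflow_m3, precip_m, evap_m, net_ground_m3):
    """Advance the lake level by one month from a simple storage balance:
    dV = river inflow + (P - E) * A + net ungauged/groundwater inflow."""
    dV = inflow_m3 + (precip_m - evap_m) * area_m2 + net_ground_m3
    # Convert the storage change to a level change using the current area.
    return level_m + dV / area_m2

level = 545.0        # initial level, m a.s.l. (hypothetical)
area = 2.0e9         # lake surface area, m^2 (~2000 km^2, assumed fixed)
for month in range(12):
    level = step_water_level(level, area,
                             inflow_m3=3.0e7,       # river discharge, m^3/month
                             precip_m=0.02,         # precipitation, m/month
                             evap_m=0.06,           # evaporation, m/month
                             net_ground_m3=1.75e7)  # ~210 Mm^3/yr spread monthly
print(f"Level after one year: {level:.2f} m")
```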

Relevance:

30.00%

Publisher:

Abstract:

Lee, M. H., Model-Based Reasoning: A Principled Approach for Software Engineering, Software - Concepts and Tools, 19(4), pp. 179-189, 2000.

Relevance:

30.00%

Publisher:

Abstract:

IEEE Transactions on Knowledge and Data Engineering, vol. 15, no. 5, pp. 1338-1343, 2003.

Relevance:

30.00%

Publisher:

Abstract:

The abundance of many commercially important fish stocks is declining, and this has led to widespread concern about the performance of the traditional approach to fisheries management. Quantitative models are used to obtain estimates of population abundance, and management advice is based on annual harvest levels (Total Allowable Catch, TAC), under which only a certain amount of catch is allowed from specific fish stocks. However, these models are data intensive and less useful when stocks have limited historical information. This study examined whether empirical stock indicators can be used to manage fisheries. The relationship between indicators and the underlying stock abundance is not direct, and hence can be affected by disturbances that may have both transient and persistent effects. Methods from Statistical Process Control (SPC) theory, such as Cumulative Sum (CUSUM) control charts, are useful in classifying these effects, and can therefore be used to trigger a management response only when a significant impact on the stock biomass occurs. This thesis explores how empirical indicators, together with CUSUM, can be used for the monitoring, assessment and management of fish stocks.

I begin the thesis by exploring various age-based catch indicators to identify those which are potentially useful in tracking the state of fish stocks. The sensitivity and responsiveness of these indicators to changes in Spawning Stock Biomass (SSB) showed that indicators based on age groups that are fully selected by the fishing gear, or Large Fish Indicators (LFIs), are the most useful and robust across the range of scenarios considered. The Decision-Interval (DI-CUSUM) and Self-Starting (SS-CUSUM) forms are the two types of control chart used in this study. In contrast to the DI-CUSUM, the SS-CUSUM can be initiated without specifying a target reference point (the 'control mean') for detecting out-of-control (significant impact) situations. The sensitivity and specificity of the SS-CUSUM showed that its performance is robust when LFIs are used.

Once an out-of-control situation is detected, the next step is to determine how large a shift has occurred in the underlying stock biomass. If an estimate of this shift is available, it can be used to update the TAC by incorporation into Harvest Control Rules (HCRs). Various methods from Engineering Process Control (EPC) theory were tested to determine which can measure the shift size in stock biomass with the highest accuracy. Results showed that methods based on Grubbs' harmonic rule gave reliable shift size estimates. The accuracy of these estimates can be improved by monitoring a combined indicator metric of stock-recruitment and LFI, because this may account for impacts independent of fishing. The procedure of integrating both SPC and EPC is known as Statistical Process Adjustment (SPA). An HCR based on SPA was designed for the DI-CUSUM, and the scheme was successful in bringing out-of-control fish stocks back to their in-control state. The HCR was also tested using the SS-CUSUM in the context of data-poor fish stocks. Results showed that the scheme will be useful for sustaining the initial in-control state of a fish stock until more observations become available for quantitative assessments.
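
A minimal illustration of the CUSUM monitoring idea described above: a standard two-sided tabular (decision-interval) CUSUM applied to a synthetic Large Fish Indicator series. The reference value k, the decision interval h, and the synthetic data are assumptions; the thesis's self-starting variant is not reproduced here.

```python
import numpy as np

def cusum(x, target, sigma, k=0.5, h=5.0):
    """Tabular CUSUM: signal when the cumulative standardised deviation from
    `target` exceeds the decision interval h; k is the allowance."""
    z = (np.asarray(x, float) - target) / sigma
    s_hi = s_lo = 0.0
    alarms = []
    for i, zi in enumerate(z):
        s_hi = max(0.0, s_hi + zi - k)   # tracks upward shifts
        s_lo = max(0.0, s_lo - zi - k)   # tracks downward shifts (biomass decline)
        if s_hi > h or s_lo > h:
            alarms.append(i)
            s_hi = s_lo = 0.0            # restart after signalling
    return alarms

rng = np.random.default_rng(1)
lfi = np.concatenate([rng.normal(0.4, 0.05, 30),    # in-control indicator
                      rng.normal(0.3, 0.05, 20)])   # shifted: significant impact
print("Out-of-control signals at indices:", cusum(lfi, target=0.4, sigma=0.05))
```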

Relevance:

30.00%

Publisher:

Abstract:

Very Long Baseline Interferometry (VLBI) polarisation observations of the relativistic jets from Active Galactic Nuclei (AGN) allow the magnetic field environment around the jet to be probed. In particular, multi-wavelength observations of AGN jets allow the creation of Faraday rotation measure maps, which can be used to gain insight into the line-of-sight component of the jet's magnetic field. Recent polarisation and Faraday rotation measure maps of many AGN show possible evidence for the presence of helical magnetic fields. The detection of such evidence depends strongly on both the resolution of the images and the quality of the error analysis and statistics used in the detection. This thesis focuses on the development of new methods for high-resolution radio astronomy imaging in both of these areas.

An implementation of the Maximum Entropy Method (MEM) suitable for multi-wavelength VLBI polarisation observations is presented, and the advantage in resolution it possesses over the CLEAN algorithm is discussed and demonstrated using Monte Carlo simulations. This new polarisation MEM code has been applied to multi-wavelength imaging of the Active Galactic Nuclei 0716+714, Mrk 501 and 1633+382, in each case providing improved polarisation imaging compared to deconvolution using the standard CLEAN algorithm. The first MEM-based fractional polarisation and Faraday rotation VLBI images are presented, using these sources as examples. Recent detections of gradients in Faraday rotation measure are presented, including an observation of a reversal in the direction of a gradient further along a jet. Simulated observations confirming the observability of such a phenomenon are conducted, and possible explanations for a reversal in the direction of the Faraday rotation measure gradient are discussed. These results were originally published in Mahmud et al. (2013).

Finally, a new error model for the CLEAN algorithm is developed which takes into account correlation between neighbouring pixels. Comparison of error maps calculated using this new model with Monte Carlo maps shows striking similarities when the sources considered are well resolved, indicating that the method correctly reproduces at least some component of the overall uncertainty in the images. The calculation of many useful quantities using this model is demonstrated, and the advantages it offers over traditional single-pixel calculations are illustrated. The limitations of the model as revealed by Monte Carlo simulations are also discussed; unfortunately, the error model does not work well when applied to compact regions of emission.
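
A rotation measure map rests on the linear dependence of the polarisation angle on wavelength squared, chi(lambda^2) = chi_0 + RM * lambda^2. As a hedged single-pixel sketch, the code below fits RM from synthetic three-frequency data; the frequencies and values are assumptions, and the n*pi ambiguity in measured angles is ignored.

```python
import numpy as np

c = 2.998e8                                   # speed of light, m/s
freqs = np.array([4.6e9, 8.1e9, 15.4e9])      # assumed observing bands, Hz
lam2 = (c / freqs) ** 2                       # wavelength squared, m^2

rm_true, chi0 = 150.0, 0.3                    # rad/m^2, rad (synthetic values)
chi = chi0 + rm_true * lam2 + np.random.default_rng(0).normal(0.0, 0.01, 3)

# Linear least squares for [RM, chi_0]; real data would be weighted by the
# per-band angle uncertainties and checked for n*pi wraps.
A = np.vstack([lam2, np.ones_like(lam2)]).T
(rm_fit, chi0_fit), *_ = np.linalg.lstsq(A, chi, rcond=None)
print(f"fitted RM = {rm_fit:.1f} rad/m^2, chi_0 = {chi0_fit:.3f} rad")
```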

Relevance:

30.00%

Publisher:

Abstract:

Quasi-Newton methods are applied to solve interface problems which arise from domain decomposition methods. These interface problems are usually sparse systems of linear or nonlinear equations. We are interested in applying these methods to systems of linear equations where we are unable or unwilling to calculate the Jacobian matrices, as well as to systems of nonlinear equations resulting from nonlinear elliptic problems in the context of domain decomposition. The suitability of these algorithms for parallel implementation on coarse-grained parallel computers is discussed.
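
A minimal sketch of the quasi-Newton idea, taking Broyden's "good" method as a representative example: the Jacobian is seeded once by finite differences and then improved only by rank-one secant updates, so the function is never differentiated analytically. The toy system below stands in for a small interface problem.

```python
import numpy as np

def fd_jacobian(F, x, eps=1e-7):
    """One-off finite-difference Jacobian used to seed the updates."""
    x = np.asarray(x, float)
    F0 = F(x)
    J = np.empty((F0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (F(xp) - F0) / eps
    return J

def broyden(F, x0, tol=1e-10, max_iter=50):
    """Broyden's 'good' method: rank-one secant updates of an approximate
    Jacobian B, chosen so that B s = y holds after each step."""
    x = np.asarray(x0, float)
    B = fd_jacobian(F, x)
    Fx = F(x)
    for _ in range(max_iter):
        s = np.linalg.solve(B, -Fx)          # quasi-Newton step
        x = x + s
        F_new = F(x)
        B += np.outer(F_new - Fx - B @ s, s) / (s @ s)
        Fx = F_new
        if np.linalg.norm(Fx) < tol:
            break
    return x

# Toy stand-in for a small nonlinear interface system.
F = lambda x: np.array([x[0] ** 2 + x[1] - 2.0, x[0] - x[1]])
print(broyden(F, [2.0, 0.0]))   # converges to approximately [1., 1.]
```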

Relevance:

30.00%

Publisher:

Abstract:

Computational results for the microwave heating of a porous material are presented in this paper. Combined finite-difference time-domain and finite-volume methods were used to solve the equations that describe the electromagnetic field and the heat and mass transfer in porous media. The coupling between the two schemes is through a change in dielectric properties, which were assumed to depend on both temperature and moisture content. The model was able to reflect the evolution of the temperature and moisture fields as the moisture in the porous medium evaporates. Moisture movement results from internal pressure gradients produced by internal heating and phase change.
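
The sketch below shows only the structure of such a coupling loop: an electromagnetic stage and a heat-and-mass stage exchanging fields through temperature- and moisture-dependent dielectric properties. Both stages are crude placeholders (no Maxwell or transport solver is included), and every property law and constant is an illustrative assumption.

```python
import numpy as np

def dielectric_loss(T, M):
    """Assumed effective loss factor: rises with moisture content, falls
    slightly with temperature (a placeholder for measured property data)."""
    return 0.1 + 1.2 * M - 0.001 * (T - 20.0)

n, dt = 50, 1.0                  # cells, time step (s)
T = np.full(n, 20.0)             # temperature field, deg C
M = np.full(n, 0.30)             # moisture content, kg water / kg solid
rho, rho_cp, h_fg = 1.0e3, 2.0e6, 2.26e6   # kg/m^3, J/(m^3 K), J/kg

for step in range(600):
    # 1) Electromagnetic stage (placeholder for the FDTD solve): field
    #    intensity adjusts to the current lossy-medium properties.
    E2 = 1.0e7 / (1.0 + dielectric_loss(T, M))
    # 2) Heat/mass stage (placeholder for the finite-volume solve):
    #    volumetric heating q = omega * eps0 * eps'' * |E|^2, plus a crude
    #    evaporation sink once local boiling is reached.
    q = 2 * np.pi * 2.45e9 * 8.85e-12 * dielectric_loss(T, M) * E2
    evap = np.where(T >= 100.0, 1.0e-5, 0.0)   # drying rate, 1/s
    T += dt * (q - h_fg * evap * rho) / rho_cp
    M = np.maximum(M - dt * evap, 0.0)

print(f"mean T = {T.mean():.1f} C, mean M = {M.mean():.3f}")
```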

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a 2-D non-linear electric arc-welding problem is considered. It is assumed that the moving arc generates an unknown quantity of energy, which makes the problem an inverse problem with an unknown source. Robust algorithms to solve such problems efficiently, and in certain circumstances in real time, are of great technological and industrial interest. There are other types of inverse problems which involve the inverse determination of heat conductivity or material properties [CDJ63][TE98], inverse problems in material cutting [ILPP98], and the retrieval of parameters containing discontinuities [IK90]. As in the metal cutting problem, the temperature of a very hot surface is required, and its measurement relies on the use of thermocouples. Here, the solution scheme requires temperature measurements taken in the neighbourhood of the weld line in order to retrieve the unknown heat source. The size of this neighbourhood is not considered in this paper; rather, a domain decomposition concept is presented and an examination of the accuracy of the retrieved source is given. This paper is organised as follows. The inverse problem is formulated and a method for the source retrieval is presented in the second section. The source retrieval method is based on an extension of the 1-D source retrieval method proposed in [ILP].
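
As a hedged illustration of regularised source retrieval (not the paper's scheme), the sketch below recovers a 1-D heat source from noisy temperature data by Tikhonov-regularised least squares on a discrete conduction operator; the geometry, noise level, and regularisation parameter are all assumptions.

```python
import numpy as np

n = 40
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
# Discrete steady conduction operator on interior nodes, -T'' = g,
# with homogeneous Dirichlet ends.
L = (np.diag(np.full(n - 2, 2.0))
     - np.diag(np.ones(n - 3), 1)
     - np.diag(np.ones(n - 3), -1)) / h ** 2
A = np.linalg.inv(L)                 # maps interior source to interior T

g_true = np.exp(-200.0 * (x[1:-1] - 0.4) ** 2)   # localised "arc"-like source
T = A @ g_true + np.random.default_rng(0).normal(0.0, 1e-5, n - 2)

lam = 1e-8                           # regularisation parameter (hand-tuned)
g_rec = np.linalg.solve(A.T @ A + lam * np.eye(n - 2), A.T @ T)
print("max source retrieval error:", np.abs(g_rec - g_true).max())
```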

Relevance:

30.00%

Publisher:

Abstract:

The unstructured grid meshes used in most commercial CFD codes are almost invariably paired with collocated variable solution schemes. These schemes have several shortcomings, mainly due to the interpolation of the pressure gradient, which lead to slow convergence. In this publication we show how it is possible to use a much more stable staggered mesh arrangement in an unstructured code. Several alternative groupings of variables are investigated in a search for the optimum scheme.
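
A one-dimensional sketch of the staggered arrangement: pressures live at cell centres and velocities at faces, so the pressure gradient driving each face velocity uses the two adjacent cell values directly, with no interpolation (the classic cure for checkerboard pressure modes). The values are illustrative.

```python
import numpy as np

n = 8
dx = 1.0 / n
p = np.linspace(1.0, 0.0, n)   # pressures at the n cell centres
u = np.zeros(n + 1)            # velocities at the n+1 cell faces

# Explicit face-velocity update from the two neighbouring cell pressures;
# boundary faces are left untouched for simplicity.
rho, dt = 1.0, 0.01
u[1:-1] += -dt / rho * (p[1:] - p[:-1]) / dx
print(u)
```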

Relevance:

30.00%

Publisher:

Abstract:

Solder joints are often the cause of failure in electronic devices, failing through cyclic-creep-induced ductile fatigue. This paper reviews the modelling methods available to predict the lifetime of SnPb and SnAgCu solder joints under thermo-mechanical cycling conditions such as power cycling, accelerated thermal cycling and isothermal testing; the methods do not apply to other damage mechanisms such as vibration or drop testing. Analytical methods, such as those recommended by the IPC, are covered; these are simple to use but limited in capability. Finite element modelling methods are reviewed, along with the necessary constitutive laws and fatigue laws for solder; these offer the most accurate predictions at the current time. Research on state-of-the-art damage mechanics methods is also presented, although these have not yet undergone enough experimental validation to be recommended at present.
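
As a minimal example of the analytical route, the sketch below evaluates an Engelmaier/Coffin-Manson style fatigue law from the cyclic shear strain range. The coefficient values are textbook-style placeholders for eutectic-solder behaviour, not constants validated for a specific SnPb or SnAgCu joint.

```python
def cycles_to_failure(delta_gamma, eps_f=0.325, c=-0.442):
    """Mean cycles to failure from the shear strain range per cycle:
    N_f = 0.5 * (delta_gamma / (2 * eps_f)) ** (1 / c)."""
    return 0.5 * (delta_gamma / (2.0 * eps_f)) ** (1.0 / c)

# Example: a 1% cyclic shear strain range (illustrative).
print(f"N_f ~ {cycles_to_failure(0.01):.0f} cycles")
```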

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a framework to integrate requirements management and design knowledge reuse. The research approach begins with a literature review of design reuse and requirements management to identify appropriate methods within each domain. A framework is proposed based on the identified requirements and is then demonstrated using a case study example: vacuum pump design. Requirements are presented as a component of the integrated design knowledge framework. The proposed framework enables the application of requirements management as a dynamic process, including the capture, analysis and recording of requirements. It accounts for evolving requirements and for the dynamic interaction between requirements and product structure through the various stages of product development.
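
A minimal data-structure sketch of the kind of linkage such a framework implies, with requirements held as first-class records and traced to product-structure components as both evolve. All field names and status values are assumptions, not the paper's schema.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    rid: str
    text: str
    status: str = "captured"            # e.g. captured -> analysed -> recorded
    linked_components: list = field(default_factory=list)

@dataclass
class Component:
    name: str
    children: list = field(default_factory=list)

pump = Component("vacuum pump", [Component("rotor"), Component("housing")])
r1 = Requirement("R1", "Ultimate pressure below 1e-3 mbar")
r1.linked_components.append("rotor")    # trace the requirement to structure
r1.status = "analysed"
print(r1)
```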

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an approach for detecting local damage in large-scale frame structures by utilizing regularization methods for ill-posed problems. A direct relationship between the change in stiffness caused by local damage and the measured modal data for the damaged structure is developed, based on the perturbation method for structural dynamic systems. Thus, the measured incomplete modal data can be adopted directly in damage identification without requiring model reduction techniques, and common regularization methods can be effectively employed to solve the resulting equations. Damage indicators are chosen to reflect both the location and the severity of local damage in individual components of frame structures, such as brace members and beam-column joints. A Truncated Singular Value Decomposition solution incorporating the Generalized Cross Validation method is introduced to evaluate the damage indicators for cases in which realistic errors exist in the modal data measurements. Results for a 16-story building model structure show that structural damage can be correctly identified at a detailed level using only limited, noisy measured modal data for the damaged structure.
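
As an illustration of the solution step, the sketch below applies a Truncated SVD whose truncation level is chosen by minimising the Generalized Cross Validation function, on a synthetic ill-conditioned sensitivity system; the matrix, noise level, and variable names are assumptions rather than the paper's formulation.

```python
import numpy as np

def tsvd_gcv(S, b):
    """Truncated-SVD solution of S x = b, with the truncation level k chosen
    by minimising the Generalized Cross Validation function G(k)."""
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    beta = U.T @ b
    r_perp = b @ b - beta @ beta              # part of b outside range(S)
    m = len(b)
    best_k, best_g = 1, np.inf
    for k in range(1, len(s) + 1):
        resid2 = np.sum(beta[k:] ** 2) + r_perp   # squared residual of x_k
        g = resid2 / (m - k) ** 2
        if g < best_g:
            best_k, best_g = k, g
    x = Vt[:best_k].T @ (beta[:best_k] / s[:best_k])
    return x, best_k

rng = np.random.default_rng(0)
n = 12
# Synthetic ill-conditioned "sensitivity" matrix with geometric singular values.
U, _ = np.linalg.qr(rng.normal(size=(40, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
S = U @ np.diag(10.0 ** -np.arange(n)) @ V.T

alpha_true = np.zeros(n)
alpha_true[3] = 0.2                              # one damaged member (synthetic)
b = S @ alpha_true + rng.normal(0.0, 1e-5, 40)   # noisy "modal" data

alpha_tsvd, k = tsvd_gcv(S, b)
alpha_naive = np.linalg.lstsq(S, b, rcond=None)[0]
print(f"GCV picks k = {k}")
print(f"TSVD error : {np.linalg.norm(alpha_tsvd - alpha_true):.3f}")
print(f"naive error: {np.linalg.norm(alpha_naive - alpha_true):.3e}")
```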