994 results for SUGGESTED METHODS


Relevance: 30.00%

Abstract:

The dynamics of water content in the upper soil surface during evaporation is a key element in land-atmosphere exchanges. Previous experimental studies have suggested that the soil water content increases at depths of 5 to 15 cm below the soil surface during evaporation, while the layer in the immediate vicinity of the soil surface is drying. In this study, the dynamics of water content profiles exposed to solar radiative forcing was monitored at high temporal resolution using dielectric methods, both in the presence and absence of evaporation. A 4-d comparison was made of the moisture content reported for coarse sand in covered and uncovered buckets by a commercial dielectric-based probe (70-MHz ECH2O-5TE, Decagon Devices, Pullman, WA) and by the standard 1-GHz time domain reflectometry method. Both sensors reported a positive correlation between temperature and water content at the 5- to 10-cm depth, most pronounced in the morning during heating and in the afternoon during cooling. Such a positive correlation might have a physical origin, induced by evaporation at the surface and redistribution due to liquid water fluxes resulting from the temperature-gradient dynamics within the sand profile at those depths. Our experimental data suggest that the combined effect of surface evaporation and temperature-gradient dynamics should be considered when analyzing experimental soil water profiles. Additional effects related to the frequency of operation and to protocols for temperature compensation of the dielectric sensors may also affect the probes' response during large temperature changes.

Relevance: 30.00%

Abstract:

The main purpose of this thesis is to introduce a new lossless compression algorithm for multispectral images. The proposed algorithm is based on reducing the band-ordering problem to the problem of finding a minimum spanning tree in a weighted directed graph, where the set of graph vertices corresponds to the multispectral image bands and the arc weights are computed using a newly developed adaptive linear prediction model. The adaptive prediction model is an extended unification of 2- and 4-neighbour pixel-context linear prediction schemes. The algorithm predicts each image band individually, using the optimal prediction scheme defined by the adaptive prediction model and the optimal predicting band suggested by the minimum spanning tree. Its efficiency has been compared against the best lossless compression algorithms for multispectral images; three recently introduced algorithms were considered. The numerical results produced by these algorithms support the conclusion that the adaptive-prediction-based algorithm is the best for lossless compression of multispectral images. Real multispectral data captured from an airplane were used for the testing.
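To make the reduction concrete, the sketch below builds a complete directed graph over the bands, weights each arc with the residual energy of a simple least-squares linear prediction (a stand-in assumption, not the thesis's adaptive prediction model), and extracts a minimum spanning arborescence, the directed-graph analogue of a minimum spanning tree, with networkx:

```python
# A minimal sketch of MST-based band ordering; the cost function is a
# hypothetical stand-in for the thesis's adaptive prediction model.
import numpy as np
import networkx as nx

def prediction_cost(reference, target):
    """Residual energy of a least-squares linear prediction of one
    band from another."""
    a, b = np.polyfit(reference.ravel(), target.ravel(), 1)
    residual = target.ravel() - (a * reference.ravel() + b)
    return float(np.sum(residual ** 2))

def band_ordering(image):
    """image: array of shape (bands, rows, cols). Returns the arcs of
    a minimum spanning arborescence; arc (i, j) means band j is
    predicted from band i."""
    n_bands = image.shape[0]
    graph = nx.DiGraph()
    for i in range(n_bands):
        for j in range(n_bands):
            if i != j:
                graph.add_edge(i, j,
                               weight=prediction_cost(image[i], image[j]))
    tree = nx.minimum_spanning_arborescence(graph)
    return sorted(tree.edges)

# Toy multispectral cube: 4 correlated bands of 8x8 pixels.
rng = np.random.default_rng(0)
base = rng.normal(size=(8, 8))
cube = np.stack([base * (k + 1) + rng.normal(scale=0.1, size=(8, 8))
                 for k in range(4)])
print(band_ordering(cube))
```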

Relevance: 30.00%

Abstract:

There is increasing interest in seeking new enzyme preparations for the development of new products derived from bioprocesses to obtain alternative bio-based materials. In this context, four non-commercial lipases from Pseudomonas species were prepared, immobilized on different low-cost supports, and examined for potential biotechnological applications. Results: To reduce the costs of eventual scaling-up, the new lipases were obtained directly from crude cell extracts or from growth culture supernatants and immobilized by simple adsorption on Accurel EP100, Accurel MP1000 and Celite® 545. The enzymes evaluated were LipA and LipC from Pseudomonas sp. 42A2, a thermostable mutant of LipC, and LipI.3 from Pseudomonas CR611, which were produced in either homologous or heterologous hosts. The best immobilization results were obtained on Accurel EP100 for LipA and on Accurel MP1000 for LipC and its thermostable variant. LipI.3, which requires a refolding step, was poorly immobilized on all supports tested (best results on Accurel MP1000). To test the behavior of the immobilized lipases, they were assayed in triolein transesterification, where the best results were observed for lipases immobilized on Accurel MP1000. Conclusions: The suggested protocol does not require protein purification and uses crude enzymes immobilized by a fast adsorption technique on low-cost supports, which makes the method suitable for eventual scaling-up aimed at biotechnological applications. A fast, simple and economical method for lipase preparation and immobilization has therefore been established. The low price of the supports tested and the simplicity of the procedure, which skips tedious and expensive purification steps, will contribute to cost reduction in biotechnological lipase-catalyzed processes.

Relevance: 30.00%

Abstract:

Throughout history, indigo was derived from various plants, for example Dyer's Woad (Isatis tinctoria L.) in Europe. Synthetic dyes were developed in the 19th century, and nowadays indigo is mainly synthesized from by-products of fossil fuels. Indigo is a so-called vat dye, which means that it must be reduced to its water-soluble leuco form before dyeing. Most industrial reduction is currently performed chemically with sodium dithionite, which is considered environmentally unfavourable because its degradation products contaminate waste waters. There has therefore been interest in finding new ways to reduce indigo. Possible alternatives to dithionite as the reducing agent are biologically induced reduction and electrochemical reduction. Glucose and other reducing sugars have recently been suggested as environmentally friendly reducing agents for sulphur dyes, and there has also been interest in using glucose to reduce indigo. In spite of the development of several types of processes, very little is known about the mechanism and kinetics of indigo reduction. This study investigates the reduction and electrochemical analysis methods of indigo and gives insight into the reduction mechanism. Anthraquinone, as well as its derivative 1,8-dihydroxyanthraquinone, was discovered to catalyse the glucose-induced reduction of indigo. Anthraquinone introduces a strong catalytic effect, which is explained by invoking a molecular "wedge effect" during co-intercalation of Na+ and anthraquinone into the layered indigo crystal. The study also includes research on the extraction of plant-derived indigo from woad and examines the effect of the extraction method on the yield and purity of indigo. Purity has conventionally been studied spectrophotometrically, and a new hydrodynamic electrode system is introduced in this study: a vibrating probe is used to follow leuco-indigo formation electrochemically with glucose as the reducing agent.

Relevance: 30.00%

Abstract:

Metaheuristic methods have become increasingly popular approaches to solving global optimization problems. From a practical viewpoint, it is often desirable to perform multimodal optimization, which enables the search for more than one optimal solution to the task at hand. Population-based metaheuristic methods offer a natural basis for multimodal optimization, and the topic has received increasing interest, especially in the evolutionary computation community. Several niching approaches have been suggested to allow multimodal optimization using evolutionary algorithms. Most global optimization approaches, including metaheuristics, contain global and local search phases. The requirement to locate several optima places additional demands on algorithm design if the algorithm is to be effective in both respects in the context of multimodal optimization. In this thesis, several multimodal optimization algorithms are studied with regard to how their implementation of the global and local search phases affects their performance on different problems. The study concentrates especially on variations of the Differential Evolution algorithm and their capabilities in multimodal optimization. To separate the global and local search phases, three multimodal optimization algorithms are proposed, two of which hybridize Differential Evolution with a local search method. As the theoretical background behind the operation of metaheuristics is generally not thoroughly understood, the research relies heavily on experimental studies to determine the properties of the different approaches. To obtain reliable experimental information, the experimental environment must be carefully chosen to contain appropriate and adequately varying problems. The available selection of multimodal test problems is, however, rather limited, and no general framework exists. As a part of this thesis, such a framework for generating tunable test functions for the experimental evaluation of multimodal optimization methods is provided and used for testing the algorithms. The results demonstrate that an efficient local phase is essential for creating efficient multimodal optimization algorithms. Adding a suitable global phase has the potential to boost performance significantly, but a weak local phase may invalidate the advantages gained from the global phase.
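For reference, a minimal sketch of the classic DE/rand/1/bin scheme that such variants build on is shown below; the parameters and the multimodal test function are illustrative choices, not taken from the thesis:

```python
# A minimal sketch of DE/rand/1/bin; illustrative parameters only.
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.7, CR=0.9,
                           generations=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: combine three distinct random individuals.
            r1, r2, r3 = rng.choice(
                [k for k in range(pop_size) if k != i], 3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            # Binomial crossover with at least one mutant component.
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            # Greedy selection against the parent.
            f_trial = f(trial)
            if f_trial <= fitness[i]:
                pop[i], fitness[i] = trial, f_trial
    return pop[np.argmin(fitness)], fitness.min()

# Multimodal test function: equal minima at x = +/-1 in each dimension.
f = lambda x: float(np.sum((x ** 2 - 1.0) ** 2))
best_x, best_f = differential_evolution(f, bounds=[(-2, 2)] * 2)
print(best_x, best_f)
```

Niching variants modify the selection step so that trial vectors compete only with nearby individuals, which lets subpopulations settle on different optima instead of collapsing to a single basin.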

Relevance: 30.00%

Abstract:

Nowadays, software testing and quality assurance have great value in the software development process. Software testing is not a single concrete discipline; it is the process of validation and verification that starts from the idea of a future product and finishes at the end of the product's maintenance. The importance of software testing methods and tools that can be applied at different testing phases is highly stressed in industry. The objectives of this thesis were to provide a sufficient literature review of the different testing phases and, for each phase, to define a method that can be used effectively to improve software quality. The software testing phases chosen for study are unit testing, integration testing, functional testing, system testing, acceptance testing and usability testing. The research showed that many software testing methods can be applied at the different phases, and that in most cases the choice of method should depend on the software type and its specification. For each phase, the thesis identifies a problem specific to that phase, then suggests and describes in detail a method that can help eliminate it.
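As a minimal illustration of the unit testing phase discussed above, the following uses Python's built-in unittest module; the function under test is hypothetical, not drawn from the thesis:

```python
# A minimal unit test example; mean() is a hypothetical function.
import unittest

def mean(values):
    """Arithmetic mean of a non-empty sequence of numbers."""
    if not values:
        raise ValueError("mean() requires at least one value")
    return sum(values) / len(values)

class TestMean(unittest.TestCase):
    def test_typical_input(self):
        self.assertEqual(mean([1, 2, 3]), 2)

    def test_empty_input_is_rejected(self):
        with self.assertRaises(ValueError):
            mean([])

if __name__ == "__main__":
    unittest.main()
```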

Relevance: 30.00%

Abstract:

Singular Value Decomposition (SVD), Principal Component Analysis (PCA) and Multiple Linear Regression (MLR) are some of the mathematical preliminaries discussed prior to explaining the PLS and PCR models. Both PLS and PCR are applied to real spectral data, and their differences and similarities are discussed in this thesis. The challenge lies in establishing the optimum number of components to include in either model; this is addressed using the various diagnostic tools suggested in the thesis. Correspondence analysis (CA) and PLS were also applied to ecological data, the idea of CA being to correlate macrophyte species and lakes. The differences between the PLS model for ecological data and the PLS model for spectral data are noted and explained.
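A minimal sketch of the PCR/PLS contrast on synthetic "spectra" is given below, with scikit-learn standing in for whatever software the thesis used; the component count would in practice be chosen with the diagnostic tools mentioned above:

```python
# A minimal PCR vs PLS sketch on synthetic spectral data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 60, 200
X = rng.normal(size=(n_samples, n_wavelengths))
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=n_samples)

n_components = 5  # in practice chosen by cross-validation diagnostics
pcr = make_pipeline(PCA(n_components=n_components), LinearRegression())
pls = PLSRegression(n_components=n_components)

pcr.fit(X, y)
pls.fit(X, y)
print("PCR R^2:", pcr.score(X, y))
print("PLS R^2:", pls.score(X, y))
```

The essential design difference is that PCR components maximize variance in X alone, whereas PLS components maximize covariance with the response, which is why PLS often needs fewer components for the same predictive accuracy.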

Relevance: 30.00%

Abstract:

Stochastic approximation methods for stochastic optimization are considered. The main methods of stochastic approximation are reviewed: the stochastic quasi-gradient (SQG) algorithm, the Kiefer-Wolfowitz algorithm with adaptive rules, and the simultaneous perturbation stochastic approximation (SPSA) algorithm. A model and a solution for the retailer's profit optimization problem are suggested, and an application of the SQG algorithm to optimization problems whose objective functions are given in the form of an ordinary differential equation is considered.
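A minimal SPSA sketch with the standard gain sequences is shown below; the noisy quadratic objective is illustrative, not the retailer's profit model from the thesis:

```python
# A minimal SPSA sketch; the objective is an illustrative noisy quadratic.
import numpy as np

def spsa(f, theta0, iterations=500, a=0.1, c=0.1, A=50,
         alpha=0.602, gamma=0.101, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(iterations):
        a_k = a / (k + 1 + A) ** alpha   # decaying step size
        c_k = c / (k + 1) ** gamma       # decaying perturbation size
        # Rademacher (+/-1) simultaneous perturbation of all coordinates.
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        # Two (possibly noisy) evaluations estimate the whole gradient.
        g_hat = (f(theta + c_k * delta)
                 - f(theta - c_k * delta)) / (2 * c_k * delta)
        theta -= a_k * g_hat
    return theta

# Noisy measurements of ||x||^2; minimum at the origin.
rng = np.random.default_rng(1)
noisy = lambda x: float(np.sum(x ** 2) + rng.normal(scale=0.01))
print(spsa(noisy, theta0=[2.0, -1.5]))
```

The appeal of SPSA is that it needs only two noisy function evaluations per iteration regardless of dimension, unlike Kiefer-Wolfowitz finite differences, which require two per coordinate.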

Relevance: 30.00%

Abstract:

In today's logistics environment, there is a tremendous need for accurate cost information and cost allocation. Companies searching for a proper solution often come across activity-based costing (ABC) or one of its variations, which uses cost drivers to allocate the costs of activities to cost objects. To allocate costs accurately and reliably, the selection of appropriate cost drivers is essential for realizing the benefits of the costing system. The purpose of this study is to validate the transportation cost drivers of a Finnish wholesaler company and ultimately to select the best possible driver alternatives for the company. The use of cost driver combinations as an alternative is also studied. The study was conducted as part of the case company's applied ABC project, using statistical research as the main method, supported by a theoretical, literature-based method. The main research tools featured in the study are simple and multiple regression analyses, which, together with a practicality analysis based on the literature and observations, form the basis for the advanced methods. The results suggest that the most appropriate cost driver alternatives are delivery drops and internal delivery weight. The use of cost driver combinations is not recommended, as it does not provide substantially better results while increasing measurement costs, complexity and the burden of use. The use of internal freight cost drivers is also questionable, as the results indicate a weakening trend in their cost allocation capability towards the end of the period. More research on internal freight cost drivers should therefore be conducted before taking them into use.
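As an illustration of the driver validation step, the sketch below regresses a transportation cost pool on two candidate drivers with statsmodels; the data frame, its column names and its values are hypothetical stand-ins for the case company's records:

```python
# A minimal multiple-regression sketch for cost driver validation;
# all data and column names are hypothetical illustrations.
import pandas as pd
import statsmodels.api as sm

# Hypothetical monthly observations: cost, delivery drops, delivery weight.
data = pd.DataFrame({
    "transport_cost": [8100, 9400, 7800, 10200, 8800, 9900],
    "delivery_drops": [410, 480, 390, 530, 450, 510],
    "delivery_weight": [61, 72, 58, 80, 66, 75],
})

X = sm.add_constant(data[["delivery_drops", "delivery_weight"]])
model = sm.OLS(data["transport_cost"], X).fit()
# R-squared and the driver p-values indicate how well each candidate
# driver explains variation in the transportation cost pool.
print(model.summary())
```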

Relevance: 30.00%

Abstract:

Innovative gas-cooled reactors, such as the pebble bed reactor (PBR) and the gas-cooled fast reactor (GFR), offer higher efficiency and new application areas for nuclear energy. Numerical methods were applied and developed to analyse the specific features of these reactor types with fully three-dimensional calculation models. In the first part of this thesis, the discrete element method (DEM) was used for physically realistic modelling of the packing of fuel pebbles in PBR geometries, and methods were developed for utilising the DEM results in subsequent reactor physics and thermal-hydraulics calculations. In the second part, the flow and heat transfer for a single gas-cooled fuel rod of a GFR were investigated with computational fluid dynamics (CFD) methods.

An in-house DEM implementation was validated and used for packing simulations, in which the effect of several parameters on the resulting average packing density was investigated. The restitution coefficient was found to have the most significant effect. The results can be utilised in further work to obtain a pebble bed with a specific packing density. The packing structures of selected pebble beds were also analysed in detail, and local variations in the packing density were observed; these should be taken into account especially in reactor core thermal-hydraulic analyses.

Two open-source DEM codes were used to produce stochastic pebble bed configurations to add realism and improve the accuracy of criticality calculations performed with the Monte Carlo reactor physics code Serpent. Russian ASTRA criticality experiments were calculated: pebble beds corresponding to the experimental specifications within measurement uncertainties were produced in DEM simulations and successfully exported into the subsequent reactor physics analysis. With the developed approach, two typical issues in Monte Carlo reactor physics calculations of pebble bed geometries were avoided.

A novel method was developed and implemented as a MATLAB code to calculate porosities in the cells of a CFD calculation mesh constructed over a pebble bed obtained from DEM simulations. The code was further developed to distribute power and temperature data accurately between the discrete-based reactor physics and continuum-based thermal-hydraulics models to enable coupled reactor core calculations. The developed method was also found useful for analysing sphere packings in general.

CFD calculations were performed to investigate the pressure losses and heat transfer in three-dimensional air-cooled smooth and rib-roughened rod geometries, housed inside a hexagonal flow channel representing a sub-channel of a single fuel rod of a GFR. The CFD geometry represented the test section of the L-STAR experimental facility at Karlsruhe Institute of Technology, and the calculation results were compared with the corresponding experimental results. Knowledge was gained of the adequacy of various turbulence models and of the modelling requirements and issues related to this specific application. The obtained pressure loss results were in relatively good agreement with the experimental data. Heat transfer in the smooth rod geometry was somewhat underpredicted, which can partly be explained by unaccounted heat losses and uncertainties. In the rib-roughened geometry, heat transfer was severely underpredicted by the realisable k-ε turbulence model used; an additional calculation with a v2-f turbulence model showed a significant improvement in the heat transfer results, most likely due to the better performance of that model in separated-flow problems. Further investigations are suggested before using CFD to draw conclusions about the heat transfer performance of rib-roughened GFR fuel rod geometries. It is suggested that the viewpoints of numerical modelling be included in the planning of experiments to ease the challenging model construction and simulations and to avoid introducing additional sources of uncertainty. To facilitate the use of advanced calculation approaches, multi-physical aspects of experiments should also be considered and documented in reasonable detail.
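As a hedged illustration of the cell porosity calculation (the thesis's MATLAB implementation is not reproduced here), the sketch below estimates the porosity of a single box-shaped mesh cell from DEM sphere positions by Monte Carlo sampling; the cell and pebble data are illustrative:

```python
# A minimal Monte Carlo sketch of cell porosity from DEM sphere data;
# not the thesis's method, and all geometry values are illustrative.
import numpy as np

def cell_porosity(cell_min, cell_max, centers, radius,
                  n_samples=100_000, seed=0):
    """Fraction of a box-shaped cell not occupied by spheres."""
    rng = np.random.default_rng(seed)
    points = rng.uniform(cell_min, cell_max, size=(n_samples, 3))
    # A sample point is solid if it lies inside any sphere.
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    solid = (d2 <= radius ** 2).any(axis=1)
    return 1.0 - solid.mean()

# Two pebbles of radius 0.03 m intersecting a 0.1 m cubic cell.
centers = np.array([[0.03, 0.05, 0.05], [0.08, 0.05, 0.05]])
print(cell_porosity(np.zeros(3), np.full(3, 0.1), centers, radius=0.03))
```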

Relevance: 30.00%

Abstract:

The combined effects of tumbling marination method (vacuum continuous tumbling marination, CT; vacuum intermittent tumbling marination, IT) and effective tumbling time (4, 6, 8 and 10 h) on the quality characteristics of prepared boneless pork chops were investigated. The results showed that regardless of tumbling time, the CT method significantly increased the pH, product yield, cohesiveness, resilience, sensory tenderness and overall flavor (p < 0.05) compared with the IT method, and also significantly decreased the pressing loss, cooking loss, shear force value (SFV), hardness and chewiness (p < 0.05). As the effective tumbling time increased from 4 h to 10 h, the product yield and sensory attributes of the prepared pork chops first increased and then decreased, whereas the pressing loss, cooking loss, SFV, hardness and chewiness first decreased and then increased. Additionally, an interaction between the CT method and effective tumbling time was observed. These results suggest that the CT method with an effective tumbling time of 8 h yields the best quality characteristics of prepared pork chops and should be adopted.

Relevance: 30.00%

Abstract:

We propose finite-sample tests and confidence sets for models with unobserved and generated regressors, as well as various models estimated by instrumental variables methods. The validity of the procedures is unaffected by the presence of identification problems or "weak instruments", so no detection of such problems is required. We study two distinct approaches for various models considered by Pagan (1984). The first is an instrument substitution method which generalizes an approach proposed by Anderson and Rubin (1949) and Fuller (1987) for different (although related) problems, while the second is based on splitting the sample. The instrument substitution method uses the instruments directly, instead of generated regressors, to test hypotheses about the "structural parameters" of interest and to build confidence sets. The second approach relies on "generated regressors", which allows a gain in degrees of freedom, and a sample split technique. For inference about general, possibly nonlinear transformations of model parameters, projection techniques are proposed. A distributional theory is obtained under the assumptions of Gaussian errors and strictly exogenous regressors. We show that the various tests and confidence sets proposed are (locally) "asymptotically valid" under much weaker assumptions. The properties of the proposed tests are examined in simulation experiments; in general, they outperform the usual asymptotic inference methods in terms of both reliability and power. Finally, the suggested techniques are applied to a model of Tobin's q and to a model of academic performance.
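The core idea behind the instrument substitution approach can be sketched as an Anderson-Rubin style test: under the null hypothesis beta = beta0, the instruments should have no explanatory power for y - X*beta0. The sketch below is a generic AR test on simulated data, not the paper's full procedure:

```python
# A minimal Anderson-Rubin style test sketch; data are simulated for
# illustration only.
import numpy as np
import statsmodels.api as sm

def anderson_rubin_pvalue(y, X, Z, beta0):
    """F-test of H0: beta = beta0 that remains valid regardless of
    instrument strength (Gaussian errors, exogenous instruments)."""
    residual = y - X @ beta0
    fit = sm.OLS(residual, sm.add_constant(Z)).fit()
    # Joint significance of all instrument coefficients.
    hypothesis = ", ".join(f"x{i + 1} = 0" for i in range(Z.shape[1]))
    return float(fit.f_test(hypothesis).pvalue)

rng = np.random.default_rng(0)
n = 200
Z = rng.normal(size=(n, 2))                                 # instruments
X = Z @ np.array([[0.8], [0.5]]) + rng.normal(size=(n, 1))  # endogenous regressor
y = X @ np.array([1.0]) + rng.normal(size=n)                # true beta = 1
print(anderson_rubin_pvalue(y, X, Z, beta0=np.array([1.0])))
```

Inverting this test, i.e. collecting all beta0 values that are not rejected, yields the finite-sample confidence sets described in the abstract.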

Relevance: 30.00%

Abstract:

Ferrospinels of nickel, cobalt and copper and their sulphated analogues were prepared by the room-temperature coprecipitation route to yield samples with high surface areas. The intrinsic acidity of the ferrites was found to decrease in the order cobalt > nickel > copper. Sulphation increased the number of weak and medium-strong acid sites, whereas the strong acid sites were left unaffected. Electron donor studies revealed that copper ferrite has both the highest proportion of strong sites and the lowest proportion of weak basic sites. All the ferrite samples proved to be good catalysts for the benzoylation of toluene with benzoyl chloride, copper and cobalt ferrites being much more active than nickel ferrite. The catalytic activity for benzoylation was not much influenced by sulphation, but it increased remarkably with the calcination temperature of the catalyst. Surface Lewis acid sites, provided by the octahedral cations on the spinel surface, are suggested to be responsible for the catalytic activity in the benzoylation reaction.

Relevance: 30.00%

Abstract:

In any discipline where uncertainty and variability are present, it is important to have principles which are accepted as inviolate and which should therefore drive statistical modelling, statistical analysis of data and any inferences from such an analysis. Although two such principles have existed for the last two decades, and a sensible, meaningful methodology for the statistical analysis of compositional data has been developed from them, the application of inappropriate and/or meaningless methods persists in many areas. This paper identifies at least ten common fallacies and confusions in compositional data analysis, with illustrative examples, and provides readers with the necessary, and hopefully sufficient, arguments to persuade the culprits why and how they should amend their ways.
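The methodology in question is log-ratio analysis in the Aitchison tradition; as a minimal illustration (the abstract itself names no formulas), the centred log-ratio (clr) transform maps compositions off the constrained simplex into real space, where standard statistical tools become meaningful:

```python
# A minimal centred log-ratio (clr) illustration for compositional data.
import numpy as np

def clr(composition):
    """Centred log-ratio transform of a composition (parts summing to
    a constant); the result is invariant to the closure constant."""
    x = np.asarray(composition, dtype=float)
    logx = np.log(x)
    return logx - logx.mean(axis=-1, keepdims=True)

# A 3-part composition in percent; whether it sums to 1 or 100 is
# irrelevant after the transform.
sample = [70.0, 20.0, 10.0]
print(clr(sample))  # components sum to ~0 by construction
```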