934 results for Experimental Methods.
Abstract:
This thesis presents the results of experimental investigations into the changes in the optical properties of metallic thin films caused by heating. The measured parameters are the reflectivity, the refractive indices and the ellipsometric quantities Ψ and Δ. The materials studied are metals such as silver, aluminium and copper. Using the optical method, the interdiffusion taking place in multilayer films of aluminium and silver has also been studied. Special attention has been paid to revealing the mechanisms of hillock growth and surface roughening caused by heating, and their relation to the stress in the film.
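The ellipsometric quantities measured here are conventionally written Ψ and Δ. As a reminder of the standard textbook definition, not a result of the thesis, they parameterize the ratio of the complex Fresnel reflection coefficients for p- and s-polarized light:

```latex
\rho \;=\; \frac{r_p}{r_s} \;=\; \tan\Psi \, e^{i\Delta}
```

Here Ψ tracks the amplitude ratio and Δ the relative phase shift on reflection; heating-induced changes in the refractive index or surface roughness of a film alter both quantities.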
Abstract:
This paper presents the results of a field experiment conducted in Kerala, South India, to test the effectiveness of coir geotextiles for embankment protection. The results reveal that treatment with geotextile in combination with grass is an effective eco-hydrological measure to protect steep slopes from erosion. In the context of sustainable watershed management, coir is a cheap and locally available material that can be used to strengthen traditional earthen bunds or to protect the banks of village ponds from erosion. Particularly in developing countries, where coir is abundantly available and textiles can be produced by small-scale industry, this is an attractive alternative to conventional methods. The paper analyses the performance of the different treatments with regard to soil moisture content, protection against erosion and biomass production.
Abstract:
Since dwarf napiergrass (Pennisetum purpureum Schumach.) must be propagated vegetatively owing to its lack of viable seeds, root splitting and stem cuttings are generally used to obtain true-to-type plant populations. These conventional methods are laborious and costly, and are the greatest barriers to expanding the cultivation area of this crop. The objectives of this research were to develop nursery production of dwarf napiergrass in cell trays and to compare the efficiency of mechanical versus manual methods for cell-tray propagation and field transplanting. After defoliation of herbage either by sickle (manually) or by hand-mowing machine, every potential aerial tiller bud was cut down to a single bud for transplanting into cell trays as a stem cutting, and the trays were placed in a glasshouse over winter. The following June, nursery plants were trimmed to a 25-cm length and transplanted into an experimental field (sandy soil) at 20,000 plants ha^(−1), either by shovel (manually) or by Welsh onion planter. Labour time was recorded for each process. Manual defoliation required 44% more labour time for preparing the stem cuttings (0.73 person-min. stem-cutting^(−1)) than using the hand-mowing machine (0.51 person-min. stem-cutting^(−1)). In contrast, transplanting by machine required an extra 0.30 person-min. m^(−2) (14%) compared to manual transplanting, possibly due to the limited plot size for machinery operation. The transplanting method had no significant effect on plant establishment or growth, except for herbage yield 110 days after planting. Defoliation of herbage by machinery, production using a cell-tray nursery and mechanical transplanting reduced the labour intensity of dwarf napiergrass propagation.
Abstract:
Market prices are well known to efficiently collect and aggregate diverse information regarding the value of commodities and assets. The role of markets has been particularly suitable for pricing financial securities. This article provides an alternative application of the pricing mechanism to marketing research: using pseudo-securities markets to measure preferences over new product concepts. Surveys, focus groups, concept tests and conjoint studies are the methods traditionally used to measure individual and aggregate preferences. Unfortunately, these methods can be biased, costly and time-consuming to conduct. The present research is motivated by the desire to measure preferences efficiently and to predict new product success more accurately, based on the efficiency and incentive-compatibility of security trading markets. The article describes a novel market research method, provides insight into why the method should work, and compares the results of several trading experiments against other methodologies such as concept testing and conjoint analysis.
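One concrete way such pseudo-securities markets are often operated, though not necessarily the mechanism used in this article, is Hanson's logarithmic market scoring rule (LMSR): an automated market maker whose quoted prices behave like aggregate probability estimates. A minimal sketch, in which the outcome labels and the liquidity parameter `b` are illustrative assumptions:

```python
import math

def lmsr_prices(q, b=10.0):
    """Instantaneous prices (aggregate probability estimates) under the LMSR,
    given the outstanding share vector q."""
    z = [math.exp(qi / b) for qi in q]
    s = sum(z)
    return [zi / s for zi in z]

def lmsr_cost(q, b=10.0):
    """LMSR cost function; a trader pays C(q_after) - C(q_before)."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

# Hypothetical two-outcome concept market: "concept A succeeds" vs "concept B".
q0 = [0.0, 0.0]
p0 = lmsr_prices(q0)                              # uninformed start: equal prices
charge = lmsr_cost([20.0, 0.0]) - lmsr_cost(q0)   # cost of buying 20 A-shares
p1 = lmsr_prices([20.0, 0.0])                     # price of A rises after the buy
```

Because prices always sum to one and rise as traders buy an outcome, the final prices can be read directly as the market's consensus preference estimate.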
Abstract:
This investigation characterized the families of adolescents experimenting with psychoactive substance (PAS) consumption. Materials and methods: For this purpose, a qualitative study with a hermeneutical emphasis was conducted among a population of adolescents between the ages of 12 and 17 who had experimented with PAS. Semi-structured interviews were conducted with patients and their families using a flexible protocol of 14 categories. Results: The findings showed low levels of family cohesion and sense of family identity, inconsistency in the educational patterns followed by the parents, and deficient parental support. The findings also indicate significant peer influence during the first stages of consumption of illegal substances. In this regard, they suggest that, more than providing physical satisfaction, consumption represents a way of acquiring prestige and social position while granting a sensation of psychological, emotional and social well-being. Conclusions: Parental influence was also found to be considerable with regard to the consumption of legal PAS, such as alcohol and tobacco. The study identified, as a high-priority need, the promotion and incorporation of communication and conflict-resolution skills within family dynamics by means of prevention and monitoring programs. Those skills and programs would aim to provide the parents of adolescents experimenting with PAS consumption with new educational tools and child-rearing guidelines, so as to respond appropriately to the problems identified in this study.
Abstract:
The [2+2+2] cycloaddition reaction involves the formation of three carbon-carbon bonds in a single step using alkynes, alkenes, nitriles, carbonyls and other unsaturated reagents as reactants. It is one of the most elegant methods for the construction of polycyclic aromatic and heteroaromatic compounds, which have important academic and industrial uses. The thesis is divided into ten chapters, including six related publications. The first study, based on Wilkinson's catalyst, RhCl(PPh3)3, compares the reaction mechanism of the [2+2+2] cycloaddition of acetylene with that obtained for the model complex RhCl(PH3)3. In an attempt to reduce the computational cost of DFT studies, this research project aimed to substitute PH3 for the PPh3 ligands, even though the electronic and steric effects produced by PPh3 ligands differ significantly from those of PH3. In this first study, detailed theoretical calculations were performed to determine the reaction mechanism for the two complexes. Although some differences were detected, it was found that modelling PPh3 by PH3 in the catalyst reduces the computational cost significantly while providing qualitatively acceptable results. Taking these results into account, the model of Wilkinson's catalyst, RhCl(PH3)3, was applied to study different [2+2+2] cycloaddition reactions with unsaturated systems carried out in the laboratory. Our research group found that totally closed systems, specifically 15- and 25-membered azamacrocycles, can afford benzenic compounds, whereas the 20-membered azamacrocycle (20-MAA) was inactive with Wilkinson's catalyst.
In this study, theoretical calculations allowed us to determine the origin of the different reactivity of 20-MAA: the activation barrier for the oxidative addition of the two alkynes is higher than those obtained for the 15- and 25-membered macrocycles. This barrier was attributed primarily to the interaction energy, which corresponds to the energy released when the two deformed reagents interact in the transition state. The main factor explaining the different reactivity observed was that 20-MAA has a more stable and delocalized HOMO in the oxidative addition step. Moreover, we observed that the formation of a strained ten-membered ring during the cycloaddition of 20-MAA involves significant steric hindrance. Furthermore, Chapter 5 presents an electrochemical study carried out in collaboration with Prof. Anny Jutand (Paris). This work allowed us to study the main steps of the catalytic cycle of the [2+2+2] cycloaddition reaction between diynes and a monoalkyne. The first kinetic data were obtained for the [2+2+2] cycloaddition process catalyzed by Wilkinson's catalyst, showing that the rate-determining step of the reaction can change depending on the structure of the starting reagents. In the case of the [2+2+2] cycloaddition reaction involving two alkynes and one alkene in the same molecule (an enediyne), it is well known that the oxidative coupling may occur either between the two alkynes, giving the corresponding metallacyclopentadiene, or between one alkyne and the alkene, affording the metallacyclopentene complex. The Wilkinson's model was used in DFT calculations to analyze the different factors that may influence the reaction mechanism. It was observed that cyclic enediynes always prefer the oxidative coupling between the two alkyne moieties, while acyclic cases show different preferences depending on the linker and the substituents on the alkynes.
Moreover, the Wilkinson's model was used to explain the experimental results of Chapter 7, where the [2+2+2] cycloaddition reaction of enediynes is studied while varying the position of the double bond in the starting reagent. It was observed that yne-ene-yne enediynes preferred the standard [2+2+2] cycloaddition reaction, while yne-yne-ene enediynes underwent β-hydride elimination followed by reductive elimination at the Wilkinson's catalyst, giving cyclohexadiene compounds that are isomers of those that would be obtained through the standard [2+2+2] cycloaddition. Finally, the last chapter of this thesis uses DFT calculations to determine the reaction mechanism when the macrocycles are treated with transition metals that are inactive towards the [2+2+2] cycloaddition reaction but are thermally active, leading to new polycyclic compounds. Thus, a domino process combining an ene reaction and a Diels-Alder cycloaddition was described.
Abstract:
1. Suction sampling is a popular method for the collection of quantitative data on grassland invertebrate populations, although there have been no detailed studies of the effectiveness of the method. 2. We investigate the effect of effort (duration and number of suction samples) and sward height on the efficiency of suction sampling of grassland beetle, true bug, planthopper and spider populations. We also compare suction sampling with an absolute sampling method based on the destructive removal of turfs. 3. Sampling for a duration of 16 seconds was sufficient to collect 90% of all individuals and species of grassland beetles, with less time required for the true bugs, spiders and planthoppers. The number of samples required to collect 90% of the species was more variable, although in general 55 sub-samples were sufficient for all groups except the true bugs. Increasing sward height had a negative effect on the capture efficiency of suction sampling. 4. The assemblage structure of beetles, planthoppers and spiders was independent of the sampling method (suction or absolute) used. 5. Synthesis and applications. In contrast to other sampling methods used in grassland habitats (e.g. sweep netting or pitfall trapping), suction sampling is an effective quantitative tool for the measurement of invertebrate diversity and assemblage structure, provided sward height is included as a covariate. The effective sampling of beetles, true bugs, planthoppers and spiders together requires a minimum sampling effort of 110 sub-samples of 16 seconds duration. Such sampling intensities can be adjusted depending on the taxa sampled, and we provide information to minimize sampling problems associated with this versatile technique. Suction sampling should remain an important component in the toolbox of techniques used in both experimental and management sampling regimes within agroecosystems, grasslands and other low-lying vegetation types.
Abstract:
In areas such as drug development, clinical diagnosis and biotechnology research, acquiring details of the kinetic parameters of enzymes is crucial. The correct design of an experiment is critical to collecting data suitable for analysis, modelling and deriving the correct information. As classical design methods are not targeted at the more complex kinetics now frequently studied, attention is needed to estimate the parameters of such models with low variance. We demonstrate that a Bayesian approach (the use of prior knowledge) can produce major gains, quantifiable in terms of information, productivity and accuracy of each experiment. Developing the use of Bayesian utility functions, we have applied a systematic method to identify the optimum experimental designs for a number of kinetic model data sets. This has enabled the identification of trends between kinetic model types and sets of design rules, and led to the key conclusion that such designs should be based on some prior knowledge of K_M and/or the kinetic model. We suggest an optimal and iterative method for selecting features of the design such as the substrate range, the number of measurements and the choice of intermediate points. The final design collects data suitable for accurate modelling and analysis and minimises the error in the estimated parameters. (C) 2003 Elsevier Science B.V. All rights reserved.
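The role of prior knowledge of K_M can be made concrete with a small sketch. For the Michaelis-Menten model v = V_max·S/(K_M + S), a locally D-optimal design maximizes the determinant of the Fisher information built from the model's parameter sensitivities, and the optimal substrate levels depend on the assumed K_M. The brute-force grid search below is an illustration under assumed values (substrate range, prior guesses and the two-point design are not taken from the paper):

```python
import itertools
import math

def mm_gradient(S, Vmax, Km):
    """Sensitivities of v = Vmax*S/(Km+S) with respect to (Vmax, Km)."""
    return (S / (Km + S), -Vmax * S / (Km + S) ** 2)

def log_d_criterion(design, Vmax, Km):
    """log-determinant of the 2x2 Fisher information for a set of substrate levels."""
    a = b = c = 0.0
    for S in design:
        g1, g2 = mm_gradient(S, Vmax, Km)
        a += g1 * g1
        b += g1 * g2
        c += g2 * g2
    det = a * c - b * b
    return -math.inf if det <= 0 else math.log(det)

# Grid search over all two-point designs on [0.1, 10], given a prior guess Km = 1.
grid = [k / 10 for k in range(1, 101)]
best = max(itertools.combinations(grid, 2),
           key=lambda d: log_d_criterion(d, Vmax=1.0, Km=1.0))
```

With these assumptions the search places one point at the top of the substrate range and the other near K_M, which is why a rough prior estimate of K_M is needed before the experiment is run.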
Abstract:
The purpose of this paper is to present two multi-criteria decision-making models, an Analytic Hierarchy Process (AHP) model and an Analytic Network Process (ANP) model, for the assessment of deconstruction plans, and to compare the two models through an experimental case study. Deconstruction planning is under pressure to reduce operation costs, adverse environmental impacts and duration, while improving productivity and safety in accordance with structure characteristics, site conditions and past experience. To achieve these targets in deconstruction projects, there is a pressing need for a formal procedure that helps contractors select the most appropriate deconstruction plan. Because a number of factors influence the selection of deconstruction techniques, engineers need effective tools to conduct the selection process. In this regard, multi-criteria decision-making methods such as the AHP have been adopted to support deconstruction technique selection in previous research, in which it has been shown that the AHP method can help decision-makers make informed decisions on deconstruction technique selection based on a sound technical framework. In this paper, the authors present the application and comparison of two decision-making models, the AHP model and the ANP model, for deconstruction plan assessment. The paper concludes that both the AHP and the ANP are viable and capable tools for deconstruction plan assessment under the same set of evaluation criteria. However, although the ANP can measure relationships among selection criteria and their sub-criteria, which are normally ignored in the AHP, the authors also indicate that whether the ANP model provides a more accurate result should be examined in further research.
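The core computation behind an AHP assessment is extracting a priority vector from a pairwise comparison matrix, conventionally via its principal eigenvector. A minimal sketch, in which the criteria and judgment values are hypothetical and not taken from the paper's case study:

```python
def ahp_weights(A, iters=100):
    """Priority weights as the normalized principal eigenvector of a
    pairwise comparison matrix, computed by power iteration."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w

# Hypothetical criteria for ranking deconstruction plans (Saaty's 1-9 scale):
# cost vs environmental impact vs duration; A[i][j] = importance of i over j.
A = [[1.0,     3.0,     5.0],
     [1.0 / 3, 1.0,     3.0],
     [1.0 / 5, 1.0 / 3, 1.0]]
w = ahp_weights(A)   # cost receives the largest weight with these judgments
```

Each plan is then scored against every criterion the same way, and the criterion weights combine the scores into an overall ranking; the ANP generalizes this by allowing dependence loops among criteria.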
Abstract:
A novel sparse kernel density estimator is derived based on a regression approach, which selects a very small subset of significant kernels by means of the D-optimality experimental design criterion using an orthogonal forward selection procedure. The weights of the resulting sparse kernel model are calculated using the multiplicative nonnegative quadratic programming algorithm. The proposed method is computationally attractive, in comparison with many existing kernel density estimation algorithms. Our numerical results also show that the proposed method compares favourably with other existing methods, in terms of both test accuracy and model sparsity, for constructing kernel density estimates.
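The multiplicative nonnegative quadratic programming step can be illustrated on a toy problem: for min_w ½wᵀBw − cᵀw with w ≥ 0 and nonnegative B and c, the multiplicative update w_i ← w_i·c_i/(Bw)_i (in the style of Sha, Saul and Lee) converges to a point satisfying the KKT conditions. A sketch, where the tiny B and c are illustrative; in the density estimator, B would be a kernel Gram-type matrix and the weights would additionally be normalized to sum to one:

```python
def mnqp(B, c, iters=200):
    """Multiplicative updates for min_w 0.5*w'Bw - c'w subject to w >= 0,
    assuming the entries of B and c are nonnegative (as for Gram-type matrices).
    At a fixed point, wherever w_i > 0 we have (Bw)_i = c_i."""
    n = len(c)
    w = [1.0] * n
    for _ in range(iters):
        Bw = [sum(B[i][j] * w[j] for j in range(n)) for i in range(n)]
        w = [w[i] * c[i] / max(Bw[i], 1e-12) for i in range(n)]
    return w

# Toy problem with known unconstrained optimum w* = B^{-1} c = [1/3, 1/3],
# which is already nonnegative, so the constrained solution coincides with it.
B = [[2.0, 1.0], [1.0, 2.0]]
c = [1.0, 1.0]
w = mnqp(B, c)
```

The update never produces negative weights, which is what makes it attractive for density estimation, where kernel weights must stay nonnegative.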
Abstract:
In this correspondence, new robust nonlinear model construction algorithms for a large class of linear-in-the-parameters models are introduced to enhance model robustness via combined parameter regularization and new robust structural selection criteria. In parallel with parameter regularization, we use two classes of robust model selection criteria, based either on experimental design criteria that optimize model adequacy, or on the predicted residual sums of squares (PRESS) statistic that optimizes model generalization capability. Three robust identification algorithms are introduced: the combined A-optimality with regularized orthogonal least squares algorithm, the combined D-optimality with regularized orthogonal least squares algorithm, and the combined PRESS statistic with regularized orthogonal least squares algorithm. A common characteristic of these algorithms is that the inherent computational efficiency associated with the orthogonalization scheme in orthogonal least squares or regularized orthogonal least squares is retained, so that the new algorithms are computationally efficient. Numerical examples are included to demonstrate the effectiveness of the algorithms.
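The PRESS statistic mentioned above can be computed without refitting the model n times: for a linear-in-the-parameters least-squares model, the leave-one-out residual has the closed form e_i/(1 − h_ii), where h_ii is the leverage from the hat matrix. A self-contained sketch for a two-parameter model on synthetic data (the data and the straight-line model are illustrative, not from the paper):

```python
def press_2param(X, y):
    """PRESS statistic for a two-parameter least-squares model y ~ X @ theta,
    using the closed-form leave-one-out residual e_i / (1 - h_ii) instead of
    refitting the model n times (h_ii is the leverage from the hat matrix)."""
    n = len(X)
    # Normal equations, kept to 2x2 so the inverse is explicit.
    g11 = sum(x[0] * x[0] for x in X)
    g12 = sum(x[0] * x[1] for x in X)
    g22 = sum(x[1] * x[1] for x in X)
    b1 = sum(X[k][0] * y[k] for k in range(n))
    b2 = sum(X[k][1] * y[k] for k in range(n))
    det = g11 * g22 - g12 * g12
    i11, i12, i22 = g22 / det, -g12 / det, g11 / det
    t1 = i11 * b1 + i12 * b2          # fitted parameters
    t2 = i12 * b1 + i22 * b2
    press = 0.0
    for k in range(n):
        x0, x1 = X[k]
        h = x0 * (i11 * x0 + i12 * x1) + x1 * (i12 * x0 + i22 * x1)  # leverage
        e = y[k] - (t1 * x0 + t2 * x1)                               # residual
        press += (e / (1.0 - h)) ** 2
    return press

# Straight-line fit y = a + b*x on slightly noisy synthetic data.
X = [[1.0, float(i)] for i in range(6)]
y = [0.1, 1.9, 4.2, 5.8, 8.1, 9.9]
p = press_2param(X, y)
```

Because every leave-one-out residual is at least as large as the ordinary residual, PRESS penalizes models that merely interpolate, which is why it serves as a generalization-oriented selection criterion.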
Experimental comparison of the comprehensibility of a Z specification and its implementation in Java
Abstract:
Comprehensibility is often raised as a problem with formal notations, yet formal methods practitioners dispute this. In a survey, one interviewee said 'formal specifications are no more difficult to understand than code'. Measurement of comprehension is necessarily comparative and a useful comparison for a specification is against its implementation. Practitioners have an intuitive feel for the comprehension of code. A quantified comparison will transfer this feeling to formal specifications. We performed an experiment to compare the comprehension of a Z specification with that of its implementation in Java. The results indicate there is little difference in comprehensibility between the two. (C) 2004 Elsevier B.V. All rights reserved.
Abstract:
Objectives. Theoretical modeling and experimental studies suggest that functional electrical stimulation (FES) can improve trunk balance in spinal-cord-injured subjects. This can have a positive impact on daily life, increasing the volume of the bimanual workspace and improving sitting posture and wheelchair propulsion. A closed-loop controller for the stimulation is desirable, as it can potentially decrease muscle fatigue and offer better disturbance rejection. This paper proposes a biomechanical model of the human trunk, and a procedure for its identification, to be used in the future development of FES controllers. The advantage over previous models resides in the simplicity of the proposed solution, which makes it possible to identify the model just before a stimulation session (taking into account the variability of the muscle response to FES). Materials and Methods. The structure of the model is based on previous research on FES and muscle physiology. Some details could not be inferred from previous studies and were determined from experimental data. Experiments with a paraplegic volunteer were conducted to measure the moments exerted by the trunk's passive tissues and artificially stimulated muscles. Data for model identification and validation were also collected. Results. Using the proposed structure and identification procedure, the model could adequately reproduce the moments exerted during the experiments. The study reveals that the stimulated trunk extensors can exert maximal moment when the trunk is in the upright position. In contrast, previous studies show that able-bodied subjects can exert maximal trunk extension when flexed forward. Conclusions. The proposed model and identification procedure are a successful first step toward the development of a model-based controller for trunk FES. The model also gives information on the trunk in unique conditions not normally observable in able-bodied subjects (i.e., subject only to extensor muscle contraction).
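As an illustration of the kind of identification procedure described, though not the authors' actual model, passive trunk moments are sometimes approximated by a linear stiffness-damping law M = k·θ + b·θ̇, whose parameters can be identified by least squares from recorded angle, velocity and moment data. A sketch on synthetic data, where the model form, parameter values and units are assumptions:

```python
import math
import random

def identify_passive_trunk(theta, omega, M):
    """Least-squares fit of a linear stiffness-damping model M = k*theta + b*omega
    via the 2x2 normal equations."""
    s_tt = sum(t * t for t in theta)
    s_to = sum(t * o for t, o in zip(theta, omega))
    s_oo = sum(o * o for o in omega)
    s_tM = sum(t * m for t, m in zip(theta, M))
    s_oM = sum(o * m for o, m in zip(omega, M))
    det = s_tt * s_oo - s_to * s_to
    k = (s_oo * s_tM - s_to * s_oM) / det
    b = (s_tt * s_oM - s_to * s_tM) / det
    return k, b

# Synthetic "recordings": slow trunk sway with known k = 40 Nm/rad, b = 5 Nms/rad.
random.seed(0)
theta = [0.4 * math.sin(0.1 * i) for i in range(200)]    # trunk angle (rad)
omega = [0.04 * math.cos(0.1 * i) for i in range(200)]   # angular velocity (rad/s)
M = [40.0 * t + 5.0 * o + random.gauss(0, 0.05) for t, o in zip(theta, omega)]
k, b = identify_passive_trunk(theta, omega, M)           # close to (40, 5)
```

The appeal of such a lightweight fit is the same as in the paper's motivation: it is quick enough to rerun just before each stimulation session, when the muscle response may have changed.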
Abstract:
This paper derives an efficient algorithm for constructing sparse kernel density (SKD) estimates. The algorithm first selects a very small subset of significant kernels using an orthogonal forward regression (OFR) procedure based on the D-optimality experimental design criterion. The weights of the resulting sparse kernel model are then calculated using a modified multiplicative nonnegative quadratic programming algorithm. Unlike most SKD estimators, the proposed D-optimality regression approach is an unsupervised construction algorithm and does not require an empirical desired response for the kernel selection task. The strength of the D-optimality OFR lies in the fact that the algorithm automatically selects the small subset of the most significant kernels associated with the largest eigenvalues of the kernel design matrix, which account for most of the energy of the kernel training data; this also guarantees the most accurate kernel weight estimates. The proposed method is also computationally attractive in comparison with many existing SKD construction algorithms. Extensive numerical investigation demonstrates the ability of this regression-based approach to efficiently construct a very sparse kernel density estimate with excellent test accuracy, and our results show that the proposed method compares favourably with other existing sparse methods, in terms of test accuracy, model sparsity and complexity, for constructing kernel density estimates.
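The D-optimality forward selection can be sketched as a greedy procedure: at each step, orthogonalize every candidate kernel column against the columns already chosen and pick the one with the largest remaining energy, since that maximizes the increase in the log-determinant of the design matrix. A minimal illustration, with synthetic candidate columns rather than kernels evaluated on real data:

```python
def d_opt_forward_select(cols, n_select):
    """Greedy D-optimality forward selection over candidate columns.
    At each step the candidate with the largest energy orthogonal to the
    already-selected columns is chosen, maximizing the log-det gain."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    selected, basis = [], []
    remaining = list(range(len(cols)))
    for _ in range(n_select):
        best_j, best_w, best_e = None, None, -1.0
        for j in remaining:
            w = cols[j][:]
            for q in basis:                     # Gram-Schmidt against chosen set
                coef = dot(w, q) / dot(q, q)
                w = [wi - coef * qi for wi, qi in zip(w, q)]
            e = dot(w, w)                       # energy left after projection
            if e > best_e:
                best_j, best_w, best_e = j, w, e
        selected.append(best_j)
        basis.append(best_w)
        remaining.remove(best_j)
    return selected

# Four synthetic candidate "kernel" columns; column 1 nearly duplicates column 0,
# so a good selector should skip it in favour of independent directions.
cols = [[1.0, 0.0, 0.0, 0.0],
        [0.99, 0.1, 0.0, 0.0],
        [0.0, 2.0, 0.0, 0.0],
        [0.0, 0.0, 0.5, 0.5]]
selected = d_opt_forward_select(cols, 2)
```

Note how the near-duplicate column is passed over: once its dominant direction is covered, its orthogonal energy collapses, which is exactly the redundancy-avoiding behaviour the D-optimality criterion provides.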