996 results for Experimental Problems
Resumo:
N = 1 designs involve repeated registrations of the behaviour of the same experimental unit; the measurements obtained are often few owing to time limitations, and they are also likely to be sequentially dependent. The analytical techniques needed to enhance statistical and clinical decision making have to deal with these problems. Different procedures for analysing data from single-case AB designs are discussed, presenting their main features and reviewing the results reported by previous studies. Randomization tests represent one of the statistical methods that seemed to perform well in terms of controlling false alarm rates. In the experimental part of the study a new simulation approach is used to test the performance of randomization tests, and the results suggest that the technique is not always robust against violation of the independence assumption. Moreover, sensitivity proved to be generally unacceptably low for series lengths of 30 and 40. Considering the evidence available, there does not seem to be an optimal technique for single-case data analysis.
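As a minimal illustration of the randomization-test logic discussed above, the following sketch assumes a design in which the intervention start point is randomized over all admissible points; the function names, the toy data and the minimum-phase length are illustrative, not taken from the study.

```python
def ab_randomization_test(series, actual_start, min_phase=3):
    """Two-sided randomization test for a single-case AB design.

    The statistic is the B-phase (post-intervention) mean minus the
    A-phase (baseline) mean.  Under the randomization scheme every
    admissible intervention point is equally likely, so the p-value is
    the proportion of admissible points whose statistic is at least as
    extreme as the observed one."""
    n = len(series)

    def stat(start):
        a, b = series[:start], series[start:]
        return sum(b) / len(b) - sum(a) / len(a)

    observed = abs(stat(actual_start))
    admissible = list(range(min_phase, n - min_phase + 1))
    extreme = sum(1 for s in admissible if abs(stat(s)) >= observed)
    return extreme / len(admissible)

# Toy series: 7 baseline and 6 treatment observations.
baseline = [3, 4, 3, 5, 4, 4, 3]
treatment = [7, 8, 6, 7, 9, 8]
p = ab_randomization_test(baseline + treatment, actual_start=len(baseline))
```

With only 8 admissible intervention points the smallest attainable p-value is 1/8, which is one concrete face of the low-sensitivity problem the abstract reports for short series.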
Resumo:
A new approach to teaching basic experimental organic chemistry is presented. Experimental work proceeds in parallel with the theoretical lectures, leading to an immediate application of the theoretical concepts transmitted therein. One day per week is dedicated exclusively to the organic laboratory. Reactions are proposed as problems to be solved; the student has to deduce the structure of the product on the basis of his observations, the analytical data and his mechanistic knowledge. 70 different experiments, divided into 7 thematic chapters, are presented. All experiments require the analysis and discussion of 1H and 13C NMR, IR and UV spectra. Additional questions about each reaction have to be answered by the student in his written report. Laboratory safety is guaranteed by the exclusion or substitution of hazardous and toxic reagents. Microscale preparations are adopted in most cases to lower the cost of materials and the amount of waste. Recycling of many reaction products as starting materials in other experiments reduces the need for commercial reagents and allows the execution of longer reaction sequences. Only inexpensive standard laboratory equipment and simple glassware are required. All experiments include instructions for the safe treatment or disposal of chemical waste.
Resumo:
This work focuses on teaching through problems as a methodological strategy within the system of chemistry learning situations. The philosophical and epistemological bases of our perspective are the works developed by M. Majmutov and M.M. Llantada in the field of science didactics and in the socio-historical context of the school, where the fundamental categories that structure teaching through problems are discussed: the problem, the problematic task and the problematic question, as main orientations in the process of construction of knowledge by the students.
Resumo:
Mathematical models often contain parameters that need to be calibrated from measured data. The emergence of efficient Markov Chain Monte Carlo (MCMC) methods has made the Bayesian approach a standard tool in quantifying the uncertainty in the parameters. With MCMC, the parameter estimation problem can be solved in a fully statistical manner, and the whole distribution of the parameters can be explored, instead of obtaining point estimates and using, e.g., Gaussian approximations. In this thesis, MCMC methods are applied to parameter estimation problems in chemical reaction engineering, population ecology, and climate modeling. Motivated by the climate model experiments, the methods are developed further to make them more suitable for problems where the model is computationally intensive. After the parameters are estimated, one can start to use the model for various tasks. Two such tasks are studied in this thesis: optimal design of experiments, where the task is to design the next measurements so that the parameter uncertainty is minimized, and model-based optimization, where a model-based quantity, such as the product yield in a chemical reaction model, is optimized. In this thesis, novel ways to perform these tasks are developed, based on the output of MCMC parameter estimation. A separate topic is dynamical state estimation, where the task is to estimate the dynamically changing model state, instead of static parameters. For example, in numerical weather prediction, an estimate of the state of the atmosphere must constantly be updated based on the recently obtained measurements. In this thesis, a novel hybrid state estimation method is developed, which combines elements from deterministic and random sampling methods.
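The MCMC workflow the abstract describes can be sketched with a random-walk Metropolis sampler applied to a toy calibration problem; the thesis itself does not specify this example, so the model (an exponential decay rate k), the data, the noise level and all names below are illustrative assumptions.

```python
import math, random

def metropolis(log_post, theta0, step, n_iter=4000, seed=0):
    """Random-walk Metropolis sampler.

    log_post : function returning the unnormalised log-posterior.
    theta0   : starting parameter vector (list of floats).
    step     : proposal standard deviation per component."""
    rng = random.Random(seed)
    theta = list(theta0)
    lp = log_post(theta)
    chain = []
    for _ in range(n_iter):
        prop = [t + rng.gauss(0.0, step) for t in theta]
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio).
        if math.log(rng.random() + 1e-300) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(list(theta))
    return chain

# Toy calibration: estimate the rate k of y = exp(-k t) from noisy data.
data_t = [0.5, 1.0, 1.5, 2.0]
data_y = [0.62, 0.36, 0.22, 0.14]   # generated with k roughly 1

def log_post(theta):
    k = theta[0]
    if k <= 0:
        return -math.inf              # flat prior restricted to k > 0
    sse = sum((y - math.exp(-k * t)) ** 2 for t, y in zip(data_t, data_y))
    return -sse / (2 * 0.05 ** 2)     # Gaussian likelihood, sigma = 0.05

chain = metropolis(log_post, [0.5], step=0.1)
k_mean = sum(c[0] for c in chain) / len(chain)
```

Unlike a point estimate, the chain approximates the whole posterior of k, so credible intervals and the design/optimization tasks mentioned above can be computed directly from its samples.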
Warning system based on theoretical-experimental study of dispersion of soluble pollutants in rivers
Resumo:
Information about the transport and dispersion capacity of soluble pollutants in natural streams is important in the management of water resources, especially in planning preventive measures to minimize the problems caused by accidental or intentional discharges, both for public health and for the economic activities that depend on the use of water. Considering this importance, this study aimed to develop a warning system for rivers, based on experimental tracer techniques and on the analytical equations of one-dimensional transport of conservative soluble pollutants, to support decision-making in the management of water resources. The system, developed in the Java programming language with a MySQL database, can predict the travel time of pollutant clouds from a release point and graphically displays the temporal distribution of concentrations during the passage of the clouds at a particular location downstream from the launch point.
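The abstract does not reproduce its equations, but the standard analytical solution for an instantaneous release of a conservative pollutant in one-dimensional advection-dispersion is the kind of relation such a system evaluates; the sketch below uses that textbook solution with illustrative parameter values, not the study's calibrated ones.

```python
import math

def concentration(x, t, mass, area, velocity, dispersion):
    """Concentration (kg/m^3) at distance x (m) and time t (s) after an
    instantaneous release of `mass` (kg) into a river of cross-sectional
    `area` (m^2), mean `velocity` (m/s) and longitudinal `dispersion`
    coefficient (m^2/s): the classic 1-D advection-dispersion solution
    for a conservative pollutant."""
    if t <= 0:
        return 0.0
    spread = math.sqrt(4.0 * math.pi * dispersion * t)
    return (mass / (area * spread)) * math.exp(
        -(x - velocity * t) ** 2 / (4.0 * dispersion * t))

def travel_time(x, velocity):
    """Arrival time of the cloud's peak at station x; in this model the
    peak travels with the mean flow velocity."""
    return x / velocity

# Peak of the cloud reaches a station 5 km downstream after 5000 s at 1 m/s.
t_peak = travel_time(5000.0, 1.0)
c_peak = concentration(5000.0, t_peak, mass=100.0, area=20.0,
                       velocity=1.0, dispersion=30.0)
```

Evaluating `concentration` over a range of t at a fixed station yields exactly the temporal passage curve the warning system plots.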
Resumo:
This paper addresses the basic problems related to the determination of parameters that characterize the structural behavior of concretes using Fracture Mechanics concepts. Experimental procedures and results are discussed.
Resumo:
One of the problems that slows the development of off-line programming is the low static and dynamic positioning accuracy of robots. Robot calibration improves the positioning accuracy and can also be used as a diagnostic tool in robot production and maintenance. A large number of robot measurement systems are now available commercially, yet there is a dearth of systems that are portable, accurate and low-cost. In this work a measurement system that can fill this gap in local calibration is presented. The measurement system consists of a single CCD camera with a wide-angle lens mounted on the robot tool flange, and uses space resection models to measure the end-effector pose relative to a world coordinate system, taking radial distortions into account. Scale factors and the image center are obtained with innovative techniques that make use of a multiview approach. The target plate consists of a grid of white dots impressed on black photographic paper, mounted on the sides of a 90-degree angle plate. Results show that the achieved average accuracy varies from 0.2 mm to 0.4 mm at distances from the target of 600 mm to 1000 mm, respectively, with different camera orientations.
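The resection model itself is not given in the abstract; as a hypothetical sketch of the kind of camera model involved, the following combines a pinhole projection with a single radial distortion coefficient (a common simplification of the distortion models used in photogrammetric space resection). All names and numbers are illustrative.

```python
def project(point, f, cx, cy, k1):
    """Project a 3-D point given in the camera frame onto the image
    plane using a pinhole model plus one-term radial distortion.
    f is the focal length and (cx, cy) the image centre, in pixels."""
    X, Y, Z = point
    x = f * X / Z                  # ideal pinhole projection
    y = f * Y / Z
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2         # radial distortion factor
    return cx + factor * x, cy + factor * y

# A point on the optical axis always maps to the image centre ...
u0, v0 = project((0.0, 0.0, 1000.0), 800.0, 320.0, 240.0, -1e-7)
# ... while barrel distortion (k1 < 0) pulls off-axis points inward.
u_ideal, _ = project((100.0, 0.0, 1000.0), 800.0, 320.0, 240.0, 0.0)
u_dist, _ = project((100.0, 0.0, 1000.0), 800.0, 320.0, 240.0, -1e-7)
```

Space resection inverts this mapping: given image observations of known target-grid points, it solves for the pose (and, in self-calibrating variants, for f, the image centre and k1).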
Resumo:
Non-metallic implants made of bioresorbable or biostable synthetic polymers are attractive options in many surgical procedures, ranging from bioresorbable suture anchors in arthroscopic surgery to reconstructive skull implants made of biostable fiber-reinforced composites. Among other benefits, non-metallic implants produce less interference in imaging. Bioresorbable polymer implants may be truly multifunctional, serving as osteoconductive scaffolds and as matrices for the simultaneous delivery of bone enhancement agents. As a major advantage under loading conditions, the mechanical properties of biostable fiber-reinforced composites can be matched with those of bone. Unsolved problems of these biomaterials are related to the risk of staphylococcal biofilm infections and to the low osteoconductivity of contemporary bioresorbable composite implants. This thesis focused on the research and development of a multifunctional implant model with enhanced osteoconductivity and low susceptibility to infection. In addition, experimental models for the assessment, diagnostics and prophylaxis of biomaterial-related infections were established. The first experiment (Study I) established an in vitro method for the simultaneous evaluation of calcium phosphate and biofilm formation on bisphenol-A-glycidyldimethacrylate and triethyleneglycoldimethacrylate (BisGMA-TEGDMA) thermosets with different contents of bioactive glass 45S5. The second experiment (Study II) showed no significant difference in the osteointegration of nanostructured and microsized polylactide-co-glycolide/β-tricalcium phosphate (PLGA/β-TCP) composites in a minipig model. The third experiment (Study III) demonstrated that positron emission tomography (PET) imaging with the novel 68Ga-labelled 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) CD33-related sialic acid-binding immunoglobulin-like lectin 9 (Siglec-9) tracer was able to detect the inflammatory response to S. epidermidis and S. aureus peri-implant infections in an intraosseous polytetrafluoroethylene catheter model. In the fourth experiment (Study IV), BisGMA-TEGDMA thermosets coated with lactose-modified chitosan (Chitlac) and silver nanoparticles exhibited antibacterial activity against S. aureus and P. aeruginosa strains in an in vitro biofilm model and showed in vivo biocompatibility in a minipig model. In the last experiment (Study V), a selective androgen receptor modulator (SARM) released from a poly(lactide)-co-ε-caprolactone (PLCL) polymer matrix failed to produce a dose-dependent enhancement of peri-implant osteogenesis in a bone marrow ablation model.
Resumo:
One of the tantalising remaining problems in compositional data analysis lies in how to deal with data sets in which there are components which are essential zeros. By an essential zero we mean a component which is truly zero, not something recorded as zero simply because the experimental design or the measuring instrument has not been sufficiently sensitive to detect a trace of the part. Such essential zeros occur in many compositional situations, such as household budget patterns, time budgets, palaeontological zonation studies and ecological abundance studies. Devices such as nonzero replacement and amalgamation are almost invariably ad hoc and unsuccessful in such situations. From consideration of such examples it seems sensible to build up a model in two stages, the first determining where the zeros will occur and the second determining how the unit available is distributed among the non-zero parts. In this paper we suggest two such models, an independent binomial conditional logistic normal model and a hierarchical dependent binomial conditional logistic normal model. The compositional data in such modelling consist of an incidence matrix and a conditional compositional matrix. Interesting statistical problems arise, such as the question of estimability of parameters, the nature of the computational process for the estimation of both the incidence and compositional parameters caused by the complexity of the subcompositional structure, the formation of meaningful hypotheses, and the devising of suitable testing methodology within a lattice of such essential-zero compositional hypotheses. The methodology is illustrated by application to both simulated and real compositional data.
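The two-stage idea (first incidence, then distribution of the unit over the present parts) can be simulated in a few lines. The sketch below uses the simpler independent-binomial variant and a softmax of Gaussian scores as a stand-in for the conditional logistic normal draw; this is an illustration under assumed conventions, not the paper's exact parameterization.

```python
import math, random

def simulate_composition(p_present, mu, sigma, rng):
    """Draw one composition under a two-stage model for essential zeros.

    Stage 1: each part is present independently with probability
    p_present[i] (the binomial incidence model).
    Stage 2: the unit total is spread over the present parts via a
    logistic-normal-style draw (softmax of Gaussian scores)."""
    d = len(p_present)
    present = [rng.random() < p for p in p_present]
    if not any(present):                     # force at least one part
        present[rng.randrange(d)] = True
    scores = [rng.gauss(mu[i], sigma[i]) if present[i] else None
              for i in range(d)]
    z = max(s for s in scores if s is not None)   # for numerical stability
    weights = [math.exp(s - z) if s is not None else 0.0 for s in scores]
    total = sum(weights)
    return [w / total for w in weights]

rng = random.Random(1)
comp = simulate_composition([0.9, 0.5, 0.7], [0.0, 1.0, -1.0],
                            [0.5, 0.5, 0.5], rng)
```

Each draw yields a row of the incidence matrix (which entries are zero) together with a row of the conditional compositional matrix (the non-zero parts, summing to one).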
Resumo:
This investigation characterized the families of adolescents experimenting with psychoactive substance (PAS) consumption. Materials and methods: For this purpose, a qualitative study with a hermeneutical emphasis was conducted among a population of adolescents between the ages of 12 and 17 who had experimented with PAS. Semi-structured interviews were conducted with patients and their families employing a flexible protocol of 14 categories. Results: The findings showed low levels of family cohesion and sense of family identity, inconsistency between the educational patterns followed by the parents, as well as deficient parental support. Similarly, the findings indicate significant peer influence during the first stages of consumption of illegal substances. In this regard, the findings suggest that, more than providing physical satisfaction, consumption represents a way of acquiring prestige and social position while granting a sensation of psychological, emotional and social well-being. Conclusions: Parental influence was also found to be considerable regarding the consumption of legal PAS, such as alcohol and tobacco. The study identified a high-priority need to promote and incorporate communication and conflict-resolution skills within family dynamics by means of prevention and monitoring programs. Those skills and programs would be aimed at providing parents of adolescents experimenting with PAS consumption with new educational tools to guide new child-rearing practices so as to respond appropriately to the problems identified in this study.
Resumo:
Previous research has shown that there is often clear inertia in individual decision making, that is, a tendency for decision makers to choose a status quo option. I conduct a laboratory experiment to investigate two potential determinants of inertia in uncertain environments: (i) regret aversion and (ii) ambiguity-driven indecisiveness. I use a between-subjects design with varying conditions to identify the effects of these two mechanisms on choice behavior. In each condition, participants choose between two simple real gambles, one of which is the status quo option. I find that inertia is quite large and that both mechanisms are equally important.
Resumo:
Abstract based on that of the publication
Resumo:
1. Suction sampling is a popular method for the collection of quantitative data on grassland invertebrate populations, although there have been no detailed studies of the effectiveness of the method. 2. We investigate the effect of effort (duration and number of suction samples) and sward height on the efficiency of suction sampling of grassland beetle, true bug, planthopper and spider populations. We also compare suction sampling with an absolute sampling method based on the destructive removal of turfs. 3. Sampling for durations of 16 seconds was sufficient to collect 90% of all individuals and species of grassland beetles, with less time required for the true bugs, spiders and planthoppers. The number of samples required to collect 90% of the species was more variable, although in general 55 sub-samples was sufficient for all groups except the true bugs. Increasing sward height had a negative effect on the capture efficiency of suction sampling. 4. The assemblage structure of beetles, planthoppers and spiders was independent of the sampling method (suction or absolute) used. 5. Synthesis and applications. In contrast to other sampling methods used in grassland habitats (e.g. sweep netting or pitfall trapping), suction sampling is an effective quantitative tool for the measurement of invertebrate diversity and assemblage structure, provided sward height is included as a covariate. The effective sampling of beetles, true bugs, planthoppers and spiders altogether requires a minimum sampling effort of 110 sub-samples, each of 16 seconds' duration. Such sampling intensities can be adjusted depending on the taxa sampled, and we provide information to minimize sampling problems associated with this versatile technique. Suction sampling should remain an important component in the toolbox of experimental techniques used during both experimental and management sampling regimes within agroecosystems, grasslands and other low-lying vegetation types.
Resumo:
The assumption that negligible work is involved in the formation of new surfaces in the machining of ductile metals is re-examined in the light of both current Finite Element Method (FEM) simulations of cutting and modern ductile fracture mechanics. The work associated with separation criteria in FEM models is shown to be in the kJ/m2 range rather than the few J/m2 of the surface energy (surface tension) employed by Shaw in his pioneering study of 1954, following which consideration of surface work has been omitted from analyses of metal cutting. The much greater values of specific surface work are not surprising in terms of ductile fracture mechanics, where kJ/m2 values of fracture toughness are typical of the ductile metals involved in machining studies. This paper shows that when even the simple Ernst–Merchant analysis is generalised to include significant surface work, many of the experimental observations for which traditional 'plasticity and friction only' analyses seem to have no quantitative explanation are given meaning. In particular, the primary shear plane angle φ becomes material-dependent. The experimental increase of φ up to a saturated level as the uncut chip thickness is increased is predicted. The positive intercepts found in plots of cutting force vs. depth of cut, and in plots of force resolved along the primary shear plane vs. area of shear plane, are shown to be measures of the specific surface work. It is demonstrated that neglect of these intercepts in cutting analyses is the reason why anomalously high values of shear yield stress are derived at those very small uncut chip thicknesses at which the so-called size effect becomes evident. The material toughness/strength ratio, combined with the depth of cut to form a non-dimensional parameter, is shown to control ductile cutting mechanics. The toughness/strength ratio of a given material will change with rate, temperature, and thermomechanical treatment, and the influence of such changes, together with changes in depth of cut, on the character of machining is discussed. Strength or hardness alone is insufficient to describe machining. The failure of the Ernst–Merchant theory seems less to do with problems of uniqueness and the validity of minimum work, and more to do with the problem not being properly posed. The new analysis compares favourably and consistently with the wide body of experimental results available in the literature. Why considerable progress in the understanding of metal cutting has been achieved without reference to significant surface work is also discussed.
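In symbols, and hedged because the abstract does not reproduce its equations, the contrast it describes can be sketched as follows: the classical Ernst–Merchant analysis minimizes cutting work with respect to the shear plane angle and yields a material-independent relation, while admitting a specific surface work R adds a force term that survives as the positive intercept at vanishing depth of cut (symbol conventions assumed).

```latex
% Classical Ernst--Merchant shear-angle relation (no surface work):
\phi = \frac{\pi}{4} - \frac{\beta - \alpha}{2},
\qquad \text{$\alpha$ = rake angle, $\beta$ = friction angle.}

% With a specific surface work $R$ (J/m$^2$), the cutting force for
% width $w$ and uncut chip thickness $t_0$ gains a toughness term:
F_c \;\propto\; \tau_y\, w\, t_0\, \gamma(\phi) \;+\; R\, w,

% so plots of $F_c$ vs. $t_0$ have a positive intercept proportional
% to $Rw$, and the non-dimensional group controlling ductile cutting is
Z = \frac{R}{\tau_y\, t_0}.
```

On this reading, φ becomes material-dependent through Z, and neglecting the Rw intercept inflates the apparent shear yield stress at small t_0, which is the size-effect anomaly the abstract mentions.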
Resumo:
Chain is a commonly used component in offshore moorings where its ruggedness and corrosion resistance make it an attractive choice. Another attractive property is that a straight chain is inherently torque balanced. Having said this, if a chain is loaded in a twisted condition, or twisted when under load, it exhibits highly non-linear torsional behaviour. The consequences of this behaviour can cause handling difficulties or may compromise the integrity of the mooring system, and care must be taken to avoid problems for both the chain and any components to which it is connected. Even with knowledge of the potential problems, there will always be occasions where, despite the utmost care, twist is unavoidable. Thus it is important for the engineer to be able to determine the effects. A frictionless theory has been developed in Part 1 of the paper that may be used to predict the resultant torques and movement or 'lift' in the links as non-dimensional functions of the angle of twist. The present part of the paper describes a series of experiments undertaken on both studless and stud-link chain to allow comparison of this theoretical model with experimental data. Results are presented for the torsional response and link lift for 'constant twist' and 'constant load' type tests on chains of three different link sizes.