934 results for Experimental methods
Abstract:
This paper presents a study on the effectiveness of two forms of reinforced grout confining systems for hollow concrete block masonry. The systems considered are: (1) a layer of grout directly confining the unreinforced masonry, and (2) a layer of grout indirectly confining the unreinforced masonry through block shells. The study involves experimental testing and finite-element (FE) modeling of six diagonally loaded masonry panels containing the two confining systems. The failure mode, ultimate load, and load-deformation behaviors of the diagonally loaded panels were successfully simulated with the FE model. The in-plane shear strength and stiffness of the masonry thus determined are used to evaluate selected models of confined masonry shear, including the strut-and-tie model reported in the literature. The evaluated strut width is compared with the prediction of the FE model and then extended to rational prediction of the strength of confined masonry shear walls.
Abstract:
Big Datasets are endemic, but they are often notoriously difficult to analyse because of their size, heterogeneity, history and quality. The purpose of this paper is to open a discourse on the use of modern experimental design methods to analyse Big Data in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has wide generality and advantageous inferential and computational properties. In particular, the principled experimental design approach is shown to provide a flexible framework for analysis that, for certain classes of objectives and utility functions, delivers near equivalent answers compared with analyses of the full dataset under a controlled error rate. It can also provide a formalised method for iterative parameter estimation, model checking, identification of data gaps and evaluation of data quality. Finally, it has the potential to add value to other Big Data sampling algorithms, in particular divide-and-conquer strategies, by determining efficient sub-samples.
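The claim that a principled, design-based subsample can deliver near-equivalent answers to an analysis of the full dataset can be illustrated with a toy regression. The dataset, the straight-line model, and the endpoint-weighted subsampling rule below are hypothetical stand-ins chosen for illustration, not the paper's methods:

```python
import numpy as np

rng = np.random.default_rng(0)

# Full "big" dataset: simple linear model y = 2 + 3x + noise
n = 100_000
x = rng.uniform(0, 10, n)
y = 2.0 + 3.0 * x + rng.normal(0, 1, n)

def ols(xs, ys):
    """Ordinary least squares fit: returns [intercept, slope]."""
    X = np.column_stack([np.ones(len(xs)), xs])
    beta, *_ = np.linalg.lstsq(X, ys, rcond=None)
    return beta

beta_full = ols(x, y)

# Design-based subsample: take points at the extremes of x, where a
# classical design for a straight-line model places its support
# (D-optimality concentrates mass at the endpoints of the range).
k = 500
idx = np.argsort(x)
chosen = np.concatenate([idx[: k // 2], idx[-k // 2:]])
beta_sub = ols(x[chosen], y[chosen])
```

With 500 well-placed points out of 100,000, the subsample slope estimate lands very close to the full-data estimate, which is the "near equivalent answers" property the abstract describes.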
Abstract:
BACKGROUND Experimental learning, traditionally conducted in on-campus laboratory venues, is the cornerstone of science and engineering education. In order to ensure that engineering graduates are exposed to ‘real-world’ situations and attain the necessary professional skill-sets, as mandated by course accreditation bodies such as Engineers Australia, face-to-face laboratory experimentation with real equipment has been an integral component of traditional engineering education. The online delivery of engineering coursework endeavours to mimic this with remote and simulated laboratory experimentation. To satisfy student and accreditation requirements, the common practice has been to offer equivalent remote and/or simulated laboratory experiments in lieu of the ones delivered, face-to-face, on campus. The current implementations of both remote and simulated laboratories tend to be specified with a focus on technical characteristics instead of pedagogical requirements. This work attempts to redress this situation by developing a framework for the investigation of the suitability of different experimental educational environments to deliver quality teaching and learning. PURPOSE For the tertiary education sector involved with technical or scientific training, a research framework capable of assessing the affordances of laboratory venues is an important aid during the planning, designing and evaluating stages of face-to-face and online (or cyber) environments that facilitate student experimentation. Providing quality experimental learning venues has been identified as one of the distance-education providers’ greatest challenges. DESIGN/METHOD The investigation draws on the expertise of staff at three Australian universities: Swinburne University of Technology (SUT), Curtin University (Curtin) and Queensland University of Technology (QUT). 
The aim was to analyse video-recorded data in order to identify occurrences of kikan-shido (a Japanese term meaning ‘between desks instruction’) and over-the-shoulder learning and teaching (OTST/L) events, thereby ascertaining the pedagogical affordances of face-to-face laboratories. RESULTS These will be disseminated at a Master Class presentation at this conference. DISCUSSION Kikan-shido occurrences did reflect the affordances of the venue. Unlike other data-collection methods, video-recorded data and its analysis are repeatable: participant bias is minimised or even eradicated, and researcher bias is tempered by enabling re-coding by others. CONCLUSIONS The framework facilitates the identification of experiential face-to-face learning venue affordances. The investigation will continue with online venues.
Abstract:
The effects of tillage practices and methods of chemical application on atrazine and alachlor losses through run-off were evaluated for five treatments: conservation (untilled) and surface (US), disk and surface, plow and surface, disk and preplant-incorporated, and plow and preplant-incorporated. A rainfall simulator was used to create 63.5 mm/h of rainfall for 60 min and 127 mm/h for 15 min. Rainfall simulation occurred 24-36 h after chemical application. There was no significant difference in run-off volume among the treatments, but the untilled treatment significantly reduced erosion loss. The untilled treatments had the highest herbicide concentrations, the disk treatments were higher than the plow treatments, and the surface treatments showed higher concentrations than the incorporated treatments. The concentration of herbicides in the water decreased with time. Among the experimental sites, the one with sandy loam soil produced the greatest losses, both in run-off volume and in herbicide loss. The US treatments had the highest losses, while the herbicide-incorporation treatments had smaller losses through run-off because the residue cover was effective in preventing herbicide losses. Incorporation might be a favorable method of herbicide application for reducing herbicide losses by run-off.
Abstract:
Objective: To study the anisotropic mechanical properties of the porcine thoracic aorta. Methods: Twenty-one porcine thoracic aortas were collected and divided into three groups. The aortas were cut open along their axial direction and flattened into two-dimensional planes. Taking the length direction of each planar aorta (i.e., the axial direction of the aorta) as 0°, each planar aorta was cut counterclockwise into 8 samples with orientations of 30°, 45°, 60°, 90°, 120°, 135°, 150° and 180°, respectively. Finally, uniaxial tensile tests were applied to the three groups of samples at loading rates of 1, 5 and 10 mm/min, respectively, to obtain the elastic modulus and ultimate stress of the aorta in different directions and at different loading rates. Results: The stress-strain curves exhibited different viscoelastic behaviors. With increasing sample orientation, the elastic modulus gradually increased from 30°, reached its maximum value at 90°, and then gradually decreased towards 180°. The variation trend of the ultimate stress was similar to that of the elastic modulus. Moreover, the loading rate showed a significant influence on the elastic modulus and ultimate stress, but a weak influence on the degree of anisotropy. Conclusions: The porcine thoracic aorta is highly anisotropic. This finding provides parameter references for the assignment of material properties in finite-element modeling, and is significant for understanding the biomechanical properties of arteries.
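The two quantities extracted from each tensile test above, elastic modulus and ultimate stress, can be sketched from a stress-strain record. The synthetic curve and the 5% strain limit for the linear-fit region below are hypothetical illustrations, not the study's data or protocol:

```python
import numpy as np

# Hypothetical uniaxial tensile record (strain dimensionless, stress in Pa):
# a toy stiffening response, standing in for measured data
strain = np.linspace(0, 0.5, 200)
stress = 1.2e6 * strain + 4.0e6 * strain**2

def elastic_modulus(strain, stress, limit=0.05):
    """Slope of the initial, approximately linear region of the curve."""
    mask = strain <= limit
    slope, _ = np.polyfit(strain[mask], stress[mask], 1)
    return slope

E = elastic_modulus(strain, stress)   # Pa, tangent modulus near the origin
sigma_ult = stress.max()              # Pa, ultimate stress
```

Repeating this for each sample orientation and loading rate would give the direction-dependent modulus and ultimate-stress curves the abstract reports.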
Abstract:
Conformational preferences of thiocarbonohydrazide (H2NNHCSNHNH2) in its basic and N,N′-diprotonated forms are examined by calculating the barrier to internal rotation around the C-N bonds, using the theoretical LCAO-MO (ab initio and semiempirical CNDO and EHT) methods. The calculated and experimental results are compared with each other and also with values for N,N′-dimethylthiourea, which is isoelectronic with thiocarbonohydrazide. The suitability of these methods for studying rotational isomerism seems suspect when lone pair interactions are present.
Abstract:
The widespread and increasing resistance of internal parasites to anthelmintic control is a serious problem for the Australian sheep and wool industry. As part of control programmes, laboratories use the Faecal Egg Count Reduction Test (FECRT) to determine resistance to anthelmintics. It is important to have confidence in the measure of resistance, not only for the producer planning a drenching programme but also for companies investigating the efficacy of their products. The determination of resistance and corresponding confidence limits as given in anthelmintic efficacy guidelines of the Standing Committee on Agriculture (SCA) is based on a number of assumptions. This study evaluated the appropriateness of these assumptions for typical data and compared the effectiveness of the standard FECRT procedure with the effectiveness of alternative procedures. Several sets of historical experimental data from sheep and goats were analysed to determine that a negative binomial distribution was a more appropriate distribution to describe pre-treatment helminth egg counts in faeces than a normal distribution. Simulated egg counts for control animals were generated stochastically from negative binomial distributions and those for treated animals from negative binomial and binomial distributions. Three methods for determining resistance when percent reduction is based on arithmetic means were applied. The first was that advocated in the SCA guidelines, the second similar to the first but basing the variance estimates on negative binomial distributions, and the third using Wadley’s method with the distribution of the response variate assumed negative binomial and a logit link transformation. These were also compared with a fourth method recommended by the International Co-operation on Harmonisation of Technical Requirements for Registration of Veterinary Medicinal Products (VICH) programme, in which percent reduction is based on the geometric means. 
A wide selection of parameters was investigated and, for each set, 1000 simulations were run. Percent reduction and confidence limits were then calculated for each method, together with the number of times in each set of 1000 simulations that the theoretical percent reduction fell within the estimated confidence limits and the number of times resistance would have been declared. These simulations provide the basis for setting conditions under which the methods can be recommended. The authors show that, given the distribution of helminth egg counts found in Queensland flocks, the method based on arithmetic rather than geometric means should be used, and suggest that resistance be redefined as occurring when the upper level of percent reduction is less than 95%. At least ten animals per group are required in most circumstances, though even 20 may be insufficient where the effectiveness of the product is close to the cut-off point for defining resistance.
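One iteration of the simulation scheme described above can be sketched as follows: draw negative-binomial egg counts for control and treated groups, then compute the percent reduction from arithmetic means as the authors recommend. The group size, mean counts and dispersion parameter below are hypothetical values chosen for illustration, not those of the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def nb_counts(mean, k, size, rng):
    """Negative-binomial egg counts with the given mean and dispersion k
    (small k = strong overdispersion, as seen in faecal egg counts)."""
    p = k / (k + mean)
    return rng.negative_binomial(k, p, size)

n = 10                                                  # animals per group
control = nb_counts(mean=500.0, k=1.5, size=n, rng=rng)
treated = nb_counts(mean=25.0, k=1.5, size=n, rng=rng)  # ~95% true efficacy

# Percent reduction based on arithmetic means
reduction = 100.0 * (1.0 - treated.mean() / control.mean())
```

Repeating this draw 1000 times per parameter set and attaching confidence limits to each estimate reproduces the structure of the comparison the abstract describes.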
Abstract:
Forested areas play a dominant role in the global hydrological cycle. Evapotranspiration is a dominant component of the water balance, often approaching the rainfall total. Though sophisticated methods are available for its estimation, a simple, reliable tool is needed so that good budgeting can be made. Studies have established that evapotranspiration in forested areas is much higher than in agricultural areas. Latitude, forest type, climate and geological characteristics also add to the complexity of its estimation. Few studies have compared different methods of estimating evapotranspiration on forested watersheds in semi-arid tropical forests. In this paper, a comparative study of different methods of estimating evapotranspiration is made with reference to actual measurements from an all-parameter climatological station on a small deciduous forested watershed at Mulehole (area 4.5 km²), South India. Potential evapotranspiration (ETo) was calculated using ten physically based and empirical methods. Actual evapotranspiration (AET) was calculated through a water balance computed with the SWAT model. The Penman-Monteith method was used as a benchmark against which the estimates from the other methods were compared. The calculated AET shows good agreement with the worldwide evapotranspiration curve for forests. Error estimates have been made with respect to the Penman-Monteith method. This study gives an idea of the errors involved when methods with limited data requirements are used, and also shows the use of indirect methods in the estimation of evapotranspiration, which are more suitable for regional-scale studies.
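As an example of the "limited data" class of empirical ETo methods the abstract contrasts with Penman-Monteith, the Hargreaves (1985) equation needs only temperature and extraterrestrial radiation. Whether Hargreaves was among the ten methods tested is an assumption here; the input values are hypothetical:

```python
import math

def hargreaves_et0(tmax, tmin, ra):
    """Hargreaves (1985) reference evapotranspiration, mm/day.
    tmax, tmin: daily max/min air temperature, deg C.
    ra: extraterrestrial radiation expressed as equivalent evaporation, mm/day."""
    tmean = (tmax + tmin) / 2.0
    return 0.0023 * (tmean + 17.8) * math.sqrt(tmax - tmin) * ra

# Hypothetical warm tropical day
et0 = hargreaves_et0(tmax=34.0, tmin=22.0, ra=15.0)   # roughly 5.5 mm/day
```

The trade-off the abstract quantifies is exactly this: such equations are usable where only sparse climate records exist, at the cost of systematic error relative to the full Penman-Monteith calculation.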
Abstract:
Between-subject and within-subject variability are ubiquitous in biology and physiology, and understanding and dealing with them is one of the biggest challenges in medicine. At the same time, it is difficult to investigate this variability by experiments alone. A recent modelling and simulation approach, known as population of models (POM), allows this exploration to take place by building a mathematical model with multiple parameter sets calibrated against experimental data. However, finding such sets within the high-dimensional parameter space of complex electrophysiological models is computationally challenging. By placing the POM approach within a statistical framework, we develop a novel and efficient algorithm based on sequential Monte Carlo (SMC). We compare the SMC approach with Latin hypercube sampling (LHS), a method commonly adopted in the literature for obtaining the POM, in terms of efficiency and output variability in the presence of a drug block, through an in-depth investigation of the Beeler-Reuter cardiac electrophysiological model. We show improved efficiency via SMC and that it produces similar responses to LHS when making out-of-sample predictions in the presence of a simulated drug block.
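The LHS baseline described above amounts to: stratify the parameter space, evaluate the model at each sampled point, and keep the parameter sets whose outputs fall inside the experimental ranges. The sketch below uses a trivial linear stand-in for the Beeler-Reuter model and a made-up acceptance range; it illustrates only the sampling-and-calibration pattern, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

def latin_hypercube(n, d, rng):
    """n points in [0,1]^d with exactly one point per stratum in each dimension."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(u[:, j])        # decouple the strata across dimensions
    return u

# Toy stand-in for a cardiac model: maps parameters to one output biomarker
def biomarker(theta):
    return theta @ np.array([1.0, 2.0])

n, d = 2000, 2
theta = latin_hypercube(n, d, rng) * 4.0    # scale samples to [0, 4]^2

# Calibration: parameter sets whose biomarker lies in a hypothetical
# experimental range form the population of models (POM)
out = biomarker(theta)
pom = theta[(out >= 4.0) & (out <= 8.0)]
```

The inefficiency the SMC approach targets is visible here: every rejected LHS point cost a full model evaluation, which is expensive for real electrophysiological models.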
Abstract:
Objective To improve the isolation rate and identification procedures for Haemophilus parasuis from pig tissues. Design Thirteen sampling sites and up to three methods were used to confirm the presence of H. parasuis in pigs after experimental challenge. Procedure Colostrum-deprived, naturally farrowed pigs were challenged intratracheally with H. parasuis serovar 12 or 4. Samples taken during necropsy were either inoculated onto culture plates, processed directly for PCR, or enriched prior to being processed for PCR. The recovery of H. parasuis from the different sampling sites and methods was compared for each serovar. Results H. parasuis was recovered from several sample sites for all serovar 12 challenged pigs, while the trachea was the only positive site for all pigs following serovar 4 challenge. Solid-medium culture of swabs, with confirmation of the identity of cultured bacteria by PCR, gave 38% and 14% more positive results on a site basis for serovars 12 and 4, respectively, than direct PCR on the swabs. This difference was significant for the serovar 12 challenge. Conclusion Conventional culture proved more effective in detecting H. parasuis than direct PCR or PCR on enrichment broths. For subacute (serovar 4) infections, the most successful sites for culture or direct PCR were pleural fluid, peritoneal fibrin and fluid, lung and pericardial fluid. For acute (serovar 12) infections, the best sites were lung, heart blood, affected joints and brain. The methodologies and key sampling sites identified in this study will enable improved isolation of H. parasuis and aid the diagnosis of Glässer's disease.
Abstract:
Non-stationary signal modeling is a well-addressed problem in the literature. Many methods have been proposed to model non-stationary signals, such as time-varying linear prediction and AM-FM modeling, the latter being more popular. Techniques for estimating the AM-FM components of a narrow-band signal, such as the Hilbert transform, DESA1, DESA2, the auditory processing approach and the zero-crossing (ZC) approach, are prevalent, but their robustness to noise is not clearly addressed in the literature. This is critical for most practical applications, such as communications. We explore the robustness of different AM-FM estimators in the presence of white Gaussian noise. We also propose three new methods for instantaneous frequency (IF) estimation based on non-uniform samples of the signal and multi-resolution analysis. Experimental results show that ZC-based methods give better results than popular methods such as DESA in both clean and noisy conditions.
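The intuition behind the ZC family of estimators is that a narrow-band signal's frequency is encoded in how often it crosses zero: each full period of a sinusoid contributes two sign changes. The sketch below shows this for a clean test tone; it is a minimal illustration of the principle, not any of the paper's proposed non-uniform-sampling or multi-resolution methods:

```python
import numpy as np

fs = 8000.0                        # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)      # 100 ms of signal
f0 = 440.0
x = np.sin(2 * np.pi * f0 * t)     # clean narrow-band test tone

def zc_frequency(x, fs):
    """Average frequency from the zero-crossing rate:
    frequency ~ (sign changes) / (2 * duration)."""
    crossings = np.sum(np.abs(np.diff(np.signbit(x).astype(int))))
    duration = len(x) / fs
    return crossings / (2.0 * duration)

f_est = zc_frequency(x, fs)        # close to 440 Hz
```

For a tone whose frequency varies slowly, applying the same counting over short sliding windows yields a coarse IF track, which is the starting point the more refined ZC estimators improve on.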
Abstract:
Objective To compare two neck strength training modalities. Background Neck injury in pilots flying high-performance aircraft is a concern in aviation medicine. Strength training may be an effective means to strengthen the neck and decrease injury risk. Methods The cohort consisted of 32 age-, height- and weight-matched participants, divided into two experimental groups, the Multi-Cervical Unit (MCU) and Thera-Band tubing (THER) groups, and a control (CTRL) group. Ten weeks of training were undertaken, and pre- and post-training isometric strength testing for all groups was performed on the MCU. Comparisons between the three groups were made using a Kruskal-Wallis test, and effect sizes between the MCU and THER groups and between the THER and CTRL groups were also calculated. Results The MCU group displayed the greatest increase in isometric strength (flexion 64.4%, extension 62.9%, left lateral flexion 53.3%, right lateral flexion 49.1%), and differences were statistically significant (p<0.05) only when compared with the CTRL group. Increases in neck strength for the THER group were lower than those in the MCU group (flexion 42.0%, extension 29.9%, left lateral flexion 26.7%, right lateral flexion 24.1%). Moderate to large effect sizes were found between the MCU and THER groups as well as between the THER and CTRL groups. Conclusions This study demonstrated that the MCU was the most effective training modality for increasing isometric cervical muscle strength. Thera-Band tubing did, however, produce moderate gains in isometric neck strength.
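One common between-group effect size of the kind reported above is Cohen's d on the pooled standard deviation. Whether this is the statistic the study used is not stated in the abstract; the group means below reuse the reported flexion gains, while the standard deviations and group sizes are hypothetical:

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardised mean difference between two groups (pooled SD)."""
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

# Flexion gains (%) from the abstract; SDs and n are made up for illustration
d = cohens_d(64.4, 20.0, 12, 42.0, 18.0, 12)   # "large" by common convention
```

Values around 0.5 are conventionally read as moderate and around 0.8+ as large, matching the "moderate to large" language of the abstract.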
Abstract:
Genetic mark–recapture requires efficient methods of uniquely identifying individuals. 'Shadows' (individuals with the same genotype at the selected loci) become more likely with increasing sample size, and bias harvest rate estimates. Finding loci is costly, but better loci reduce analysis costs and improve power. Optimal microsatellite panels minimize shadows, but panel design is a complex optimization process. locuseater and shadowboxer permit power and cost analysis of this process and automate some aspects, by simulating the entire experiment from panel design to harvest rate estimation.
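The shadow problem above is usually quantified via the probability of identity: the chance that two random individuals share a multilocus genotype, multiplied across independent loci. The sketch below computes this under Hardy-Weinberg assumptions for a hypothetical panel; it illustrates the quantity the panel-design software minimises, not the programs' own algorithms:

```python
from itertools import combinations

def pid_locus(freqs):
    """Probability two random individuals share a genotype at one locus,
    given allele frequencies (HWE assumed): both homozygous for the same
    allele, or both heterozygous for the same pair."""
    homo = sum(p**4 for p in freqs)
    het = sum((2 * p * q) ** 2 for p, q in combinations(freqs, 2))
    return homo + het

def pid_panel(loci):
    """Multiply across independent loci."""
    pid = 1.0
    for freqs in loci:
        pid *= pid_locus(freqs)
    return pid

# Hypothetical 6-locus panel, four equifrequent alleles per locus
panel = [[0.25] * 4] * 6
pid = pid_panel(panel)

# Expected shadow pairs grows with the number of sampled pairs,
# which is why shadows become more likely at larger sample sizes
expected_shadow_pairs = pid * 1000 * 999 / 2
```

Adding loci (or choosing more polymorphic ones) drives the panel PID down geometrically, which is the trade-off against genotyping cost that the abstract describes.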
Abstract:
Promotion of better procedures for releasing undersize fish, advocacy of catch-and-release angling, and changing minimum legal sizes are increasingly being used as tools for sustainable management of fish stocks. However, without knowing the proportion of released fish that survive, the conservation value of any of these measures is uncertain. We developed a floating vertical enclosure to estimate short-term survival of released line-caught tropical and subtropical reef-associated species, and used it to compare the effectiveness of two barotrauma-relief procedures (venting and shotline releasing) on red emperor (Lutjanus sebae). Barotrauma signs varied with capture depth, but not with the size of the fish. Fish from the greatest depths (40-52 m) exhibited extreme signs less frequently than those from intermediate depths (30-40 m), possibly as a result of swim-bladder gas being vented externally through a rupture in the body wall. All but two fish survived the experiment, and as neither release technique significantly improved short-term survival of the red emperor over non-treatment, we see little benefit in promoting either venting or shotline releasing for this comparatively resilient species. Floating vertical enclosures can improve short-term post-release mortality estimates because they overcome many problems encountered when constraining fish in submerged cages.
Abstract:
The heat capacity of a substance is related to the structure and constitution of the material and its measurement is a standard technique of physical investigation. In this review, the classical methods are first analyzed briefly and their recent extensions are summarized. The merits and demerits of these methods are pointed out. The newer techniques such as the a.c. method, the relaxation method, the pulse methods, the laser flash calorimetry and other methods developed to extend the heat capacity measurements to newer classes of materials and to extreme conditions of sample geometry, pressure and temperature are comprehensively reviewed. Examples of recent work and details of the experimental systems are provided for each method. The introduction of automation in control systems for the monitoring of the experiments and for data processing is also discussed. Two hundred and eight references and 18 figures are used to illustrate the various techniques.
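Of the techniques surveyed above, the relaxation method has a particularly compact data analysis: after a small heat pulse, the sample temperature decays back to the bath as exp(-t/τ) with τ = C/K, where K is the thermal conductance of the weak link to the bath, so fitting the decay recovers C. The conductance, heat capacity and decay record below are hypothetical values for an idealised, noise-free measurement:

```python
import numpy as np

# Hypothetical relaxation measurement: weak-link conductance K is known,
# heat capacity C is the unknown to recover from the decay time tau = C/K
K = 2.0e-7            # W/K, thermal conductance of the link to the bath
C_true = 5.0e-7       # J/K, "true" heat capacity (for checking the fit)
tau = C_true / K      # 2.5 s

t = np.linspace(0, 10, 500)          # s
dT = 0.01 * np.exp(-t / tau)         # K, ideal temperature decay

# Fit log(dT) to a straight line; the slope is -1/tau
slope, _ = np.polyfit(t, np.log(dT), 1)
C_est = K * (-1.0 / slope)           # recovered heat capacity, J/K
```

In practice the fit is done on noisy data over a window where the decay is cleanly exponential, and K itself is calibrated in a separate measurement.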