949 results for Multiple testing
Abstract:
This study reported on the issues surrounding the acquisition of problem-solving competence of middle-year students who had been ascertained as above average in intelligence, but underachieving in problem-solving competence. In particular, it looked at the possible links between problem-posing skills development and improvements in problem-solving competence. A cohort of Year 7 students at a private, non-denominational, co-educational school was chosen as participants for the study; the students undertook a series of problem-posing sessions each week throughout a school term. The lessons were facilitated by the researcher in the students’ school setting. Two criteria were chosen to identify participants for this study. Firstly, each participant scored above the 60th percentile in the standardized Middle Years Ability Test (MYAT) (Australian Council for Educational Research, 2005) and secondly, the participants all scored below the cohort average for Criterion B (Problem-solving Criterion) in their school mathematics tests during the first semester of Year 7. Two mutually exclusive groups of participants were investigated, one constituting the Comparison Group and the other constituting the Intervention Group. The Comparison Group was chosen from a Year 7 cohort for whom no problem-posing intervention had occurred, while the Intervention Group was chosen from the Year 7 cohort of the following year. This second group received the problem-posing intervention in the form of a teaching experiment. That is, the Comparison Group were only pre-tested and post-tested, while the Intervention Group was involved in the teaching experiment and received the pre-testing and post-testing at the same time of the year, but in the following year, when the Comparison Group had moved on to the secondary part of the school. The groups were chosen from consecutive Year 7 cohorts to avoid cross-contamination of the data.
A constructionist framework was adopted for this study that allowed the researcher to gain an “authentic understanding” of the changes that occurred in the development of problem-solving competence of the participants in the context of a classroom setting (Richardson, 1999). Qualitative and quantitative data were collected through a combination of methods including researcher observation and journal writing, videotaping, student workbooks, informal student interviews, student surveys, and pre-testing and post-testing. This combination of methods was required to increase the validity of the study’s findings through triangulation of the data. The study findings showed that participation in problem-posing activities can facilitate the re-engagement of disengaged, middle-year mathematics students. In addition, participation in these activities can result in improved problem-solving competence and associated developmental learning changes. Some of the changes that were evident as a result of this study included improvements in self-regulation, increased integration of prior knowledge with new knowledge, and increased and contextualised socialisation.
Abstract:
Adiabatic compression testing of components in gaseous oxygen is a test method that is utilized worldwide and is commonly required to qualify a component for ignition tolerance under its intended service. This testing is required by many industry standards organizations and government agencies. This paper traces the background of adiabatic compression testing in the oxygen community and discusses the thermodynamic and fluid dynamic processes that occur during rapid pressure surges. This paper is the first of several papers by the authors on the subject of adiabatic compression testing and is presented as a non-comprehensive background and introduction.
Abstract:
Adiabatic compression testing of components in gaseous oxygen is a test method that is utilized worldwide and is commonly required to qualify a component for ignition tolerance under its intended service. This testing is required by many industry standards organizations and government agencies; however, a thorough evaluation of the test parameters and test system influences on the thermal energy produced during the test has not yet been performed. This paper presents a background for adiabatic compression testing and discusses an approach to estimating potential differences in the thermal profiles produced by different test laboratories. A “Thermal Profile Test Fixture” (TPTF) is described that is capable of measuring and characterizing the thermal energy for a typical pressure shock by any test system. The test systems at Wendell Hull & Associates, Inc. (WHA) in the USA and at the BAM Federal Institute for Materials Research and Testing in Germany are compared in this manner and some of the data obtained are presented. The paper also introduces a new way of comparing the test method to idealized processes to perform system-by-system comparisons. Thus, the paper introduces an “Idealized Severity Index” (ISI) of the thermal energy to characterize a rapid pressure surge. From the TPTF data a “Test Severity Index” (TSI) can also be calculated so that the thermal energies developed by different test systems can be compared to each other and to the ISI for the equivalent isentropic process. Finally, a “Service Severity Index” (SSI) is introduced to characterize the thermal energy of actual service conditions. This paper is the second in a series of publications planned on the subject of adiabatic compression testing.
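As a rough illustration of the idealized isentropic benchmark against which the ISI is referenced, the standard ideal-gas relation below estimates the temperature reached by an idealized adiabatic (isentropic) compression. The formula and example values are textbook gas dynamics, not figures from the paper:

```python
# Idealized isentropic compression of an ideal gas:
#   T2 / T1 = (P2 / P1) ** ((gamma - 1) / gamma)
# Used here only to illustrate the kind of idealized process an
# "Idealized Severity Index" could be referenced against.

def isentropic_final_temperature(t1_k: float, p1: float, p2: float,
                                 gamma: float = 1.4) -> float:
    """Final temperature (K) after isentropic compression from p1 to p2."""
    return t1_k * (p2 / p1) ** ((gamma - 1.0) / gamma)

# Example: a diatomic gas (gamma ~ 1.4) shocked from 1 bar to 200 bar,
# starting at 293 K. The result is far above typical polymer autoignition
# temperatures, which is why rapid pressure surges are an ignition hazard.
t2 = isentropic_final_temperature(293.0, 1.0, 200.0)
```

Real test systems fall short of this idealized bound because of heat transfer and finite pressurization time, which is precisely the gap the TSI-versus-ISI comparison is meant to quantify.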
Abstract:
Promoted ignition testing [1–3] is used to determine the relative flammability of metal rods in oxygen-enriched atmospheres. In these tests, a promoter is used to ignite each metal rod to start the sample burning. Experiments were performed to better understand the promoted ignition test by obtaining insight into the effect a burning promoter has on the preheating of a test sample. Test samples of several metallic materials were prepared and coupled to fast-responding thermocouples along their length. Various ignition promoters were used to ignite the test samples. The thermocouple measurements and test video were synchronized to determine temperature increase with respect to time and length along each test sample. A recommended length of test sample that must be consumed to be considered a flammable material was determined based on the preheated zone measured from these tests. This length was determined to be 30 mm (1.18 in.). Validation of this length and its rationale are presented.
Abstract:
Rapid advancements in the field of genetic science have engendered considerable debate, speculation, misinformation and legislative action worldwide. While programs such as the Human Genome Project bring the prospect of seemingly miraculous medical advancements within imminent reach, they also create the potential for significant invasions of traditional areas of privacy and human dignity by laying the potential foundation for new forms of discrimination in insurance, employment and immigration regulation. The insurance industry, which has, of course, traditionally been premised on discrimination as part of its underwriting process, is proving to be the frontline of this regulatory battle, with extensive legislation, guidelines and debate marking its progress.
Abstract:
Carlin and Finch, this issue, compare goodwill impairment discount rates used by a sample of large Australian firms with ‘independently’ generated discount rates. Their objective is to empirically determine whether managers opportunistically select goodwill discount rates subsequent to the 2005 introduction of International Financial Reporting Standards (IFRS) in Australia. This is a worthwhile objective given that IFRS introduced an impairment regime, and within this regime, discount rate selection plays a key role in goodwill valuation decisions. It is also timely to consider the goodwill valuation issue. Following the recent downturn in the economy, there is a high probability that many firms will be forced to write down impaired goodwill arising from boom period acquisitions. Hence, evidence of bias in rate selection is likely to be of major concern to investors, policymakers and corporate regulators. Carlin and Finch claim their findings provide evidence of such bias. In this commentary I review the validity of their claims.
Abstract:
Assessing the structural health state of urban infrastructure is crucial in terms of infrastructure sustainability. This chapter uses dynamic computer simulation techniques to apply a procedure using vibration-based methods for damage assessment in multiple-girder composite bridges. In addition to changes in natural frequencies, this multi-criteria procedure incorporates two methods, namely, the modal flexibility method and the modal strain energy method. Using the numerically simulated modal data obtained through finite element analysis software, algorithms based on modal flexibility and modal strain energy change, before and after damage, are obtained and used as the indices for the assessment of structural health state. The feasibility and capability of the approach are demonstrated through numerical studies of a proposed structure with six damage scenarios. It is concluded that the modal strain energy method can be applied to multiple-girder composite bridges, as evidenced through the example treated in this chapter.
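As a sketch of the modal flexibility index mentioned above, using the standard formulation from the vibration-based damage-detection literature (the function names and the toy change index are illustrative, not the chapter's actual algorithm):

```python
import numpy as np

def modal_flexibility(freqs_hz, modes):
    """Modal flexibility matrix F = sum_i phi_i phi_i^T / omega_i^2,
    assembled from mass-normalized mode shapes (columns of `modes`)
    and the corresponding natural frequencies in Hz."""
    omega = 2.0 * np.pi * np.asarray(freqs_hz)      # angular frequencies
    return (modes / omega**2) @ modes.T             # (n_dof, n_dof)

def flexibility_change_index(freqs_u, modes_u, freqs_d, modes_d):
    """Per-DOF damage index: maximum absolute column change in the
    flexibility matrix between the undamaged (u) and damaged (d) states."""
    dF = (modal_flexibility(freqs_d, modes_d)
          - modal_flexibility(freqs_u, modes_u))
    return np.max(np.abs(dF), axis=0)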
Abstract:
BACKGROUND: Previous epidemiological investigations of associations between dietary glycemic intake and insulin resistance have used average daily measures of glycemic index (GI) and glycemic load (GL). We explored multiple and novel measures of dietary glycemic intake to determine which was most predictive of an association with insulin resistance. METHODS: Usual dietary intakes were assessed by diet history interview in women aged 42-81 years participating in the Longitudinal Assessment of Ageing in Women. Daily measures of dietary glycemic intake (n = 329) were carbohydrate, GI, GL, and GL per megacalorie (GL/Mcal), while meal-based measures (n = 200) were breakfast, lunch and dinner GL; and a new measure, GL peak score, to represent meal peaks. Insulin resistant status was defined as a homeostasis model assessment (HOMA) value of >3.99; HOMA as a continuous variable was also investigated. RESULTS: GL, GL/Mcal, carbohydrate (all P < 0.01), GL peak score (P = 0.04) and lunch GL (P = 0.04) were positively and independently associated with insulin resistant status. Daily measures were more predictive than meal-based measures, with minimal difference between GL/Mcal, GL and carbohydrate. No significant associations were observed with HOMA as a continuous variable. CONCLUSION: A dietary pattern with high peaks of GL above the individual's average intake was a significant independent predictor of insulin resistance in this population; however, the contribution was less than that of the daily GL and carbohydrate variables. Accounting for energy intake slightly increased the predictive ability of GL, which is potentially important when examining disease risk in more diverse populations with wider variations in energy requirements.
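The daily measures named above follow standard definitions: a food's glycemic load is GI × available carbohydrate (g) / 100, and GL/Mcal divides the daily total by energy intake in megacalories. A minimal sketch with hypothetical food values (the paper's new meal-based GL peak score is its own construction and is not attempted here):

```python
def glycemic_load(gi: float, carb_g: float) -> float:
    """Glycemic load of one food item: GI x available carbohydrate (g) / 100."""
    return gi * carb_g / 100.0

def daily_gl_per_mcal(items, energy_kcal: float) -> float:
    """Daily GL normalized by energy intake in megacalories (GL/Mcal)."""
    total_gl = sum(glycemic_load(gi, carb) for gi, carb in items)
    return total_gl / (energy_kcal / 1000.0)

# Hypothetical day: (GI, carbohydrate in g) per food, 2000 kcal total energy.
day = [(70, 50), (55, 60), (40, 30)]
gl_per_mcal = daily_gl_per_mcal(day, 2000.0)
```

Normalizing by energy, as in `daily_gl_per_mcal`, is what lets the measure compare individuals with very different total intakes, which the conclusion flags as relevant for more diverse populations.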
Abstract:
This study assesses the recently proposed data-driven background dataset refinement technique for speaker verification using alternate SVM feature sets to the GMM supervector features for which it was originally designed. The performance improvements brought about in each trialled SVM configuration demonstrate the versatility of background dataset refinement. This work also extends the originally proposed technique to exploit support vector coefficients as an impostor suitability metric in the data-driven selection process. Using support vector coefficients improved the performance of the refined datasets in the evaluation of unseen data. Further, attempts are made to exploit the differences in impostor example suitability measures from varying feature spaces to provide added robustness.
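A hedged sketch of the idea of using support vector coefficients as an impostor-suitability metric: train a target-versus-impostor SVM and score each candidate impostor by the magnitude of its dual coefficient, so candidates that shape the decision boundary score highest. The setup and names below are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np
from sklearn.svm import SVC

def impostor_suitability(target_vecs, candidate_impostors, C=1.0):
    """Score each candidate impostor by the magnitude of its dual
    coefficient in a target-vs-impostor linear SVM; candidates that are
    not support vectors score 0."""
    X = np.vstack([target_vecs, candidate_impostors])
    y = np.r_[np.ones(len(target_vecs)), -np.ones(len(candidate_impostors))]
    svm = SVC(kernel="linear", C=C).fit(X, y)
    scores = np.zeros(len(candidate_impostors))
    for idx, coef in zip(svm.support_, np.abs(svm.dual_coef_[0])):
        if idx >= len(target_vecs):          # index falls in the impostor half
            scores[idx - len(target_vecs)] = coef
    return scores
```

In a data-driven refinement loop, such scores would be accumulated over many target models and the background dataset pruned to the candidates that are consistently influential.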
Abstract:
For some time there has been a growing awareness of organizational culture and its impact on the functioning of engineering and maintenance departments. Those wishing to implement contemporary maintenance regimes (e.g. condition-based maintenance) are often encouraged to develop “appropriate cultures” to support a new method’s introduction. Unfortunately these same publications often fail to specifically articulate the cultural values required to support those efforts. In the broader literature, only a limited number of case examples document the cultural values held by engineering asset intensive firms and how they contribute to their success (or failure). Consequently a gap exists in our knowledge of what engineering cultures currently might look like, or what might constitute a best practice engineering asset culture. The findings of a pilot study investigating the perceived ideal characteristics of engineering asset cultures are reported. Engineering managers, consultants and academics (n=47) were surveyed as to what they saw as essential attributes of both engineering cultures and engineering asset personnel. Valued cultural elements included those orientated around continuous improvement, safety and quality. Valued individual attributes included openness to change, interpersonal skills and conscientiousness. The paper concludes with a discussion regarding the development of a best practice cultural framework for practitioners and engineering managers.
Abstract:
When asymptotic series methods are applied in order to solve problems that arise in applied mathematics in the limit that some parameter becomes small, they are unable to demonstrate behaviour that occurs on a scale that is exponentially small compared to the algebraic terms of the asymptotic series. There are many examples of physical systems where behaviour on this scale has important effects and, as such, a range of techniques known as exponential asymptotic techniques were developed that may be used to examine behaviour on this exponentially small scale. Many problems in applied mathematics may be represented by behaviour within the complex plane, which may subsequently be examined using asymptotic methods. These problems frequently demonstrate behaviour known as Stokes phenomenon, which involves the rapid switches of behaviour on an exponentially small scale in the neighbourhood of some curve known as a Stokes line. Exponential asymptotic techniques have been applied in order to obtain an expression for this exponentially small switching behaviour in the solutions to ordinary and partial differential equations. The problem of potential flow over a submerged obstacle has been previously considered in this manner by Chapman & Vanden-Broeck (2006). By representing the problem in the complex plane and applying an exponential asymptotic technique, they were able to detect the switching, and subsequent behaviour, of exponentially small waves on the free surface of the flow in the limit of small Froude number, specifically considering the case of flow over a step with one Stokes line present in the complex plane. We consider an extension of this work to flow configurations with multiple Stokes lines, such as flow over an inclined step, or flow over a bump or trench.
The resultant expressions are analysed, and demonstrate interesting implications, such as the presence of exponentially sub-subdominant intermediate waves and the possibility of trapped surface waves for flow over a bump or trench. We then consider the effect of multiple Stokes lines in higher order equations, particularly investigating the behaviour of higher-order Stokes lines in the solutions to partial differential equations. These higher-order Stokes lines switch off the ordinary Stokes lines themselves, adding a layer of complexity to the overall Stokes structure of the solution. Specifically, we consider the different approaches taken by Howls et al. (2004) and Chapman & Mortimer (2005) in applying exponential asymptotic techniques to determine the higher-order Stokes phenomenon behaviour in the solution to a particular partial differential equation.
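Schematically, in standard exponential-asymptotics notation (not the thesis's own equations), the Stokes switching of an exponentially small term can be written as:

```latex
% An asymptotic series in a small parameter \epsilon carries an exponentially
% small remainder whose prefactor, the Stokes multiplier S(z), jumps rapidly
% across a Stokes line (where Im(chi) = 0 and Re(chi) > 0), with the jump
% smoothed by an error function (Berry's smoothing):
u(z;\epsilon) \sim \sum_{n=0}^{N-1} \epsilon^{n} a_{n}(z)
  + \mathcal{S}(z)\, F(z)\, \mathrm{e}^{-\chi(z)/\epsilon},
\qquad
\mathcal{S}(z) \approx \frac{1}{2}
  + \frac{1}{2}\,\operatorname{erf}\!\left(
      \frac{\operatorname{Im}\chi(z)}{\sqrt{2\,\epsilon\,\operatorname{Re}\chi(z)}}
    \right).
```

In this language, a higher-order Stokes line is a curve across which the multiplier of one such exponential is itself switched on or off, which is the added layer of structure the text refers to.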
Abstract:
Managing livestock movement in extensive systems has environmental and production benefits. Currently permanent wire fencing is used to control cattle; this is both expensive and inflexible. Cattle are known to respond to auditory and visual cues and we investigated whether these can be used to manipulate their behaviour. Twenty-five Belmont Red steers with a mean live weight of 270 kg were each randomly assigned to one of five treatments. Treatments consisted of a combination of cues (audio, tactile and visual stimuli) and consequence (electrical stimulation). The treatments were electrical stimulation alone, audio plus electrical stimulation, vibration plus electrical stimulation, light plus electrical stimulation and electric fence (6 kV) plus electrical stimulation. Cue stimuli were administered for 3 s followed immediately by electrical stimulation (consequence) of 1 kV for 1 s. The experiment tested the operational efficacy of an on-animal control or virtual fencing system. A collar-halter device was designed to carry the electronics, batteries and equipment providing the stimuli (audio, vibration, light and electrical) of a prototype virtual fencing device. Cattle were allowed to travel along a 40 m alley to a group of peers and feed while their rate of travel and response to the stimuli were recorded. The prototype virtual fencing system was successful in modifying the behaviour of the cattle. The rate of travel of cattle along the alley demonstrated the large variability in behavioural response associated with tactile, visual and audible cues. The experiment demonstrated virtual fencing has potential for controlling cattle in extensive grazing systems. However, larger numbers of cattle need to be tested to derive a better understanding of the behavioural variance. Further controlled experimental work is also necessary to quantify the interaction between cues, consequences and cattle learning.
Abstract:
The paper proposes a solution for testing of a physical distributed generation system (DG) along with a computer-simulated network. The computer-simulated network is referred to as the virtual grid in this paper. Integration of a DG with the virtual grid enables broad testing of the power supplying capability and dynamic performance of a DG. It is shown that a DG can supply a part of the load power while keeping the Point of Common Coupling (PCC) voltage magnitude constant. To represent the actual load, a universal load with power regenerative capability is designed with the help of a voltage source converter (VSC) that mimics the load characteristic. The overall performance of the proposed scheme is verified using computer simulation studies.
Abstract:
In cloud computing, resource allocation and scheduling of multiple composite web services is an important challenge. This is especially so in a hybrid cloud, where there may be some free resources available from private clouds but some fee-paying resources from public clouds. Meeting this challenge involves two classical computational problems. One is assigning resources to each of the tasks in the composite web service. The other is scheduling the allocated resources when each resource may be used by more than one task and may be needed at different points of time. In addition, we must consider Quality-of-Service issues, such as execution time and running costs. Existing approaches to resource allocation and scheduling in public clouds and grid computing are not applicable to this new problem. This paper presents a random-key genetic algorithm that solves this new resource allocation and scheduling problem. Experimental results demonstrate the effectiveness and scalability of the algorithm.
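The random-key encoding can be sketched as follows: each task receives a key in [0, 1), and sorting the keys yields a dispatch order that a decoder turns into a feasible schedule, so standard genetic crossover on the keys always produces valid offspring. The minimal single-resource decoder below is illustrative only; the paper's algorithm additionally handles resource assignment across private and public clouds and QoS constraints:

```python
def decode_random_keys(keys, durations):
    """Decode a random-key chromosome into a single-resource schedule:
    tasks are dispatched in ascending key order (list scheduling).
    Returns {task_index: (start_time, finish_time)}."""
    order = sorted(range(len(keys)), key=lambda i: keys[i])
    start, schedule = 0.0, {}
    for task in order:
        schedule[task] = (start, start + durations[task])
        start += durations[task]
    return schedule

# Hypothetical chromosome for three tasks with durations 2, 1 and 3 time
# units: task 1 has the smallest key, so it is dispatched first.
schedule = decode_random_keys([0.8, 0.1, 0.5], [2.0, 1.0, 3.0])
```

Because any vector of keys decodes to a valid schedule, the genetic algorithm can search purely in the continuous key space, which is the main appeal of random-key representations for scheduling problems.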