979 results for Fast test
Abstract:
Promoted ignition testing [1–3] is used to determine the relative flammability of metal rods in oxygen-enriched atmospheres. In these tests, an ignition promoter is used to ignite each metal rod and start the sample burning. Experiments were performed to better understand the promoted ignition test by obtaining insight into the effect a burning promoter has on the preheating of a test sample. Test samples of several metallic materials were prepared and instrumented with fast-responding thermocouples along their length. Various ignition promoters were used to ignite the test samples. The thermocouple measurements and test video were synchronized to determine the temperature increase with respect to time and position along each test sample. Based on the preheated zone measured in these tests, a recommended length of test sample that must be consumed for a material to be considered flammable was determined: 30 mm (1.18 in.). Validation of this length and its rationale are presented.
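As a rough illustration of the data reduction this implies, here is a minimal sketch that estimates the preheated-zone length from synchronized, uniformly sampled thermocouple traces; the function, the data layout, and the 50 K rise threshold are illustrative assumptions, not the paper's actual procedure.

```python
# Sketch: estimate how far ahead of the burn front the rod is already hot.
# All names, the data layout, and the threshold are assumptions.
import numpy as np

def preheat_zone_length(positions_mm, temps_K, times_s, t_front_s,
                        rise_threshold_K=50.0):
    """positions_mm: thermocouple locations ahead of the ignition point.
    temps_K: one temperature trace per thermocouple, shape (n_tc, n_samples).
    t_front_s: video-synchronized time at which the burn front passes the
    reference position. Returns the farthest position already heated by
    rise_threshold_K above its initial value at that instant."""
    idx = min(np.searchsorted(times_s, t_front_s), len(times_s) - 1)
    rise = temps_K[:, idx] - temps_K[:, 0]        # rise above initial value
    heated = positions_mm[rise >= rise_threshold_K]
    return heated.max() if heated.size else 0.0
```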
Abstract:
The following exegesis will detail the key advantages and disadvantages of combining a traditional talk show genre with a linear documentary format using a small production team and a limited budget in a fast-turnaround weekly environment. It will deal with the Australian Broadcasting Corporation series Talking Heads, broadcast weekly for the network in the early evening schedule at 18.30 with the presenter Peter Thompson. As Executive Producer for the programme at its inception I was responsible for setting it up for the ABC in Brisbane, a role that included selecting most of the team to work on the series and commissioning the music, titles and all other aspects required to bring the show to the screen. What emerged when producing this generic hybrid will be examined at length, including:

• The talk show/documentary hybrid format needs longer than 26′30″ to be entirely successful.
• The type of presenter ideally suited to the talk show/documentary format is someone who is genuinely interested in their guests and flexible enough to maintain the format against tangential odds.
• The use of illustrative footage shot in a documentary-style narrative improves the talk show format.
• The fast turnaround of the talk show/documentary hybrid puts tremendous pressure on the time frames for archive research and copyright clearance and therefore needs to be well resourced.
• In a fast-turnaround talk show/documentary format the field components are advantageous but require very low shooting ratios to be sustainable.
• An intimate set works best for a talk show hybrid like this.

Also submitted are two DVDs of recordings of programmes I produced and directed from the first and third series. These are for consideration in the practical component of this project and reflect the changes that I made to the series.
Abstract:
In a much anticipated judgment, the Federal Circuit has sought to clarify the standards applicable in determining whether a claimed method constitutes patent-eligible subject matter. In Bilski, the Federal Circuit identified a test to determine whether a patentee has made claims that pre-empt the use of a fundamental principle or an abstract idea, or whether those claims cover only a particular application of a fundamental principle or abstract idea. It held that the sole test for determining subject matter eligibility for a claimed process under § 101 is that: (1) it is tied to a particular machine or apparatus, or (2) it transforms a particular article into a different state or thing. The court termed this the “machine-or-transformation test.” In so doing, it overruled its earlier State Street decision to the extent that State Street’s “useful, concrete and tangible result” test was deemed inadequate to determine whether an alleged invention recites patent-eligible subject matter.
Abstract:
In the student learning literature, the traditional view holds that when students are faced with a heavy workload, poor teaching, and content they cannot relate to (all important aspects of the learning context), they are more likely to adopt a surface approach to learning because of stress, lack of understanding, and a lack of perceived relevance of the content (Kreber, 2003; Lizzio, Wilson, & Simons, 2002; Ramsden, 1989; Ramsden, 1992; Trigwell & Prosser, 1991; Vermunt, 2005). For example, in studies involving health and medical sciences students, courses that utilised student-centred, problem-based approaches to teaching and learning were found to elicit a deeper approach to learning than the teacher-centred, transmissive approach (Patel, Groen, & Norman, 1991; Sadlo & Richardson, 2003). It is generally accepted that the line of causation runs from the learning context (or rather students’ self-reported data on the learning context) to students’ learning approaches. That is, it is the learning context, as revealed by students’ self-reported data, that elicits the associated learning behaviour.

However, other studies have found that the same teaching and learning environment can be perceived differently by different students. In a study of students’ perceptions of assessment requirements, Sambell and McDowell (1998) found that students “are active in the reconstruction of the messages and meanings of assessment” (p. 391), and that their interpretations are greatly influenced by their past experiences and motivations. In a qualitative study of Hong Kong tertiary students, Kember (2004) found that students using a surface learning approach reported heavier workloads than students using a deep learning approach. According to Kember, if students learn by extracting meaning from the content and making connections, they are more likely to see the higher-order intentions embodied in the content and the high cognitive abilities being assessed. On the other hand, if they rote-learn for the graded task, they fail to see the hierarchical relationships in the content and to connect the information. These rote-learners tend to see the assessment as requiring memorisation and regurgitation of a large amount of unconnected knowledge, which explains why they experience a high workload. Kember (2004) thus postulates that it is the learning approach that influences how students perceive workload. Campbell and her colleagues made a similar observation in their interview study of secondary students’ perceptions of teaching in the same classroom (Campbell et al., 2001).

These discussions suggest that students’ learning approaches can influence their perceptions of assessment demands and of other aspects of the learning context, such as the relevance of content and teaching effectiveness. In other words, perceptions of elements in the teaching and learning context are endogenously determined. This study investigated the causal relationships, at the individual level, between learning approaches and perceptions of the learning context in economics education. Students’ learning approaches and their perceptions of the learning context were measured; the elements of the learning context investigated were teaching effectiveness, workload, and content. The authors are aware of other elements of the learning context, such as generic skills, goal clarity, and career preparation. These aspects, however, were not within the scope of the present study and were therefore not investigated.
Abstract:
This study addresses calls in the literature for the external validation of Western-based marketing concepts and theory in the East. Using DINESERV, the relationships between service quality, overall service quality perceptions, customer satisfaction, and repurchase intentions in the Malaysian fast food industry are examined. A questionnaire was administered to Malaysian fast food consumers at a large university, resulting in findings that support the five-dimensional nature of DINESERV and three of four proposed hypotheses. This study contributes to knowledge of service quality in developing countries and is the first to examine DINESERV in the Malaysian fast food industry.
Abstract:
Objectives: To explore whether people's organ donation consent decisions occur via a reasoned and/or a social reaction pathway.
Design: We examined prospectively students' and community members' decisions to register consent on a donor register and to discuss organ donation wishes with family.
Method: Participants completed items assessing the theory of planned behaviour (TPB; attitude, subjective norm, perceived behavioural control (PBC)), the prototype/willingness model (PWM; donor prototype favourability/similarity, past behaviour), and proposed additional influences (moral norm, self-identity, recipient prototypes) for registering (N=339) and discussing (N=315) intentions/willingness. Participants self-reported their registering (N=177) and discussing (N=166) behaviour 1 month later. The utility of (1) the TPB, (2) the PWM, (3) the TPB augmented with the PWM, and (4) the TPB augmented with the PWM and the proposed extensions was tested using structural equation modelling for registering and discussing intentions/willingness, and logistic regression for behaviour.
Results: While the TPB proved a more parsimonious model, fit indices suggested that the other proposed models offered viable options, explaining greater variance in communication intentions/willingness. The TPB, the TPB augmented with the PWM, and the extended augmented model best explained registering and discussing decisions. The proposed and revised PWM also proved an adequate fit for discussing decisions. Respondents with stronger intentions (and, for registering, stronger PBC) had a higher likelihood of registering and discussing.
Conclusions: People's decisions to communicate donation wishes may be better explained via a reasoned pathway (especially for registering); however, discussing involves more reactive elements. The roles of moral norm, self-identity, and prototypes as influences predicting communication decisions were also highlighted.
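As a rough illustration of the behaviour-stage analysis (logistic regression of self-reported registering on intention and PBC), here is a minimal sketch on synthetic data; the variable names, scales, and effect sizes are assumptions, not the study's measures or results.

```python
# Sketch: logistic regression of registering behaviour on intention and
# PBC. The data are synthetic and the coefficients are assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 177                                     # registering subsample size
intention = rng.normal(0.0, 1.0, n)         # standardised intention score
pbc = rng.normal(0.0, 1.0, n)               # standardised PBC score
logit = 1.2 * intention + 0.6 * pbc         # assumed effect sizes
registered = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([intention, pbc])
model = LogisticRegression().fit(X, registered)
print(model.coef_, model.intercept_)        # stronger intention -> higher odds
```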
Abstract:
The paper presents a fast and robust stereo object recognition method. The method is currently unable to identify the rotation of objects, making it particularly well suited to locating spheres, whose appearance is rotation-invariant; approximate methods for locating non-spherical objects have been developed. Fundamental to the method is that the correspondence problem is solved using information about the dimensions of the object being located. This is in contrast to previous stereo object recognition systems, in which the scene is first reconstructed by point-matching techniques. The method is suitable for real-time application on low-power devices.
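One way to read the dimension-based correspondence idea for the sphere case is sketched below under pinhole-camera assumptions (focal length in pixels, image coordinates relative to the principal point); the function name, inputs, and disparity tolerance are illustrative, not the paper's implementation.

```python
# Sketch: a sphere of known radius R imaged as a circle of pixel radius r
# gives depth Z ~ f*R/r, which predicts the stereo disparity d = f*B/Z,
# so left/right detections can be matched without dense point matching.
def locate_sphere(f_px, baseline_m, R_m, left_circle, right_circle,
                  disparity_tol_px=3.0):
    """left_circle / right_circle: (u, v, r) circle centre and radius in
    pixels, centres relative to the principal point. Returns (X, Y, Z) in
    metres in the left-camera frame, or None if the pair is inconsistent
    with the known sphere size."""
    u_l, v_l, r_l = left_circle
    u_r, _, _ = right_circle
    Z = f_px * R_m / r_l                   # depth from the known radius
    d_pred = f_px * baseline_m / Z         # disparity the true match implies
    if abs((u_l - u_r) - d_pred) > disparity_tol_px:
        return None                        # reject inconsistent candidate
    return (u_l * Z / f_px, v_l * Z / f_px, Z)   # back-projected centre
```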
Abstract:
Purpose: To examine the influence of two different fast-start pacing strategies on performance and oxygen consumption (V̇O2) during cycle ergometer time trials lasting ∼5 min. Methods: Eight trained male cyclists performed four cycle ergometer time trials in which the total work completed (113 ± 11.5 kJ; mean ± SD) was identical to the better of two 5-min self-paced familiarization trials. During the performance trials, initial power output was manipulated to induce either an all-out or a fast start. Power output during the first 60 s of the fast-start trial was maintained at 471.0 ± 48.0 W, whereas the all-out start approximated a maximal starting effort for the first 15 s (mean power: 753.6 ± 76.5 W) followed by 45 s at a constant power output (376.8 ± 38.5 W). Irrespective of starting strategy, power output was controlled so that participants would complete the first quarter of the trial (28.3 ± 2.9 kJ) in 60 s. Participants performed two trials using each condition, with their fastest time trials compared. Results: Performance time was significantly faster when cyclists adopted the all-out start (4 min 48 s ± 8 s) compared with the fast start (4 min 51 s ± 8 s; P < 0.05). First-quarter V̇O2 during the all-out start trial (3.4 ± 0.4 L·min⁻¹) was significantly higher than during the fast-start trial (3.1 ± 0.4 L·min⁻¹; P < 0.05). After removal of an outlier, the percentage increase in first-quarter V̇O2 was significantly correlated (r = -0.86, P < 0.05) with the relative difference in finishing time. Conclusions: An all-out start produces superior middle-distance cycling performance compared with a fast start. The improvement in performance may be due to a faster V̇O2 response rather than to time saved by rapid acceleration.
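A quick arithmetic check, using only the mean values quoted above, confirms that the two starting conditions were matched on first-quarter work:

```python
# First-quarter work under each condition, from the means in the abstract.
fast_start_J = 471.0 * 60                  # constant 471.0 W for 60 s
all_out_J = 753.6 * 15 + 376.8 * 45        # 15 s maximal + 45 s constant
print(fast_start_J, all_out_J)             # 28260.0 J each, i.e. ~28.3 kJ
```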
Abstract:
Objective: The Brief Michigan Alcoholism Screening Test (bMAST) is a 10-item test derived from the 25-item Michigan Alcoholism Screening Test (MAST). It is widely used in the assessment of alcohol dependence. In the absence of previous validation studies, the principal aim of this study was to assess the validity and reliability of the bMAST as a measure of the severity of problem drinking. Method: A total of 6,594 patients (4,854 men, 1,740 women) who had been referred for alcohol-use disorders to a hospital alcohol and drug service voluntarily participated in this study. Results: An exploratory factor analysis defined a two-factor solution, consisting of Perception of Current Drinking and Drinking Consequences factors. Structural equation modeling confirmed that the fit of a nine-item, two-factor model was superior to that of the original one-factor model. Concurrent validity was assessed through simultaneous administration of the Alcohol Use Disorders Identification Test (AUDIT) and associations with alcohol consumption and clinically assessed features of alcohol dependence. The two-factor bMAST model showed moderate correlations with the AUDIT. The two-factor bMAST and the AUDIT were similarly associated with quantity of alcohol consumption and clinically assessed dependence severity features. No differences were observed between the existing weighted scoring system and the proposed simple scoring system. Conclusions: In this study, both the existing bMAST total score and the two-factor model identified were as effective as the AUDIT in assessing problem drinking severity. There are additional advantages to employing the two-factor bMAST in the assessment and treatment planning of patients seeking treatment for alcohol-use disorders. (J. Stud. Alcohol Drugs 68: 771-779, 2007)
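The weighted-versus-simple scoring comparison can be illustrated in a few lines; the item count matches the bMAST's 10 items, but the weights and responses below are placeholders, not the published weights or patient data.

```python
# Sketch: compare a weighted total with a simple unit-scored total over
# the same yes/no item responses. Weights and data are placeholders.
import numpy as np

rng = np.random.default_rng(1)
items = rng.integers(0, 2, size=(500, 10))            # 500 cases, 10 items
weights = np.array([2, 2, 5, 1, 2, 2, 2, 5, 5, 2])    # placeholder weights
weighted_total = items @ weights
simple_total = items.sum(axis=1)
print(np.corrcoef(weighted_total, simple_total)[0, 1])  # scheme agreement
```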
Abstract:
This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook.

In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, yielding an improved overall compression rate. An approach using a statistical model of the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20-millisecond frame to below 20, using a standard spectral distortion metric for comparison.

Several fast-search VQ methods for use in speech spectrum coding have also been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken for coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology.

The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification. The motivation for this aspect of the research arose from a need to simultaneously preserve speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification in which compressed speech is involved; examples include mobile communications, where the speech has been highly compressed, and databases of speech material assembled and stored in compressed form. Although these two application areas have the same objective, that of maximizing the identification rate, the starting points are quite different: on the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form; on the other, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters that have been stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model based on the use of compressed speech are put forward.
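To make the two-stage pipeline (product-code VQ followed by lossless compression of the index stream) concrete, here is a minimal sketch; the split-vector layout, codebook sizes, random data, and the use of zlib as a stand-in for the statistical index model are all illustrative assumptions, not the thesis's actual PCVQ design.

```python
# Sketch: product-code VQ of 10-dim vectors via two independent 5-dim
# codebooks, then lossless compression of the resulting index stream.
import zlib
import numpy as np

def vq_encode(vectors, codebook):
    """Index of the nearest codeword for each row of `vectors`."""
    d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1).astype(np.uint8)   # assumes <= 256 codewords

rng = np.random.default_rng(0)
frames = rng.normal(size=(2000, 10))            # stand-in spectral vectors
cb_lo = rng.normal(size=(64, 5))                # codebook for dims 0-4
cb_hi = rng.normal(size=(64, 5))                # codebook for dims 5-9

# Product-code step: quantize each subvector independently, so the
# effective codebook is the Cartesian product of cb_lo and cb_hi.
idx_lo = vq_encode(frames[:, :5], cb_lo)
idx_hi = vq_encode(frames[:, 5:], cb_hi)

# Lossless step: zlib stands in for the statistical index model.
raw = np.concatenate([idx_lo, idx_hi]).tobytes()
print(len(raw), len(zlib.compress(raw, 9)))     # bytes before / after
```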