912 results for Rotter I-E test
Abstract:
This work details the results of a face authentication test (FAT2004) (http://www.ee.surrey.ac.uk/banca/icpr2004) held in conjunction with the 17th International Conference on Pattern Recognition. The contest was held on the publicly available BANCA database (http://www.ee.surrey.ac.uk/banca) according to a defined protocol (E. Bailly-Bailliere et al., June 2003). The competition also had a sequestered part in which institutions had to submit their algorithms for independent testing. Thirteen verification algorithms from 10 institutions were entered. In addition, a standard set of face recognition software packages from the Internet (http://www.cs.colostate.edu/evalfacerec) was used to provide a baseline performance measure.
Abstract:
While in many travel situations there is an almost limitless range of available destinations, travellers will usually only actively consider two to six in their decision set. One of the greatest challenges facing destination marketers is positioning their destination, against the myriad of competing places that offer similar features, into consumer decision sets. Since positioning requires a narrow focus, marketing communications must present a succinct and meaningful proposition, the selection of which is often problematic for destination marketing organisations (DMOs), which deal with a diverse and often eclectic range of attributes in addition to self-interested and demanding stakeholders who have interests in different market segments. This paper reports the application of two qualitative techniques used to explore the range of cognitive attributes, consequences and personal values that represent potential positioning opportunities in the context of short-break holidays. The Repertory Test is an effective technique for understanding the salient attributes used by a traveller to differentiate destinations, and Laddering Analysis enables the researcher to explore the smaller set of consequences and personal values guiding such decision making. A key finding of the research was that while individuals might vary in their repertoire of salient attributes, there was a commonality of shared consequences and values. This has important implications for DMOs, since a brand positioning theme that is based on a value will subsume multiple and diverse attributes. It is posited that such a theme will appeal to a broader range of travellers, as well as appease a greater number of destination stakeholders, than would an attribute-based theme.
Abstract:
Mechanical damage such as bruising, collision and impact during food processing diminishes the quality and quantity of produce as well as the efficiency of operations. Studying the mechanical characteristics of food materials will help to improve current industrial practice. The mechanical properties of fruits and vegetables describe how these materials behave under loading in real industrial operations, and optimising and designing more efficient equipment requires accurate and precise information on tissue behaviour. Finite element (FE) modelling of food industrial processes is an effective method of studying the interrelation of variables during mechanical operations. In this study, an empirical investigation of the mechanical properties of pumpkin peel was conducted as part of the FE modelling and simulation of the mechanical peeling stage for tough-skinned vegetables. A compression test was conducted on the Jap variety of pumpkin, and the stress-strain curve, bio-yield and toughness of the pumpkin skin were calculated. The energy required to reach the bio-yield point was 493.75, 507.71 and 451.71 N.mm for loading speeds of 1.25, 10 and 20 mm/min respectively. The average force at the bio-yield point for pumpkin peel was 310 N.
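The bio-yield energies quoted above are, in effect, areas under the force-displacement curve up to the bio-yield point. A minimal sketch of that calculation using the trapezoidal rule; the data arrays below are purely illustrative, not the measured values from this study.

```python
# Hypothetical sketch: energy absorbed up to the bio-yield point approximated
# as the area under the force-displacement curve (trapezoidal rule).
# The sample data below are illustrative, not measured values.

def energy_to_bioyield(displacement_mm, force_n):
    """Trapezoidal integration of force (N) over displacement (mm) -> energy in N.mm."""
    energy = 0.0
    for i in range(1, len(displacement_mm)):
        dx = displacement_mm[i] - displacement_mm[i - 1]
        energy += 0.5 * (force_n[i] + force_n[i - 1]) * dx
    return energy

# Illustrative loading curve ending at a bio-yield force of ~310 N
disp = [0.0, 1.0, 2.0, 3.0, 4.0]           # mm
force = [0.0, 60.0, 140.0, 230.0, 310.0]   # N

print(energy_to_bioyield(disp, force))     # energy in N.mm for this toy curve
```

With a finely sampled experimental curve the same function would reproduce energies of the order reported above (roughly 450-510 N.mm).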
Abstract:
Conceptual modeling continues to be an important means for graphically capturing the requirements of an information system. Observations of modeling practice suggest that modelers often use multiple modeling grammars in combination to articulate various aspects of real-world domains. We extend an ontological theory of representation to suggest why and how users employ multiple conceptual modeling grammars in combination. We provide an empirical test of the extended theory using survey data and structured interviews about the use of traditional and structured analysis grammars within an automated tool environment. We find that users of the analyzed tool combine grammars to overcome the ontological incompleteness that exists in each grammar. Users also selected their starting grammar only from a predicted subset of grammars. The qualitative data provide insights into why some of the predicted deficiencies manifest in practice differently than expected.
Abstract:
Purpose. To determine how Developmental Eye Movement (DEM) test results relate to reading eye movement patterns recorded with the Visagraph in visually normal children, and whether DEM results and recorded eye movement patterns relate to standardized reading achievement scores. Methods. Fifty-nine school-age children (age = 9.7 ± 0.6 years) completed the DEM test and had eye movements recorded with the Visagraph III test while reading for comprehension. Monocular visual acuity in each eye and random dot stereoacuity were measured, and standardized scores on independently administered reading comprehension tests [reading progress test (RPT)] were obtained. Results. Children with slower DEM horizontal and vertical adjusted times tended to have slower reading rates with the Visagraph (r = -0.547 and -0.414, respectively). Although a significant correlation was also found between the DEM ratio and Visagraph reading rate (r = -0.368), the strength of the relationship was less than that between DEM horizontal adjusted time and reading rate. DEM outcome scores were not significantly associated with RPT scores. When the relative contribution of reading ability (RPT) and DEM scores was accounted for in multivariate analysis, DEM outcomes were not significantly associated with Visagraph reading rate. RPT scores were associated with the Visagraph outcomes of duration of fixations (r = -0.403) and calculated reading rate (r = 0.366) but not with DEM outcomes. Conclusions. DEM outcomes can identify children whose Visagraph-recorded eye movement patterns show slow reading rates. However, when reading ability is accounted for, DEM outcomes are a poor predictor of reading rate. Visagraph outcomes of duration of fixation and reading rate relate to standardized reading achievement scores; DEM results do not. Copyright © 2011 American Academy of Optometry.
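The associations reported above are Pearson correlation coefficients. A minimal pure-Python sketch of that calculation, with toy data chosen only to mimic the negative DEM-time versus reading-rate relationship (these are not the study's data):

```python
# Pearson correlation coefficient, computed from first principles.
# The paired values below are illustrative, not study data.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Slower DEM adjusted times paired with slower reading rates -> negative r
dem_time = [40, 45, 50, 55, 60]          # hypothetical DEM times (s)
reading_rate = [160, 150, 130, 125, 110] # hypothetical words per minute

print(round(pearson_r(dem_time, reading_rate), 3))
```

A strongly monotone-decreasing toy relationship such as this yields r close to -1; the study's reported values (e.g. r = -0.547) reflect noisier real data.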
Abstract:
Background: Queensland men aged 50 years and older are at high risk of melanoma. Early detection via skin self-examination (SSE), particularly whole-body SSE, followed by presentation to a doctor with suspicious lesions, may decrease morbidity and mortality from melanoma. The prevalence of whole-body SSE (wbSSE) is lower in older Queensland men than in other population subgroups. With the exception of the present study, no previous research has investigated the determinants of wbSSE in older men, or interventions to increase the behaviour in this population. Furthermore, although past SSE intervention studies for other populations have cited health behaviour models in the development of interventions, no study has tested these models in full. The Skin Awareness Study: A recent randomised trial, the Skin Awareness Study, tested the impact of a video-delivered intervention compared to written materials alone on wbSSE in men aged 50 years or older (n=930). Men were recruited from the general population and interviewed over the telephone at baseline and at 13 months. The proportion of men who reported wbSSE rose from 10% to 31% in the control group, and from 11% to 36% in the intervention group. Current research: The current research was a secondary analysis of data collected for the Skin Awareness Study. The objectives were as follows: • To describe how men who did not take up any SSE during the study period differed from those who did take up examining their skin. • To determine whether the intervention program was successful in affecting the constructs of the Health Belief Model it was aimed at (self-efficacy, perceived threat, and outcome expectations), and whether this in turn influenced wbSSE. • To determine whether the Health Action Process Approach (HAPA) was a better predictor of wbSSE behaviour than the Health Belief Model (HBM). 
Methods: For objective 1, men who did not report any past SSE at baseline (n=308) were categorised as having ‘taken up SSE’ (reported SSE at study end) or ‘resisted SSE’ (reported no SSE at study end). Bivariate logistic regression, followed by multivariable regression, investigated the association between participant characteristics measured at baseline and resisting SSE. For objective 2, proxy measures of self-efficacy, perceived threat, and outcome expectations were selected. To determine whether these mediated the effect of the intervention on the outcome, a mediator analysis was performed with all participants who completed interviews at both time points (n=830), following the Baron and Kenny approach modified for use with structural equation modelling (SEM). For objective 3, only control group participants were included (n=410). Proxy measures of all HBM and HAPA constructs were selected, and SEM was used to build up models and test the significance of each hypothesised pathway. A likelihood ratio test compared the HAPA to the HBM. Results: Amongst men who did not report any SSE at baseline, 27% did not take up any SSE by the end of the study. In multivariable analyses, resisting SSE was associated with having more freckly skin (p=0.027); being unsure about the statement ‘if I saw something suspicious on my skin, I’d go to the doctor straight away’ (p=0.028); not intending to perform SSE (p=0.015); having lower SSE self-efficacy (p<0.001); and having no recommendation for SSE from a doctor (p=0.002). In the mediator analysis, none of the tested variables mediated the relationship between the intervention and wbSSE. With regard to health behaviour models, the HBM did not predict wbSSE well overall: only the construct of self-efficacy was a significant predictor of future wbSSE (p=0.001), while neither perceived threat (p=0.584) nor outcome expectations (p=0.220) were. 
By contrast, when the HAPA constructs were added, all three HBM variables predicted intention to perform SSE, which in turn predicted future behaviour (p=0.015). The HAPA construct of volitional self-efficacy was also associated with wbSSE (p=0.046). The HAPA was a significantly better model than the HBM (p<0.001). Limitations: Items selected to measure HBM and HAPA model constructs for objectives 2 and 3 may not have accurately reflected each construct. Conclusions: This research added to the evidence base on how best to target interventions to older men, and on the appropriateness of particular health behaviour models to guide interventions. The findings indicate that, to overcome resistance, men with more negative pre-existing attitudes to SSE (not intending to do it, lower initial self-efficacy) may need to be targeted with more intensive interventions in the future. Involving general practitioners in recommending SSE to their patients in this population, alongside disseminating an intervention, may increase its success. Comparison of the HBM and HAPA showed that while two of the three HBM variables examined did not directly predict future wbSSE, all three were associated with intention to self-examine skin. This suggests that, in this population, intervening on these variables may increase intention to examine skin, but not necessarily the behaviour itself. Future interventions could potentially focus on increasing both the motivational variables of perceived threat and outcome expectations and a combination of action and volitional self-efficacy, with the aim of increasing intention as well as its translation into taking up and maintaining regular wbSSE.
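The mediator analysis described above rests on the Baron and Kenny logic: the intervention must first predict the mediator, and the intervention must predict the outcome, before the mediated pathway is tested. A toy sketch of those first two steps using simple least-squares slopes; all data below are illustrative, not the study's, and the full procedure (including the mediator-controlling regression and the SEM adaptation) is not shown.

```python
# Hedged sketch of the first two Baron-Kenny mediation steps using
# ordinary least-squares slopes. Data are illustrative, not study data.

def slope(x, y):
    """Least-squares regression slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

# Illustrative data: intervention -> self-efficacy (mediator) -> SSE uptake
group = [0, 0, 0, 0, 1, 1, 1, 1]                 # control vs intervention
efficacy = [2, 3, 2, 3, 4, 5, 4, 5]              # hypothetical mediator proxy
sse = [0.2, 0.3, 0.2, 0.3, 0.6, 0.8, 0.6, 0.8]   # hypothetical outcome proxy

path_a = slope(group, efficacy)  # step 1: intervention predicts the mediator
path_c = slope(group, sse)       # step 2: intervention predicts the outcome
print(path_a > 0 and path_c > 0)
```

In the study's data, step 1 failed for the tested proxies (the intervention did not move them), which is why no mediation was found.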
Abstract:
Robust speaker verification on short utterances remains a key consideration when deploying automatic speaker recognition, as many real-world applications often have access to only limited-duration speech data. This paper explores how recent technologies centred around total variability modeling behave when training and testing utterance lengths are reduced. Results are presented comparing Joint Factor Analysis (JFA) and i-vector based systems, including various compensation techniques: Within-Class Covariance Normalization (WCCN), LDA, Scatter Difference Nuisance Attribute Projection (SDNAP) and Gaussian Probabilistic Linear Discriminant Analysis (GPLDA). Speaker verification performance for utterances with as little as 2 sec of data taken from the NIST Speaker Recognition Evaluations is presented to provide a clearer picture of the current performance characteristics of these techniques in short utterance conditions.
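For readers unfamiliar with i-vector back-ends: the simplest scoring used with total variability modelling is the cosine similarity between an enrolment i-vector and a test i-vector (compensation steps such as WCCN or LDA are applied to the vectors beforehand). A minimal sketch with toy three-dimensional vectors; real i-vectors are typically several hundred dimensions, and the values below are illustrative.

```python
# Cosine-similarity scoring between two i-vectors, the simplest back-end
# for total variability modelling. Vectors here are toy values.
import math

def cosine_score(w_enrol, w_test):
    """Cosine similarity between enrolment and test i-vectors."""
    dot = sum(a * b for a, b in zip(w_enrol, w_test))
    norm = math.sqrt(sum(a * a for a in w_enrol)) * math.sqrt(sum(b * b for b in w_test))
    return dot / norm

enrol = [0.2, -0.1, 0.4]     # hypothetical enrolment i-vector
test = [0.25, -0.05, 0.35]   # hypothetical test i-vector

print(cosine_score(enrol, test) > 0.9)  # similar vectors score near 1
```

Verification then reduces to comparing this score against a threshold; shorter utterances yield noisier i-vector estimates, which is why scores degrade as duration shrinks.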
Abstract:
We have previously reported that novel vitronectin:growth factor (VN:GF) complexes significantly increase re-epithelialization in a porcine deep dermal partial-thickness burn model. However, the potential exists to further enhance the healing response through combination with an appropriate delivery vehicle which facilitates sustained local release and reduced doses of VN:GF complexes. Hyaluronic acid (HA), an abundant constituent of the interstitium, is known to function as a reservoir for growth factors and other bioactive species. The physicochemical properties of HA confer it with an ability to sustain elevated pericellular concentrations of these species. This has been proposed to arise via HA prolonging interactions of the bioactive species with cell surface receptors and/or protecting them from degradation. In view of this, the potential of HA to facilitate the topical delivery of VN:GF complexes was evaluated. Two-dimensional (2D) monolayer cell cultures and 3D de-epidermised dermis (DED) human skin equivalent (HSE) models were used to test skin cell responses to HA and VN:GF complexes. Our 2D studies revealed that VN:GF complexes and HA stimulate the proliferation of human fibroblasts but not keratinocytes. Experiments in our 3D DED-HSE models showed that VN:GF complexes, both alone and in conjunction with HA, led to enhanced development of both the proliferative and differentiating layers in the DED-HSE models. However, there was no significant difference between the thicknesses of the epidermis treated with VN:GF complexes alone and VN:GF complexes together with HA. While the addition of HA did not enhance all the cellular responses to VN:GF complexes examined, it was not inhibitory, and may confer other advantages related to enhanced absorption and transport that could be beneficial in delivery of the VN:GF complexes to wounds.
Abstract:
The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers and one poster presentation, of which five have been published and the other two are under review. This project was financially supported by a QUTPRA grant. The twenty-first century started with the re-emergence of lignocellulosic biomass as a potential substitute for petrochemicals. Petrochemicals, which underpinned sustained economic growth during the past century, have reached or are approaching their peak. The world energy situation is complicated by political uncertainty and by the environmental impact associated with petrochemical import and usage. In particular, greenhouse gases and toxic emissions produced from petrochemicals have been implicated as a significant cause of climate change. Lignocellulosic biomass (e.g. sugarcane biomass and bagasse), which enjoys a more abundant, widely distributed and cost-effective resource base, can play an indispensable role in the transition from a fossil-based to a carbohydrate-based economy. Poly(3-hydroxybutyrate) (PHB) has attracted much commercial interest as a biodegradable plastic because some of its physical properties are similar to those of polypropylene (PP), even though the two polymers have quite different chemical structures. PHB exhibits a high degree of crystallinity, has a high melting point of approximately 180°C and, most importantly, unlike PP, is rapidly biodegradable. Two major factors currently inhibit the widespread use of PHB: its high cost and its poor mechanical properties. The production costs of PHB are significantly higher than those of plastics produced from petrochemical resources (e.g. 
PP costs $US1 kg-1, whereas PHB costs $US8 kg-1), and its stiff and brittle nature makes processing difficult and impedes its ability to handle high impact. Lignin, cellulose and hemicellulose are the three main components of every lignocellulosic biomass. Lignin is a natural polymer occurring in the plant cell wall and, after cellulose, is the most abundant polymer in nature. It is extracted mainly as a by-product in the pulp and paper industry. Although lignin is traditionally burnt in industry for energy, it has many value-added properties. Lignin, which to date has been under-exploited, is an amorphous polymer with hydrophobic behaviour. These properties make it a good candidate for blending with PHB; technically, blending can be a viable route to price reduction and enhanced production properties. Theoretically, lignin and PHB affect each other's physicochemical properties when they are miscible in a composite. A comprehensive study of the structural, thermal, rheological and environmental properties of lignin/PHB blends, together with neat lignin and PHB, is the scope of this thesis. An introduction to this research, including a description of the research problem, a literature review and an account of the research progress linking the research papers, is presented in Chapter 1. In this research, lignin was obtained from bagasse through extraction with sodium hydroxide. A novel two-step pH precipitation procedure was used to recover soda lignin with a purity of 96.3 wt% from the black liquor (i.e. the spent sodium hydroxide solution). The precipitation process is presented in Chapter 2. A sequential solvent extraction process was used to fractionate the soda lignin into three fractions. These fractions, together with the soda lignin, were characterised to determine elemental composition, purity, carbohydrate content, molecular weight and functional group content. The thermal properties of the lignins were also determined. 
The results are presented and discussed in Chapter 2. On the basis of the type and quantity of functional groups, attempts were made to identify potential applications for each of the individual lignins. As an addendum to the general section on the development of lignin composite materials, which includes Chapters 1 and 2, studies on the kinetics of bagasse thermal degradation are presented in Appendix 1. The work showed that distinct stages of mass loss depend on residual sucrose. As the development of value-added products from lignin will improve the economics of cellulosic ethanol, a review of lignin applications, including lignin/PHB composites, is presented in Appendix 2. Chapters 3, 4 and 5 are dedicated to investigations of the properties of soda lignin/PHB composites. Chapter 3 reports on the thermal stability and miscibility of the blends. Although the addition of soda lignin shifts the onset of PHB decomposition to lower temperatures, the lignin/PHB blends are thermally more stable over a wider temperature range. The results of the thermal study also indicated that blends containing up to 40 wt% soda lignin were miscible. The Tg data for these blends fitted well to the Gordon-Taylor and Kwei models. Fourier transform infrared (FT-IR) spectroscopy showed that the miscibility of the blends was due to specific hydrogen bonding (and similar interactions) between the reactive phenolic hydroxyl groups of lignin and the carbonyl group of PHB. The thermophysical and rheological properties of soda lignin/PHB blends are presented in Chapter 4. In this chapter, the kinetics of thermal degradation of the blends is studied using thermogravimetric analysis (TGA). This preliminary investigation is limited to the processing temperature of blend manufacturing. Of significance in this study is the drop in the apparent activation energy, Ea, from 112 kJ mol-1 for pure PHB to half that value for the blends. 
This means that the addition of lignin to PHB reduces the thermal stability of PHB, and that the comparatively reduced weight loss observed in the TGA data is associated with the slower rate of lignin degradation in the composite. The Tg of PHB, as well as its melting temperature, melting enthalpy and crystallinity, decreases with increasing lignin content. Results from the rheological investigation showed that at low lignin content (≤30 wt%) lignin acts as a plasticiser for PHB, while at high lignin content it acts as a filler. Chapter 5 is dedicated to an environmental study of soda lignin/PHB blends. The biodegradability of lignin/PHB blends was compared to that of PHB using the standard soil burial test. To obtain acceptable biodegradation data, samples were buried for 12 months under controlled conditions. Gravimetric analysis, TGA, optical microscopy, scanning electron microscopy (SEM), differential scanning calorimetry (DSC), FT-IR and X-ray photoelectron spectroscopy (XPS) were used in the study. The results clearly demonstrated that lignin retards the biodegradation of PHB, and that the miscible blends were more resistant to degradation than the immiscible blends. To understand the relationship between the structure of lignin and the properties of the blends, a methanol-soluble lignin, which contains three times less phenolic hydroxyl content than its parent soda lignin (used in preparing the blends reported in Chapters 3 and 4), was blended with PHB and the properties of the blends investigated. The results are reported in Chapter 6. At up to 40 wt% methanol-soluble lignin, the experimental data fitted the Gordon-Taylor and Kwei models, similar to the results obtained for soda lignin-based blends. However, the values obtained for the interaction parameters of the methanol-soluble lignin blends were slightly lower than those of the soda lignin blends, indicating a weaker association between methanol-soluble lignin and PHB. 
FT-IR data confirmed that hydrogen bonding is the main interactive force between the reactive functional groups of lignin and the carbonyl group of PHB. In summary, the structural differences between the two lignins did not manifest themselves in the properties of their blends.
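The Gordon-Taylor and Kwei fits mentioned above take a standard form: the blend Tg is a k-weighted average of the component Tgs, to which the Kwei model adds an interaction term q·w1·w2. A hedged sketch of both expressions; the Tg values and the k, q parameters below are assumed for illustration, not taken from the thesis.

```python
# Hedged sketch of the Gordon-Taylor and Kwei expressions used to fit blend
# Tg data. All numeric inputs below are illustrative assumptions.

def gordon_taylor(w2, tg1, tg2, k):
    """Tg of a miscible binary blend; w2 is the weight fraction of component 2."""
    w1 = 1.0 - w2
    return (w1 * tg1 + k * w2 * tg2) / (w1 + k * w2)

def kwei(w2, tg1, tg2, k, q):
    """Kwei model: Gordon-Taylor plus an interaction term q * w1 * w2."""
    w1 = 1.0 - w2
    return gordon_taylor(w2, tg1, tg2, k) + q * w1 * w2

# Illustrative: PHB Tg ~ 4 C, a lignin Tg ~ 140 C (assumed values)
tg_blend = gordon_taylor(0.3, 4.0, 140.0, 1.0)
print(round(tg_blend, 1))  # with k = 1 this reduces to a mass-weighted average
```

In practice k (and q for Kwei) are obtained by fitting measured Tg values across compositions; a larger interaction parameter indicates a stronger lignin-PHB association, which is how the soda lignin and methanol-soluble lignin blends were compared.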
Abstract:
Metallic materials exposed to oxygen-enriched atmospheres – as commonly used in the medical, aerospace, aviation and numerous chemical processing industries – represent a significant fire hazard which must be addressed during design, maintenance and operation. Hence, accurate knowledge of metallic material flammability is required. Reduced gravity (i.e. space-based) operations present additional unique concerns, where the absence of gravity must also be taken into account. The flammability of metallic materials has historically been quantified using three standardised test methods developed by NASA, ASTM and ISO. These tests typically involve the forceful (promoted) ignition of a test sample (typically a 3.2 mm diameter cylindrical rod) in pressurised oxygen. A test sample is defined as flammable when it undergoes burning that is independent of the ignition process utilised. In the standardised tests, this is indicated by the propagation of burning further than a defined amount, or 'burn criterion'. The burn criterion in use at the onset of this project was arbitrarily selected, did not accurately reflect the length a sample must burn in order to be burning independently of the ignition event and, in some cases, required complete consumption of the test sample for a metallic material to be considered flammable. It has been demonstrated that a) a metallic material's propensity to support burning is altered by any increase in test sample temperature greater than ~250-300 °C, and b) promoted ignition causes an increase in temperature of the test sample in the region closest to the igniter, a region referred to as the Heat Affected Zone (HAZ). If a test sample continues to burn past the HAZ (where the HAZ is defined as the region of the test sample above the igniter that undergoes an increase in temperature of greater than or equal to 250 °C by the end of the ignition event), it is burning independently of the igniter, and should be considered flammable. 
The extent of the HAZ, therefore, can be used to justify the selection of the burn criterion. A two-dimensional mathematical model was developed in order to predict the extent of the HAZ created in a standard test sample by a typical igniter. The model was validated against previous theoretical and experimental work performed in collaboration with NASA, and then used to predict the extent of the HAZ for different metallic materials in several configurations. The predicted extent of the HAZ varied significantly, ranging from ~2-27 mm depending on the test sample's thermal properties and the test conditions (i.e. pressure). The magnitude of the HAZ was found to increase with increasing thermal diffusivity and with decreasing pressure (due to slower ignition times). Based upon the findings of this work, a new burn criterion requiring 30 mm of the test sample to be consumed (from the top of the ignition promoter) was recommended and validated. This new burn criterion was subsequently included in the latest revisions of the ASTM G124 and NASA 6001B international test standards that are used to evaluate metallic material flammability in oxygen. These revisions also have the added benefit of enabling reduced gravity metallic material flammability testing in strict accordance with the ASTM G124 standard, allowing measurement and comparison of the relative flammability (i.e. Lowest Burn Pressure (LBP), Highest No-Burn Pressure (HNBP) and average Regression Rate of the Melting Interface (RRMI)) of metallic materials in normal and reduced gravity, as well as determination of the applicability of normal gravity test results to reduced gravity use environments. This is important, as most space-based applications currently use normal gravity information in order to qualify systems and/or components for reduced gravity use. This is shown here to be non-conservative for metallic materials, which are more flammable in reduced gravity. 
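The HAZ idea can be illustrated with a far simpler model than the two-dimensional one described above: a one-dimensional explicit finite-difference conduction sketch along the rod, where the HAZ is taken as the contiguous length heated at least 250 °C above ambient by the end of a fixed heating of one end. All material properties, times and boundary values below are illustrative assumptions, not the thesis's inputs.

```python
# Hedged 1-D sketch of the HAZ concept: explicit finite-difference heat
# conduction along a rod heated at one end, with the HAZ defined as the
# contiguous length whose temperature rise reaches 250 C by the end of
# heating. All parameters are illustrative, not the thesis values.

def haz_length(alpha, dx, dt, steps, t_hot=1500.0, t0=20.0, rise=250.0):
    """Return the HAZ length (m) for thermal diffusivity alpha (m^2/s)."""
    n = 200                      # nodes along the rod, spaced dx apart
    temp = [t0] * n
    r = alpha * dt / dx ** 2     # explicit-scheme stability requires r <= 0.5
    assert r <= 0.5, "explicit scheme unstable"
    for _ in range(steps):
        temp[0] = t_hot          # heated (igniter) end held at t_hot
        new = temp[:]
        for i in range(1, n - 1):
            new[i] = temp[i] + r * (temp[i + 1] - 2 * temp[i] + temp[i - 1])
        temp = new
    haz = 0                      # contiguous nodes from the heated end with
    for t in temp:               # a temperature rise of at least `rise`
        if t - t0 >= rise:
            haz += 1
        else:
            break
    return haz * dx

# Illustrative: alpha ~ 4e-6 m^2/s (stainless-steel-like), 1 mm grid, 5 s heating
print(haz_length(alpha=4e-6, dx=1e-3, dt=0.1, steps=50))
```

Even this toy model reproduces the qualitative finding above: the HAZ length grows with thermal diffusivity and with heating time, and for steel-like properties it falls within the ~2-27 mm range the thesis reports.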
The flammability of two metallic materials, Inconel® 718 and 316 stainless steel (both commonly used to manufacture components for oxygen service in terrestrial and space-based systems), was evaluated in normal and reduced gravity using the new ASTM G124-10 test standard. This allowed direct comparison of the flammability of the two metallic materials in normal and reduced gravity. The results of this work clearly show, for the first time, that metallic materials are more flammable in reduced gravity than in normal gravity when testing is conducted as described in the ASTM G124-10 test standard. This was the case in terms of both higher regression rates (i.e. faster consumption of the test sample – fuel) and burning at lower pressures in reduced gravity. Specifically, it was found that the LBP for 3.2 mm diameter test samples decreased by 50%, from 3.45 MPa (500 psia) in normal gravity to 1.72 MPa (250 psia) in reduced gravity, for the Inconel® 718, and by 20%, from 3.45 MPa (500 psia) in normal gravity to 2.76 MPa (400 psia) in reduced gravity, for the 316 stainless steel. The average RRMI increased by factors of approximately 2.1 (27.2 mm/s in 2.24 MPa (325 psia) oxygen in reduced gravity compared to 12.8 mm/s in 4.48 MPa (650 psia) oxygen in normal gravity) for the Inconel® 718 and 1.6 (15.0 mm/s in 2.76 MPa (400 psia) oxygen in reduced gravity compared to 9.5 mm/s in 5.17 MPa (750 psia) oxygen in normal gravity) for the 316 stainless steel. Reasons for the increased flammability of metallic materials in reduced gravity compared to normal gravity are discussed, based upon observations made during reduced gravity testing and previous work. Finally, the implications of these results for fire safety and engineering applications are presented and discussed, in particular examining methods for mitigating the risk of a fire in reduced gravity.
Abstract:
Evidence that our food environment can affect meal size is often taken to indicate a failure of ‘conscious control’. By contrast, our research suggests that ‘expected satiation’ (the fullness that a food is expected to confer) predicts self-selected meal size. However, the role of meal planning as a determinant of actual meal size remains unresolved, as does the extent to which meal planning is commonplace outside the laboratory. Here, we quantified meal planning and its relation to meal size in a large-cohort study. Participants (N = 764; mean age 25.6 years; 78% female) completed a questionnaire containing items relating to their last meal. The majority (91%) of meals were consumed in their entirety. Furthermore, in 92% of these cases the participants decided to consume the whole meal even before it began. A second major objective was to explore the prospect that meal plans are revised based on within-meal experience (e.g., development of satiation). Only 8% of participants reported ‘unexpected’ satiation that caused them to consume less than anticipated. Moreover, at the end of the meal 57% indicated that they were not fully satiated, and 29% continued eating beyond comfortable satiation (often to avoid wasting food). This pattern was moderated neither by BMI nor by dieting status, and was observed across meal types. Together, these data indicate that meals are often planned and that planning corresponds closely with the amount consumed. By contrast, we find limited evidence for within-meal modification of these plans, suggesting that ‘pre-meal cognition’ is an important determinant of meal size in humans.
Abstract:
In this paper I examine the recent arguments by Charles Foster, Jonathan Herring, Karen Melham and Tony Hope against the utility of the doctrine of double effect. One basis on which they reject the utility of the doctrine is their claim that it is notoriously difficult to apply what they identify as its 'core' component, namely, the distinction between intention and foresight. It is this contention that is the primary focus of my article. I argue against this claim that the intention/foresight distinction remains a fundamental part of the law in those jurisdictions where intention remains an element of the offence of murder and that, accordingly, it is essential to resolve the putative difficulties of applying the intention/foresight distinction so as to ensure the integrity of the law of murder. I argue that the main reasons advanced for the claim that the intention/foresight distinction is difficult to apply are ultimately unsustainable, and that the distinction is not as difficult to apply as the authors suggest.