Abstract:
Financial processes may possess long memory, and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still a matter of debate. These difficulties pose challenges for memory detection and for modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA), which can systematically eliminate trends of different orders. This method is based on the identification of the scaling of the q-th-order moments and is a generalisation of standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with those of MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX), while long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent the short-memory and long-memory financial processes detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory results in the dynamics of the solution. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX).
The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of the data sets and then use cross-validation to verify discriminant accuracy. This classification is useful for understanding and predicting the behaviour of different processes within the same market. The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and to simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from R/S analysis, the periodogram method and MF-DFA are provided. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on second-order moments, seem to underestimate the long-memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
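The scaling analysis at the heart of Part I can be made concrete. Below is a minimal MF-DFA sketch in Python (an illustration, not the thesis code): the generalised Hurst exponent h(q) is estimated from the slope of log F_q(s) versus log s after order-m polynomial detrending, reducing to standard DFA at q = 2.

```python
# Minimal MF-DFA sketch (illustrative only, not the thesis code).
# h(q) comes from the slope of log F_q(s) against log s; q = 2 is DFA.
# (q = 0 needs the usual logarithmic-average special case, omitted here.)
import numpy as np

def mfdfa(x, scales, q_values, m=1):
    y = np.cumsum(x - np.mean(x))                         # profile
    F = np.empty((len(q_values), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(y) // s
        segs = y[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        # variance of residuals after order-m polynomial detrending
        f2 = np.array([np.mean((g - np.polyval(np.polyfit(t, g, m), t)) ** 2)
                       for g in segs])
        for i, q in enumerate(q_values):
            F[i, j] = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
    return {q: np.polyfit(np.log(scales), np.log(F[i]), 1)[0]
            for i, q in enumerate(q_values)}

# Sanity check: white noise should give h(2) close to 0.5 (short memory).
x = np.random.default_rng(0).standard_normal(10000)
print(mfdfa(x, scales=[16, 32, 64, 128, 256], q_values=[-2, 2]))
```

Long-memory series such as the exchange-rate and electricity-price data would instead yield h(2) noticeably above 0.5, and a q-dependent h(q) signals multifractality.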
Abstract:
This paper is aimed at investigating the effect of web openings on the plastic bending behaviour and section moment capacity of a new cold-formed steel beam known as the LiteSteel beam (LSB) using numerical modelling. Different LSB sections with varying circular hole diameters and spacings were considered. A simplified but appropriate numerical modelling technique was developed for the modelling of monosymmetric sections such as LSBs subject to bending, and was used to simulate a series of section moment capacity tests of LSB flexural members with web openings. The buckling and ultimate strength behaviour was investigated in detail, and the modelling technique was further improved through a comparison of numerical and experimental results. This paper describes the simplified finite element modelling technique used in this study, which includes all the significant behavioural effects affecting the plastic bending behaviour and section moment capacity of LSB sections with web holes. Numerical and test results and associated findings are also presented.
Abstract:
The new cold-formed LiteSteel beam (LSB) sections have found increasing popularity in residential, industrial and commercial buildings due to their light weight and cost-effectiveness. They have the beneficial characteristics of torsionally rigid rectangular flanges combined with economical fabrication processes. Currently there is significant interest in using LSB sections as flexural members in floor joist systems. When used as floor joists, the LSB sections require holes in the web to provide access for inspection and various services. However, there are no design methods that provide accurate predictions of the moment capacities of LSBs with web holes. In this study, the buckling and ultimate strength behaviour of LSB flexural members with web holes was investigated in a detailed parametric study based on finite element analyses, with the aim of developing appropriate design rules and recommendations for the safe design of LSB floor joists. Moment capacity curves were obtained using finite element analyses that included all the significant behavioural effects affecting ultimate member capacity. The parametric study produced the required moment capacity curves for LSB sections with a range of web hole configurations and spans. A suitable design method for predicting the ultimate moment capacity of LSBs with web holes was finally developed. This paper presents the details of this investigation and its results.
Abstract:
In children, joint hypermobility (typified by structural instability of joints) manifests clinically as neuro-muscular and musculo-skeletal conditions and conditions associated with the development and organization of control of posture and gait (Finkelstein, 1916; Jahss, 1919; Sobel, 1926; Larsson, Mudholkar, Baum and Srivastava, 1995; Murray and Woo, 2001; Hakim and Grahame, 2003; Adib, Davies, Grahame, Woo and Murray, 2005). The process of controlling the relative proportions of joint mobility and stability, whilst maintaining equilibrium in standing posture and gait, depends upon the complex interrelationship between skeletal, muscular and neurological function (Massion, 1998; Gurfinkel, Ivanenko, Levik and Babakova, 1995; Shumway-Cook and Woollacott, 1995). Its efficiency relies upon the integrity of the neuro-muscular and musculo-skeletal components (ligaments, muscles, nerves), upon the Central Nervous System’s capacity to interpret, process and integrate sensory information from visual, vestibular and proprioceptive sources (Crotts, Thompson, Nahom, Ryan and Newton, 1996; Riemann, Guskiewicz and Shields, 1999; Schmitz and Arnold, 1998), and upon the development and incorporation of this information into a representational scheme (postural reference frame) of body orientation with respect to internal and external environments (Gurfinkel et al., 1995; Roll and Roll, 1988). Sensory information from the base of support (the feet) makes a significant contribution to the development of reference frameworks (Kavounoudias, Roll and Roll, 1998). Problems with the structure and/or function of any one, or a combination, of these components or systems may result in partial loss of equilibrium and, therefore, ineffectiveness or significant reduction in the capacity to interact with the environment, which may result in disability and/or injury (Crotts et al., 1996; Rozzi, Lephart, Sterner and Kuligowski, 1999b). Whilst literature focusing upon clinical associations between joint hypermobility and conditions requiring therapeutic intervention has been abundant (Crego and Ford, 1952; Powell and Cantab, 1983; Dockery, in Jay, 1999; Grahame, 1971; Childs, 1986; Barton, Bird, Lindsay, Newton and Wright, 1995a; Rozzi et al., 1999b; Kerr, Macmillan, Uttley and Luqmani, 2000; Grahame, 2001), there has been a deficit of controlled studies in which the neuro-muscular and musculo-skeletal characteristics of children with joint hypermobility have been quantified and considered within the context of the organization of postural control in standing balance and gait. This was the aim of this project, undertaken as three studies. The major study (Study One) compared the fundamental neuro-muscular and musculo-skeletal characteristics of 15 children with joint hypermobility and 15 age- (8 and 9 years), gender-, height- and weight-matched non-hypermobile controls. Significant differences were identified between previously undiagnosed hypermobile (n=15) and non-hypermobile children (n=15) in passive joint ranges of motion of the lower limbs and lumbar spine, muscle tone of the lower leg and foot, barefoot CoP displacement and parameters of barefoot gait. Clinically relevant differences were also noted in barefoot single-leg balance time. There were no differences between groups in isometric muscle strength in ankle dorsiflexion, knee flexion or extension.
The second comparative study investigated foot morphology under non-weight-bearing and weight-bearing load conditions in the same children with and without joint hypermobility, using three-dimensional images (plaster casts) of their feet. The preliminary phase of this study evaluated the casting technique against direct measures of foot length, forefoot width, RCSP and forefoot-to-rearfoot angle. Results indicated accurate representation of elementary foot morphology within the plaster images. The comparative study examined the between- and within-group differences in measures of foot length and width, and in measures above the support surface (heel inclination angle, forefoot-to-rearfoot angle, normalized arch height, height of the widest point of the heel) in the two load conditions. Results of measures from plaster images identified that hypermobile children have different barefoot weight-bearing foot morphology above the support surface than non-hypermobile children, despite no differences in measures of foot length or width. Based upon the differences in components of control of posture and gait in the hypermobile group identified in Study One and Study Two, the final study (Study Three), using the same subjects, tested the immediate effect of specifically designed custom-made foot orthoses upon the balance and gait of hypermobile children. The design of the orthoses was evaluated against the direct measures and the measures from plaster images of the feet; this ascertained the differences in morphology between the modified casts used to mould the orthoses and the original images of the feet. The orthoses were fitted into standardized running shoes. The effect of the shoe alone was tested on the non-hypermobile children as the non-therapeutic equivalent condition. Immediate improvement in balance was noted in single-leg stance and CoP displacement in the hypermobile group, together with significant immediate improvement in the percentage of gait phases and in the percentage of the gait cycle at which maximum plantar flexion of the ankle occurred in gait. The neuro-muscular and musculo-skeletal characteristics of children with joint hypermobility are different from those of non-hypermobile children. The Beighton, Solomon and Soskolne (1973) screening criteria successfully classified joint hypermobility in children. As a result of this study, joint hypermobility has been identified as a variable which must be controlled in studies of foot morphology and function in children. The outcomes of this study provide a basis upon which to further explore the association between joint hypermobility and neuro-muscular and musculo-skeletal conditions, and have relevance for the physical education of children with joint hypermobility, for footwear and orthotic design processes and, in particular, for the clinical identification and treatment of children with joint hypermobility.
Abstract:
With globalisation and severe budget constraints in the education sector in Australia and around the world, it has become necessary for higher education institutions to be more outward looking and to seek funding from non-traditional sources to supplement financial shortfalls. One way to address this problem is to work cooperatively with other institutions to share facilities and courses, at the same time generating valuable income to maintain the operation of the university. This paper describes the development of joint curricula in built environment and engineering courses at QUT. It outlines the stages of development, starting from seeking international partners, developing memoranda of understanding, visiting partner institutions to inspect their facilities, and developing curricula to meet the academic requirements of the institutions and professional bodies, through to the implementation process.
Corneal topography with Scheimpflug imaging and videokeratography: comparative study of normal eyes
Abstract:
PURPOSE: To compare the repeatability of anterior corneal topography measurements and the agreement between measurements with the Pentacam HR rotating Scheimpflug camera and a previously validated Placido disk–based videokeratoscope (Medmont E300).
SETTING: Contact Lens and Visual Optics Laboratory, School of Optometry, Queensland University of Technology, Brisbane, Queensland, Australia.
METHODS: Normal eyes in 101 young adult subjects had corneal topography measured using the Scheimpflug camera (6 repeated measurements) and the videokeratoscope (4 repeated measurements). The best-fitting axial power corneal spherocylinder was calculated and converted into power vectors. Corneal higher-order aberrations (HOAs) (up to the 8th Zernike order) were calculated using the corneal elevation data from each instrument.
RESULTS: Both instruments showed excellent repeatability for axial power spherocylinder measurements (repeatability coefficients <0.25 diopter; intraclass correlation coefficients >0.9) and good agreement for all power vectors. Agreement between the 2 instruments was closest when the mean of multiple measurements was used in the analysis. For corneal HOAs, both instruments showed reasonable repeatability for most aberration terms and good correlation and agreement for many aberrations (eg, spherical aberration, coma, higher-order root mean square). For other aberrations (eg, trefoil and tetrafoil), the 2 instruments showed relatively poor agreement.
CONCLUSIONS: For normal corneas, the Scheimpflug system showed excellent repeatability and reasonable agreement with a previously validated videokeratoscope for the anterior corneal axial curvature best-fitting spherocylinder and several corneal HOAs. However, for certain aberrations with higher azimuthal frequencies, the Scheimpflug system had poor agreement with the videokeratoscope; thus, caution should be used when interpreting these corneal aberrations with the Scheimpflug system.
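For readers unfamiliar with the power-vector conversion mentioned in METHODS, the following is a minimal sketch of the standard spherocylinder-to-power-vector transformation (M, J0, J45, in one common sign convention); the function name and example values are illustrative, not taken from the paper.

```python
# Minimal sketch of the spherocylinder -> power-vector conversion.
# Sphere S, cylinder C (diopters) and axis alpha (degrees) map to
# M = S + C/2, J0 = -(C/2)cos(2a), J45 = -(C/2)sin(2a).
import math

def power_vectors(sphere_d, cyl_d, axis_deg):
    a = math.radians(axis_deg)
    m = sphere_d + cyl_d / 2.0                 # spherical equivalent M
    j0 = -(cyl_d / 2.0) * math.cos(2 * a)      # 0/90-degree astigmatism
    j45 = -(cyl_d / 2.0) * math.sin(2 * a)     # oblique astigmatism
    return m, j0, j45

# Example: axial power of 43.50 D with -1.25 D cylinder at axis 90
print(power_vectors(43.50, -1.25, 90.0))
```

Expressing the spherocylinder as the orthogonal components (M, J0, J45) is what allows repeatability and agreement statistics to be computed component by component, as in the study.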
Abstract:
Background: Numerous diabetes studies have investigated associations between risk factors, protective factors and health outcomes; however, these individual predictors are part of a complex network of interacting forces. Moreover, there is little awareness about resilience or its importance in chronic disease in adulthood, especially diabetes. Thus, this is the first study to: (1) extensively investigate the relationships among a host of predictors and multiple adaptive outcomes; and (2) conceptualise a resilience model among people with diabetes. Methods: This cross-sectional study was divided into two research studies. Study One translated two diabetes-specific instruments (Problem Areas In Diabetes, PAID; Diabetes Coping Measure, DCM) into Chinese and examined their psychometric properties, for use in Study Two, in a convenience sample of 205 outpatients with type 2 diabetes. In Study Two, an integrated theoretical model was developed and evaluated using the structural equation modelling (SEM) technique. A self-administered questionnaire was completed by 345 people with type 2 diabetes from the endocrine outpatient departments of three hospitals in Taiwan. Results: Confirmatory factor analyses confirmed a one-factor structure of the PAID-C, similar to the original version of the PAID. Strong content validity of the PAID-C was demonstrated. The PAID-C was associated with HbA1c and diabetes self-care behaviours, confirming satisfactory criterion validity. There was a moderate relationship between the PAID-C and the Perceived Stress Scale, supporting satisfactory convergent validity. The PAID-C also demonstrated satisfactory stability and high internal consistency. A four-factor structure and strong content validity of the DCM-C were confirmed. Criterion validity was demonstrated by significant associations of the DCM-C with HbA1c and diabetes self-care behaviours. There was a statistically significant correlation between the DCM-C and the Revised Ways of Coping Checklist, suggesting satisfactory convergent validity. Test-retest reliability demonstrated satisfactory stability of the DCM-C, and the total scale showed adequate internal consistency. Age, duration of diabetes, diabetes symptoms, diabetes distress, physical activity, coping strategies and social support were the most consistent factors associated with adaptive outcomes in adults with diabetes. Resilience was positively associated with coping strategies, social support, health-related quality of life and diabetes self-care behaviours. Results of the structural equation modelling revealed that protective factors had a significant direct effect on adaptive outcomes; however, the construct of risk factors was not significantly related to adaptive outcomes. Moreover, resilience moderated the relationships between protective factors and adaptive outcomes, but there were no interaction effects of risk factors and resilience on adaptive outcomes. Conclusion: This study contributes to an understanding of how risk factors and protective factors work together to influence adaptive outcomes in blood sugar control, health-related quality of life and diabetes self-care behaviours. Additionally, resilience is a positive personality characteristic and may be importantly involved in the adjustment process of people living with type 2 diabetes.
Abstract:
The missing-item format and the interrupted behaviour chain strategy have both been used to increase spontaneous requests among children with developmental disabilities, but their relative effectiveness has not been compared. The present study compared the extent to which each strategy evoked spontaneous requests and challenging behaviour in three children with autism. Sessions where a needed item was withheld (missing-item format) were compared to sessions involving the removal of a needed item (interrupted behaviour chain strategy). Comparisons were conducted across three activities in an alternating treatments design. Both strategies evoked spontaneous requests, with no significant difference in effectiveness. Few differences were observed in the amount of challenging behaviour evoked by the two conditions, although a moderate inverse relationship between spontaneous requesting and challenging behaviour was observed. The results suggest that these two procedures yield similar outcomes. Concurrent use of both strategies may enable teachers to create a greater number of opportunities for requesting.
Abstract:
Objectives. To evaluate the performance of the dynamic-area high-speed videokeratoscopy technique in the assessment of tear film surface quality, with and without soft contact lenses on the eye. Methods. Retrospective data from a tear film study using basic high-speed videokeratoscopy, captured at 25 frames per second (Kopf et al., 2008, J Optom), were used. Eleven subjects had tear film analysis conducted in the morning, midday and evening on the first and seventh days of one week of no lens wear. Five of the eleven subjects then completed an extra week of hydrogel lens wear followed by a week of silicone hydrogel lens wear. Analysis was performed on a 6-second period of the inter-blink recording. The dynamic-area high-speed videokeratoscopy technique uses the maximum available area of the Placido ring pattern reflected from the tear interface and eliminates regions of disturbance due to shadows from the eyelashes. A value of tear film surface quality was derived using image processing techniques, based on the quality of the reflected ring pattern orientation. Results. The group mean tear film surface quality and the standard deviations for each of the conditions (bare eye, hydrogel lens and silicone hydrogel lens) showed a much lower coefficient of variation than previous methods (an average reduction of about 92%). Bare-eye measurements from the right and left eyes of the eleven individuals were highly correlated (Pearson’s r = 0.73, p < 0.05). Repeated measures ANOVA across the 6-second measurement period in the normal inter-blink period for the bare-eye condition showed no statistically significant changes. However, across the 6-second inter-blink period with contact lenses, statistically significant changes were observed (p < 0.001) for both types of contact lens material. Overall, wearing hydrogel and silicone hydrogel lenses caused the tear film surface quality to worsen compared with the bare-eye condition (repeated measures ANOVA, p < 0.0001 for both hydrogel and silicone hydrogel). Conclusions. The results suggest that the dynamic-area method of high-speed videokeratoscopy was able to distinguish and quantify the subtle but systematic worsening of tear film surface quality in the inter-blink interval during contact lens wear. It was also able to clearly show a difference between bare-eye and contact lens wearing conditions.
Abstract:
High-speed videokeratoscopy is an emerging technique that enables study of the corneal surface and tear-film dynamics. Unlike its static predecessor, this new technique results in a very large amount of digital data for which storage needs become significant. We aimed to design a compression technique that would use mathematical functions to parsimoniously fit corneal surface data with a minimum number of coefficients. Since the Zernike polynomial functions that have been traditionally used for modeling corneal surfaces may not necessarily correctly represent given corneal surface data in terms of its optical performance, we introduced the concept of Zernike polynomial-based rational functions. Modeling optimality criteria were employed in terms of both the rms surface error as well as the point spread function cross-correlation. The parameters of approximations were estimated using a nonlinear least-squares procedure based on the Levenberg-Marquardt algorithm. A large number of retrospective videokeratoscopic measurements were used to evaluate the performance of the proposed rational-function-based modeling approach. The results indicate that the rational functions almost always outperform the traditional Zernike polynomial approximations with the same number of coefficients.
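To make the rational-function idea concrete, here is a minimal sketch (assuming a small low-order Zernike basis and synthetic data; not the authors' implementation) of fitting f = P/(1 + Q), with P and Q linear combinations of Zernike polynomials, via Levenberg-Marquardt nonlinear least squares:

```python
# Minimal sketch of a Zernike-based rational-function fit
#   f(rho, theta) = P(rho, theta) / (1 + Q(rho, theta)),
# estimated with Levenberg-Marquardt least squares. Basis size and the
# synthetic "surface" below are assumptions for illustration only.
import numpy as np
from scipy.optimize import least_squares

def zernike_basis(rho, theta):
    # First six Zernike polynomials: piston, tilts, astigmatism, defocus.
    return np.stack([
        np.ones_like(rho),
        2.0 * rho * np.sin(theta),
        2.0 * rho * np.cos(theta),
        np.sqrt(6) * rho**2 * np.sin(2 * theta),
        np.sqrt(3) * (2 * rho**2 - 1),
        np.sqrt(6) * rho**2 * np.cos(2 * theta),
    ])

def residuals(params, Z, z):
    k = Z.shape[0]
    a, b = params[:k], params[k:]     # numerator / denominator coefficients
    den = 1.0 + b @ Z[1:]             # piston excluded for identifiability
    return a @ Z / den - z

rng = np.random.default_rng(0)
rho, theta = rng.uniform(0, 1, 500), rng.uniform(0, 2 * np.pi, 500)
Z = zernike_basis(rho, theta)
z = np.sqrt(3) * (2 * rho**2 - 1) / (1 + 0.2 * rho * np.cos(theta))

x0 = np.zeros(2 * Z.shape[0] - 1)
fit = least_squares(residuals, x0, args=(Z, z), method="lm")
print("rms surface error:", np.sqrt(np.mean(fit.fun**2)))
```

The design point is that the denominator lets a rational form capture surface features that a purely polynomial expansion of the same coefficient count cannot, which is why the paper reports the rational functions almost always outperforming plain Zernike fits.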
Abstract:
Substance misuse in individuals with schizophrenia is very common, especially in young men, in communities where use is frequent and in people receiving inpatient treatment. Problematic use occurs at very low intake levels, so that most affected people are not physically dependent (with the exception of nicotine). People with schizophrenia and substance misuse have poorer symptomatic and functional outcomes than those with schizophrenia alone. Unless there is routine screening, substance misuse is often missed in assessments. Service systems tend to be separated, with poor inter-communication, and affected patients are often excluded from services because of their comorbidity. However, effective management of these disorders requires a fully integrated approach because of the close inter-relationship of the disorders. Use of atypical antipsychotics may be especially important in this population because of growing evidence (especially for clozapine and risperidone) that nicotine smoking, alcohol misuse and possibly some other substance misuse are reduced. Several pharmacotherapies for substance misuse can be used safely in people with schizophrenia, but the evidence base is small, and guidelines for their use are necessarily derived from experience in the general population.
Abstract:
Background: Diets with a high postprandial glycemic response may contribute to the long-term development of insulin resistance and diabetes; however, previous epidemiological studies conflict on whether glycemic index (GI) or glycemic load (GL) is the dietary factor associated with this progression. Our objectives were to estimate GI and GL in a group of older women and to evaluate cross-sectional associations with insulin resistance. Subjects and Methods: Subjects were 329 Australian women aged 42-81 years participating in year three of the Longitudinal Assessment of Ageing in Women (LAW). Dietary intakes were assessed by diet history interviews and analysed using a customised GI database. Insulin resistance was defined as a homeostasis model assessment (HOMA) value of >3.99, based on fasting blood glucose and insulin concentrations. Results: GL was significantly higher in the 26 subjects who were classified as insulin resistant than in those who were not (134±33 versus 114±24, P<0.001). In a logistic regression model, an increment of 15 GL units increased the odds of insulin resistance by a factor of 2.09 (95% CI 1.55 to 2.80, P<0.001), independently of potential confounding variables. No significant associations were found when insulin resistance was assessed as a continuous variable. Conclusions: The results of this cross-sectional study support the concept that diets with a higher GL are associated with an increased risk of insulin resistance. Further studies are required to investigate whether reducing glycemic intake, by consuming lower-GI foods and/or smaller serves of carbohydrate, can contribute to a reduction in the development of insulin resistance and the long-term risk of type 2 diabetes.
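As a worked illustration of the classification rule and the reported effect size: the sketch below applies the standard HOMA-IR formula (fasting glucose in mmol/L times fasting insulin in µU/mL, divided by 22.5; this formula is an assumption here, since the abstract states only the >3.99 cut-off) and scales the reported odds ratio to other GL increments. Example values are made up.

```python
# Worked illustration of the abstract's HOMA cut-off and odds scaling.
# The HOMA-IR formula is the standard one (an assumption; the abstract
# itself gives only the >3.99 threshold).

def homa_ir(glucose_mmol_l, insulin_uu_ml):
    return glucose_mmol_l * insulin_uu_ml / 22.5

def insulin_resistant(glucose_mmol_l, insulin_uu_ml, cutoff=3.99):
    return homa_ir(glucose_mmol_l, insulin_uu_ml) > cutoff

def odds_multiplier(delta_gl, or_per_15_units=2.09):
    # In a logistic model, odds scale multiplicatively with the increment.
    return or_per_15_units ** (delta_gl / 15.0)

print(homa_ir(5.5, 18.0))            # ~4.40 -> above the 3.99 cut-off
print(insulin_resistant(5.5, 18.0))  # True
print(odds_multiplier(20.0))         # odds ratio implied for +20 GL units
```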
Abstract:
The inquiry documented in this thesis is located at the nexus of technological innovation and traditional schooling. As we enter the second decade of a new century, few would argue against the increasingly urgent need to integrate digital literacies with traditional academic knowledge. Yet, despite substantial investments from governments and businesses, the adoption and diffusion of contemporary digital tools in formal schooling remain sluggish. To date, research on technology adoption in schools tends to take a deficit perspective of schools and teachers, with the lack of resources and teacher ‘technophobia’ most commonly cited as barriers to digital uptake. Corresponding interventions that focus on increasing funding and upskilling teachers, however, have made little difference to adoption trends in the last decade. Empirical evidence that explicates the cultural and pedagogical complexities of innovation diffusion within long-established conventions of mainstream schooling, particularly from the standpoint of students, is wanting. To address this knowledge gap, this thesis inquires into how students evaluate and account for the constraints and affordances of contemporary digital tools when they engage with them as part of their conventional schooling. It documents the attempted integration of a student-led Web 2.0 learning initiative, known as the Student Media Centre (SMC), into the schooling practices of a long-established, high-performing independent senior boys’ school in urban Australia. The study employed an ‘explanatory’ two-phase research design (Creswell, 2003) that combined complementary quantitative and qualitative methods to achieve both breadth of measurement and richness of characterisation. In the initial quantitative phase, a self-reported questionnaire was administered to the senior school student population to determine adoption trends and predictors of SMC usage (N=481). Measurement constructs included individual learning dispositions (learning and performance goals, cognitive playfulness and personal innovativeness) as well as social and technological variables (peer support, perceived usefulness and ease of use). Incremental predictive models of SMC usage were built using Classification and Regression Tree (CART) modelling: (i) individual-level predictors, (ii) individual and social predictors, and (iii) individual, social and technological predictors. Peer support emerged as the best predictor of SMC usage; other salient predictors included perceived ease of use and usefulness, cognitive playfulness and learning goals. On the whole, an overwhelming proportion of students reported low usage levels, low perceived usefulness and a lack of peer support for engaging with the digital learning initiative. The small minority of frequent users reported high levels of peer support and robust learning-goal orientations, rather than being predominantly driven by performance goals. These findings indicate that tensions around social validation, digital learning and academic performance pressures influence students’ engagement with the Web 2.0 learning initiative. The qualitative phase that followed provided insights into these tensions by shifting the analytic focus from individual attitudes and behaviours to the shared social and cultural reasoning practices that explain students’ engagement with the innovation. Six in-depth focus groups, comprising 60 students with different levels of SMC usage, were conducted, audio-recorded and transcribed.
Textual data were analysed using Membership Categorisation Analysis. Students’ accounts converged around a key proposition: the Web 2.0 learning initiative was useful-in-principle but useless-in-practice. While students endorsed the usefulness of the SMC for enhancing multimodal engagement, extending peer-to-peer networks and acquiring real-world skills, they also called attention to a number of constraints that obstructed the realisation of these design affordances in practice. These constraints were cast in terms of three binary formulations of social and cultural imperatives at play within the school: (i) ‘cool/uncool’, (ii) ‘dominant staff/compliant student’, and (iii) ‘digital learning/academic performance’. The first formulation foregrounds the social stigma of the SMC among peers and its resultant lack of positive network benefits. The second relates to students’ perception of the school culture as authoritarian and punitive, with adverse effects on the very student agency required to drive the innovation. The third points to academic performance pressures in a crowded curriculum with tight timelines. Taken together, findings from both phases of the study provide the following key insights. First, students endorsed the learning affordances of contemporary digital tools such as the SMC for enhancing their current schooling practices. For the majority of students, however, these learning affordances were overshadowed by the performative demands of schooling, both social and academic. The student participants saw engagement with the SMC in school as distinct from, even oppositional to, the conventional social and academic performance indicators of schooling, namely (i) being ‘cool’ (or at least ‘not uncool’), (ii) being sufficiently ‘compliant’, and (iii) achieving good academic grades. Their reasoned response, therefore, was simply to resist engagement with the digital learning innovation. Second, a small minority of students seemed dispositionally inclined to negotiate the learning affordances and performance constraints of digital learning and traditional schooling more effectively than others. These students were able to engage more frequently and meaningfully with the SMC in school. Their ability to adapt and traverse seemingly incommensurate social and institutional identities and norms is theorised as cultural agility – a dispositional construct that comprises personal innovativeness, cognitive playfulness and learning-goals orientation. The logic, then, is ‘both-and’ rather than ‘either-or’ for these individuals, who have the capacity to accommodate both learning and performance in school, whether in terms of digital engagement and academic excellence or successful brokerage across multiple social identities and institutional affiliations within the school. In sum, this study takes us beyond the familiar terrain of deficit discourses that tend to blame institutional conservatism, lack of resourcing and teacher resistance for the low uptake of digital technologies in schools. It does so by providing an empirical base for the development of a ‘third way’ of theorising technological and pedagogical innovation in schools, one which is more informed by students as critical stakeholders and thus more relevant to the lived culture within the school and its complex relationship to students’ lives outside of school. It is in this relationship that we find an explanation for how these individuals can, at the same time, be digital kids and analogue students.
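The incremental CART modelling described in the quantitative phase can be sketched as follows. scikit-learn's DecisionTreeClassifier implements CART, but this tooling and the synthetic data are assumptions for illustration; the thesis does not name its software.

```python
# Minimal stand-in for the study's three-step incremental CART design:
# individual, then individual+social, then individual+social+technological
# predictor sets. Data are synthetic; feature meanings mirror the abstract.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 481  # matches the reported sample size
X_ind = rng.normal(size=(n, 4))   # goals, playfulness, innovativeness
X_soc = rng.normal(size=(n, 1))   # peer support
X_tec = rng.normal(size=(n, 2))   # perceived usefulness, ease of use
# Synthetic outcome dominated by peer support, echoing the reported finding.
usage = (X_soc[:, 0] + 0.3 * X_tec[:, 0] + rng.normal(0, 0.5, n)) > 0.8

models = [("individual", X_ind),
          ("individual + social", np.hstack([X_ind, X_soc])),
          ("individual + social + technological", np.hstack([X_ind, X_soc, X_tec]))]
for name, X in models:
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    print(name, cross_val_score(tree, X, usage, cv=5).mean().round(2))
```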
Abstract:
A series of mobile phone prototypes called The Swarm have been developed in response to user needs identified in a three-year empirical study of young people’s use of mobile phones. The prototypes take cues from user-led innovation and provide multiple avatars that allow individuals to define and manage their own virtual identity. This paper briefly maps the evolution of the prototypes and then describes how the pre-defined, color-coded avatars in the latest version are being given greater context and personalization through the use of digital images. This not only gives ‘serendipity a nudge’ by allowing groups to come together more easily; it also provides contextual information that can reduce gratuitous contact.
Abstract:
The value of soil evidence in the forensic discipline is well known. However, it would be advantageous if an in-situ method were available that could record responses from tyre or shoe impressions in ground soil at the crime scene. The development of optical fibres and emerging portable NIR instruments has unveiled a potential methodology that could permit such an approach. The NIR spectral region contains rich chemical information in the form of overtone and combination bands of the fundamental infrared absorptions and low-energy electronic transitions. This region has, in the past, been perceived as being too complex for interpretation and was consequently scarcely utilized. The application of NIR in the forensic discipline is virtually non-existent, leaving this area open for research. NIR spectroscopy has great potential in the forensic discipline as it is simple, non-destructive and capable of rapidly providing information relating to chemical composition. The objective of this study is to investigate the ability of NIR spectroscopy combined with chemometrics to discriminate between individual soils. A further objective is to apply the NIR process to a simulated forensic scenario in which soil transfer occurs. NIR spectra were recorded from twenty-seven soils sampled from the Logan region in South-East Queensland, Australia. A series of three high-quartz soils were mixed with three different kaolinites in varying ratios and NIR spectra collected. Spectra were also collected from six soils as the temperature of the soils was ramped from room temperature up to 600°C. Finally, a forensic scenario was simulated in which the transfer of ground soil to shoe soles was investigated. Chemometric methods such as the commonly known Principal Component Analysis (PCA), the less well known fuzzy clustering (FC) and ranking by means of multicriteria decision making (MCDM) methodology were employed to interpret the spectral results. All soils were characterised using Inductively Coupled Plasma Optical Emission Spectroscopy and X-Ray Diffractometry. Results were promising, revealing that NIR combined with chemometrics is capable of discriminating between the various soils. Peak assignments were established by comparing the spectra of known minerals with the spectra collected from the soil samples. The temperature-dependent NIR analysis confirmed the assignments of the absorptions due to adsorbed and molecularly bound water. The relative intensities of the identified NIR absorptions reflected the quantitative XRD and ICP characterisation results. PCA and FC analysis of the raw soils in the initial NIR investigation revealed that the soils were primarily distinguished on the basis of their relative quartz and kaolinite contents, and to a lesser extent on the horizon from which they originated. Furthermore, PCA could distinguish between the three kaolinites used in the study, suggesting that the NIR spectral region is sensitive enough to contain information describing variation within kaolinite itself. In the forensic scenario simulation, PCA successfully discriminated between the ‘Backyard Soil’ and ‘Melcann® Sand’, as well as between the two sampling methods employed. Further PCA exploration revealed that it was possible to distinguish between the various shoes used in the simulation. In addition, it was possible to establish an association between specific sampling sites on the shoe and the corresponding sites remaining in the impression.
The forensic application revealed some limitations of the process relating to the moisture content and homogeneity of the soil. Both limitations can be overcome by simple sampling practices and by maintaining the original integrity of the soil. The results from the forensic scenario simulation proved that the concept shows great promise in the forensic discipline.
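The PCA-based discrimination of NIR spectra described above can be illustrated with a minimal sketch (synthetic spectra and assumed tooling, not the thesis workflow). The idea is that soils differing in kaolinite content separate along the leading principal components of their spectra.

```python
# Minimal sketch of PCA-based discrimination of NIR soil spectra.
# Spectra are synthetic: a quartz-like baseline plus a kaolinite-like
# OH combination band near 2200 nm, with measurement noise.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
wavelengths = np.linspace(1100, 2500, 350)   # typical NIR range, nm

def soil_spectrum(kaolinite_frac):
    band = np.exp(-((wavelengths - 2200) / 40.0) ** 2)
    return 0.2 + 0.5 * kaolinite_frac * band + rng.normal(0, 0.01, wavelengths.size)

fracs = np.repeat([0.1, 0.5, 0.9], 9)        # 27 samples, 3 soil types
X = np.array([soil_spectrum(f) for f in fracs])

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
# Soils with different kaolinite content should separate along PC1.
for f in (0.1, 0.5, 0.9):
    print(f, scores[fracs == f, 0].mean().round(2))
```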