Abstract:
Organizations adopt a Supply Chain Management System (SCMS) expecting benefits to the organization and its functions. However, organizations face mounting challenges in realizing benefits through SCMS. Studies suggest growing dissatisfaction among client organizations due to an increasing gap between expected and realized SCMS benefits. Further, reflecting Enterprise System studies such as Seddon et al. (2010), SCMS benefits are also expected to flow to the organization throughout its lifecycle rather than being realized all at once. This research therefore proposes to derive a lifecycle-wide understanding of SCMS benefits and their realization, and to develop a benefit expectation management framework for attaining the full potential of an SCMS. The primary research question of this study is: How can client organizations better manage their benefit expectations of SCM systems? The specific research goals of the current study are: (1) to better understand the misalignment of received and expected benefits of SCM systems; (2) to identify the key factors influencing SCM system expectations and to develop a framework to manage SCMS benefits; (3) to explore how organizational satisfaction is influenced by the lack of SCMS benefit confirmation; and (4) to explore how to improve the realization of SCM system benefits. Expectation-Confirmation Theory (ECT) provides the theoretical underpinning for this study. ECT has been widely used in the consumer behavior literature to study customer satisfaction, post-purchase behavior and service marketing in general. Recently, ECT has been extended into Information Systems (IS) research, focusing on individual user satisfaction and IS continuance. However, only a handful of studies have employed ECT to study organizational satisfaction with large-scale IS. The current study enriches this research stream by extending ECT to organizational-level analysis and verifying the preliminary findings of relevant works by Staples et al. (2002), Nevo and Chan (2007) and Nevo and Wade (2007). Moreover, this study goes further by operationalizing the constructs of ECT in the context of SCMS. The empirical work commences with a content analysis in which 41 vendor and academic reports are analyzed, yielding 60 expected benefits of SCMS. These expected benefits are then compared with the benefits realized at a case organization in the Fast Moving Consumer Goods industry sector that had implemented an SAP Supply Chain Management System seven years earlier. The study develops an SCMS Benefit Expectation Management (SCMS-BEM) Framework. The comparison of benefit expectations and confirmations highlights that, while certain benefits are realized early in the lifecycle, others can take almost a decade to materialize. Further analysis and discussion of how the SCMS-BEM Framework shapes the application of ECT in the SCMS context is also provided. It is recommended that, when establishing their expectations of the SCMS, clients remember that confirmation of these expectations will unfold over a long lifecycle, as shown by the different time periods in the SCMS-BEM Framework. Moreover, the SCMS-BEM Framework allows organizations to maintain high levels of satisfaction through careful management and confirmation of expectations according to the lifecycle phase. In addition, the study reveals that different stakeholder groups have different expectations of the same SCMS.
The perspective of multiple stakeholders has significant implications for the application of ECT in the SCMS context. When forming expectations of the SCMS, the collection of organizational benefits should represent the perceptions of all stakeholder groups, and the same mechanism should be employed when measuring received SCMS benefits. Moreover, for SCMS there exists interdependence of satisfaction among the various stakeholders. The satisfaction of decision-makers or authorized staff is not only driven by their own expectation confirmation level but is also influenced by the confirmation level of other stakeholders' expectations in the organization. Satisfaction from any one particular stakeholder group cannot reflect the true satisfaction of the client organization. Furthermore, it is inferred from the SCMS-BEM Framework that organizations should emphasize the viewpoints of operational and management staff when evaluating the benefits of SCMS in the short and medium term, while paying more attention to the perspectives of strategic staff when evaluating the performance of the SCMS in the long term.
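A minimal sketch of how such a multi-stakeholder expectation-confirmation assessment could be operationalized is given below. The stakeholder groups, benefit names, phase weights and scoring scale are illustrative assumptions for demonstration only; they are not constructs taken from the study.

```python
# Illustrative sketch: aggregating expectation confirmation across stakeholder
# groups, in the spirit of the SCMS-BEM discussion above. All group names,
# benefit labels, weights, and scores are hypothetical placeholders.
from statistics import mean

# Expected vs. perceived benefit scores (e.g. on a 1-7 Likert-style scale),
# keyed by stakeholder group -> benefit -> (expected, perceived).
ratings = {
    "operational": {"order_cycle_time": (6, 5), "inventory_visibility": (6, 6)},
    "management":  {"forecast_accuracy": (5, 4), "inventory_visibility": (6, 5)},
    "strategic":   {"supply_chain_agility": (6, 3)},
}

# Hypothetical phase weights reflecting the idea that short/medium-term
# evaluation emphasizes operational and management staff, while long-term
# evaluation emphasizes strategic staff.
phase_weights = {
    "short_term": {"operational": 0.5, "management": 0.4, "strategic": 0.1},
    "long_term":  {"operational": 0.2, "management": 0.3, "strategic": 0.5},
}

def group_confirmation(benefits):
    """Mean confirmation (perceived minus expected) for one stakeholder group."""
    return mean(perceived - expected for expected, perceived in benefits.values())

def organizational_confirmation(phase):
    """Weighted confirmation across all stakeholder groups for a lifecycle phase."""
    weights = phase_weights[phase]
    return sum(weights[g] * group_confirmation(b) for g, b in ratings.items())

for phase in phase_weights:
    print(phase, round(organizational_confirmation(phase), 2))
```

Negative values indicate that, for the chosen weights, expectations exceed perceived benefits in that lifecycle phase.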
Abstract:
Objective: To describe the reported impact of Pandemic (H1N1) 2009 on EDs, so as to inform future pandemic policy, planning and response management. Methods: This study comprised an issue and theme analysis of publicly accessible literature, data from jurisdictional health departments, and data obtained from two electronic surveys of ED directors and ED staff. The issues identified formed the basis of policy analysis and evaluation. Results: Pandemic (H1N1) 2009 had a significant impact on EDs, with presentations of patients with 'influenza-like illness' up to three times those of the same period in previous years. Staff reported a range of issues, including poor awareness of pandemic plans, patient and family aggression, chaotic information flow to themselves and the public, and heightened stress related to increased workloads and lower levels of staffing due to illness, family care duties and redeployment of staff to flu clinics. Staff identified considerable discomfort associated with prolonged periods wearing personal protective equipment. Staff believed that the care of non-flu patients was compromised during the pandemic as a result of overwork, distraction from core business and the difficulties of accommodating infectious patients in an environment that was not conducive to doing so. Conclusions: This paper describes the breadth of the impact of pandemics on ED operations. It identifies a need to address a range of industrial, management and procedural issues. In particular, there is a need for a single authoritative source of information, the re-engineering of EDs to accommodate infectious patients, and organizational changes to enable rapid deployment of alternative sources of care.
Abstract:
One of the fundamental econometric models in finance is the predictive regression. The standard least squares method produces biased coefficient estimates when the regressor is persistent and its innovations are correlated with those of the dependent variable. This article proposes a general and convenient method based on the jackknife technique to tackle the estimation problem. The proposed method reduces the bias for both single- and multiple-regressor models and for both short- and long-horizon regressions. The effectiveness of the proposed method is demonstrated by simulations. An empirical application to equity premium prediction using the dividend yield and the short rate highlights the differences between the results of the standard approach and those of the bias-reduced estimator. Predictive variables that are significant under ordinary least squares become insignificant after adjusting for the finite-sample bias. These discrepancies suggest that bias reduction in predictive regressions is important in practical applications.
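As a rough illustration of jackknife-style bias reduction in this setting (a generic delete-one jackknife, not the article's specific estimator, whose exact construction is not reproduced here), the sketch below corrects the OLS slope of a simple predictive regression. The simulated data, persistence parameter and innovation correlation are assumptions for demonstration only.

```python
# Sketch of delete-one jackknife bias correction for a predictive regression
# slope. This is a generic jackknife offered as an illustration of the idea;
# the article's own estimator may differ in construction.
import numpy as np

rng = np.random.default_rng(0)
n, rho = 100, 0.95                    # sample size and AR(1) persistence (assumed)

# Simulate a persistent regressor x and a return series y_t = beta * x_{t-1} + u_t,
# with the innovations of x and y negatively correlated (the classic bias setting).
beta_true = 0.05
e = rng.multivariate_normal([0, 0], [[1.0, -0.8], [-0.8, 1.0]], size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = rho * x[t - 1] + e[t, 0]
y = beta_true * x[:-1] + e[1:, 1]     # y regressed on the lagged regressor
x_lag = x[:-1]

def ols_slope(xv, yv):
    """Slope of a simple OLS regression of yv on xv (with intercept)."""
    xc = xv - xv.mean()
    return float(xc @ (yv - yv.mean()) / (xc @ xc))

beta_full = ols_slope(x_lag, y)

# Delete-one jackknife: beta_jack = n * beta_full - (n - 1) * mean(leave-one-out slopes)
m = len(y)
loo = np.array([ols_slope(np.delete(x_lag, i), np.delete(y, i)) for i in range(m)])
beta_jack = m * beta_full - (m - 1) * loo.mean()

print(f"OLS slope: {beta_full:.4f}, jackknife-corrected: {beta_jack:.4f}")
```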
Abstract:
With saturation of domestic marketplaces and increased growth opportunities overseas, many financial service providers are investing in foreign markets. However, cultural attitudes towards money can present market entry challenges to financial service providers. The industry would therefore benefit from a strategic model that helps to align financial marketing mixes with the cultural dimensions of a foreign market. The Financial Services Cultural Orientation (FSCO) Matrix has therefore been designed, with three cultural dimensions identified which influence preference for financial products: preference for cash, aversion to debt and savings orientation. Based on combinations of these dimensions and their relative strength within a culture, eight different consumer segments for financial products are identified, and marketing strategies for each consumer segment are then proposed. Three cultural clusters from the GLOBE Project (House et al., 2002) are used to highlight possible geographic markets for each of these consumer segments. In particular, this paper focuses on GLOBE's Confucian Asia, Southern Asia and Anglo cultural clusters, as these clusters represent the most well-established financial markets in the world and the fastest-growing financial markets for the future. The FSCO Matrix provides the financial services industry with an innovative and practical tool for addressing cross-cultural challenges and developing successful marketing strategies for entry into foreign markets.
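The eight segments follow from treating each of the three dimensions as relatively strong or weak within a culture; the sketch below simply enumerates those 2^3 combinations. The dimension labels come from the abstract, but the strong/weak coding and the absence of segment names are illustrative assumptions.

```python
# Enumerate the 2^3 = 8 combinations of the three FSCO cultural dimensions.
# The strong/weak coding of each dimension is an illustrative assumption;
# the abstract itself does not list the segment labels.
from itertools import product

dimensions = ["preference for cash", "aversion to debt", "savings orientation"]

for i, levels in enumerate(product(("strong", "weak"), repeat=3), start=1):
    profile = ", ".join(f"{d}: {lvl}" for d, lvl in zip(dimensions, levels))
    print(f"Segment {i}: {profile}")
```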
Abstract:
Twenty-first-century learners operate in organic, immersive environments. A pedagogy of student-centred learning is not a recipe for rooms. A contemporary learning environment is like a landscape that grows, morphs, and responds to the pressures of the context and micro-culture. There is no single adaptable solution, nor a suite of off-the-shelf answers; propositions must be customisable and infinitely variable. They must be indeterminate and changeable, based on the creation of learning places, not restrictive or constraining spaces. A sustainable solution will be un-fixed, responsive to the life cycle of the components and materials, and able to be manipulated by the users; it will create and construct its own history. Learning occurs as formal education with situational knowledge structures, but also as informal learning, active learning, blended learning, social learning, incidental learning, and unintended learning. These are not spatial concepts but socio-cultural patterns of discovery. Individual learning requirements must run free and need to be accommodated as the learner sees fit. The spatial solution must accommodate and enable a full array of learning situations. It is a system, not an object. There are three major components: 1. The determinate landscape: an in-situ concrete 'plate' that is permanent. It predates the other components of the system and remains as a remnant/imprint/fossil after the other components of the system have been relocated. It is a functional learning landscape in its own right, enabling a variety of experiences and activities. 2. The indeterminate landscape: a kit of pre-fabricated 2-D panels assembled in a unique manner at each site to suit the client and context, manufactured to the principles of design-for-disassembly. A symbiotic, barnacle-like system that attaches itself to the existing infrastructure through the determinate landscape, which acts as a fast-growth rhizome. A carapace of protective panels, infinitely variable to create enclosed, semi-enclosed, and open learning places. 3. The stations: pre-fabricated packages of highly serviced space connected through the determinate landscape. There are four main types of stations: wet-room learning centres, dry-room learning centres, ablutions, and low-impact building services. Entirely customised at the factory and delivered to site, the stations can be retro-fitted to suit a new context during relocation. Principles of design for disassembly: material principles • use recycled and recyclable materials • minimise the number of types of materials • no toxic materials • use lightweight materials • avoid secondary finishes • provide identification of material types; component principles • minimise/standardise the number of types of components • use mechanical not chemical connections • design for use of common tools and equipment • provide easy access to all components • make component sizes suit the means of handling • provide built-in means of handling • design to realistic tolerances • use a minimum number of connectors and a minimum number of types; system principles • design for durability and repeated use • use prefabrication and mass production • provide spare components on site • sustain all assembly and material information
Abstract:
Good daylighting design in buildings not only provides a comfortable luminous environment, but also delivers energy savings and comfortable, healthy environments for building occupants. Yet there is still no consensus on how to assess what constitutes good daylighting design. Among current building performance guidelines, daylight factors (DF) or minimum illuminance values are the standard; however, previous research has shown the shortcomings of these metrics. New computer software for daylighting analysis includes more advanced, climate-based daylight metrics (CBDM). Yet these tools (both the new metrics and the simulation tools) are not currently well understood by architects and are not used within architectural firms in Australia. A survey of architectural firms in Brisbane identified the tools most relevant to industry. The purpose of this paper is to assess and compare these computer simulation tools and the new tools available to architects and designers for daylighting. The tools are assessed in terms of their ease of use (e.g. previous knowledge required, complexity of geometry input), efficiency (e.g. speed, render capabilities) and outcomes (e.g. presentation of results). The study shows that the tools most accessible to architects are those that import a wide variety of file formats or can be integrated into current 3D modelling software or packages. These programs need to be able to calculate both point-in-time simulations and annual analyses. There is a current need for an open-source program able to read raw data (in the form of spreadsheets) and display it graphically within a 3D medium. Development of plug-in-based software is attempting to meet this need through third-party analysis, although some of these packages are heavily reliant on their host program. Such programs, however, allow dynamic daylighting simulation, making it easier to calculate accurate daylighting regardless of which modelling platform the designer uses, while producing more tangible analysis without the need to process raw data.
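For reference, a minimal sketch of the two kinds of metric mentioned above: a static daylight factor and a simple climate-based measure (daylight autonomy). The illuminance values, occupied-hour series and the 300 lx threshold are illustrative assumptions, not results from the survey.

```python
# Minimal sketch of a static metric (daylight factor) versus a simple
# climate-based metric (daylight autonomy). All input values are illustrative.
import numpy as np

def daylight_factor(indoor_lux, outdoor_lux):
    """Daylight factor (%): indoor illuminance as a share of the simultaneous
    unobstructed outdoor illuminance under an overcast sky."""
    return 100.0 * indoor_lux / outdoor_lux

def daylight_autonomy(hourly_indoor_lux, threshold_lux=300.0):
    """Daylight autonomy (%): share of occupied hours in which daylight alone
    meets the illuminance threshold (threshold value is an assumption)."""
    hourly = np.asarray(hourly_indoor_lux, dtype=float)
    return 100.0 * np.mean(hourly >= threshold_lux)

print(daylight_factor(indoor_lux=200.0, outdoor_lux=10000.0))        # 2.0 %
print(daylight_autonomy([150, 320, 500, 280, 410, 600, 90, 350]))    # 62.5 %
```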
Abstract:
Across central Australia and south-west Queensland, a large (~800,000 km2) subsurface temperature anomaly occurs (Figure 1). Temperatures are interpreted to be greater than 235°C at 5 km depth, ca. 85°C higher than predicted by the average geothermal gradient for the upper continental crust (Chopra & Holgate, 2005; Holgate & Gerner, 2011). This anomaly has driven the development of Engineered Geothermal Systems (EGS) at Innamincka, where high temperatures have been related to the radiogenic heat production of High Heat Producing Granites (HHPG) at depth, below thermally insulative sedimentary cover (Chopra & Holgate, 2005; Draper & D'Arcy, 2006; Meixner & Holgate, 2009). To evaluate the role of granitic rocks at depth in generating the broader temperature anomaly in SW Queensland, we sampled 25 granitic rocks from basement intervals of petroleum drill cores below thermally insulative cover along two transects (WNW–ESE and NNE–SSW; Figure 1) and performed a multidisciplinary study involving petrography, whole-rock chemistry, zircon dating and thermal conductivity measurements.
Abstract:
As the world's population grows, so does the demand for agricultural products. However, natural nitrogen (N) fixation and phosphorus (P) availability cannot sustain the rising agricultural production; thus, the application of N and P fertilisers as additional nutrient sources is common. It is these anthropogenic activities that can contribute high amounts of organic and inorganic nutrients to both surface and groundwaters, resulting in degradation of water quality and a possible reduction of aquatic life. In addition, runoff and sewage from urban and residential areas can contain high amounts of inorganic and organic nutrients, which may also affect water quality. For example, blooms of the cyanobacterium Lyngbya majuscula along the coastline of southeast Queensland are an indicator of at least short-term decreases in water quality. Although Australian catchments, including those with intensive forms of land use, show in general a low export of nutrients compared to North American and European catchments, certain land use practices may still have a detrimental effect on the coastal environment. Numerous studies have reported on nutrient cycling and associated processes at a catchment scale in the Northern Hemisphere. Comparable studies in Australia, in particular in subtropical regions, are however limited, and there is a paucity of data, in particular for inorganic and organic forms of nitrogen and phosphorus; these nutrients are important limiting factors for algal blooms in surface waters. Therefore, monitoring N and P and understanding the sources and pathways of these nutrients within a catchment is important in coastal zone management. Although Australia is the driest inhabited continent, in subtropical regions such as southeast Queensland rainfall patterns have a significant effect on runoff and thus on the nutrient cycle at a catchment scale. Increasingly, these rainfall patterns are becoming variable. The monitoring of these climatic conditions and of the hydrological response of agricultural catchments is therefore also important in reducing anthropogenic effects on surface and groundwater quality. This study consists of an integrated hydrological-hydrochemical approach that assesses N and P in an environment with multiple land uses. The main aim is to determine the nutrient cycle within a representative coastal catchment in southeast Queensland, the Elimbah Creek catchment. In particular, the investigation confirms the influence of forestry and agriculture on N and P forms, sources, distribution and fate in the surface and groundwaters of this subtropical setting. In addition, the study determines whether N and P are subject to transport into the adjacent estuary and thus into the marine environment; also considered is the effect of local topography, soils and geology on N and P sources and distribution. The thesis is structured around four individually reported components. The first paper determines the controls of catchment settings and processes on stream water, riverbank sediment, and shallow groundwater N and P concentrations, in particular during the extended dry conditions encountered during the study. Temporal and spatial factors such as seasonal changes, soil character, land use and catchment morphology are considered, as well as their effect on the distribution of N and P in surface waters and associated groundwater.
A total of 30 surface water and 13 shallow groundwater sampling sites were established throughout the catchment to represent the dominant soil types and the land use upstream of each sampling location. Sampling comprised five rounds conducted over one year between October 2008 and November 2009. Surface water and groundwater samples were analysed for all major dissolved inorganic forms of N and for total N. Phosphorus was determined as dissolved reactive P (predominantly orthophosphate) and total P. In addition, extracts of stream bank sediments and soil grab samples were analysed for these N and P species. Findings show that major storm events, in particular after long periods of drought conditions, are the driving force of N cycling. This is expressed by higher inorganic N concentrations in the agricultural subcatchment compared to the forested subcatchment. Nitrate N is the dominant inorganic form of N in both the surface and groundwaters, and values are significantly higher in the groundwaters. Concentrations in the surface water range from 0.03 to 0.34 mg N L-1; organic N concentrations are considerably higher (average range: 0.33 to 0.85 mg N L-1), in particular in the forested subcatchment. Average NO3-N in the groundwater ranges from 0.39 to 2.08 mg N L-1, and organic N averages between 0.07 and 0.3 mg N L-1. The stream bank sediment extracts are dominated by organic N (range: 0.53 to 0.65 mg N L-1), and the dominant inorganic form of N is NH4-N, with values ranging between 0.38 and 0.41 mg N L-1. Topography and soils, however, were not found to have a significant effect on N and P concentrations in waters. Detectable phosphorus in the surface and groundwaters of the catchment is limited to several locations, typically in the proximity of areas with intensive animal use; in soils and sediments, P is negligible. In the second paper, the stable isotopes of N (14N/15N) and water (16O/18O and 2H/1H) in surface and groundwaters are used to identify sources of dissolved inorganic and organic N in these waters, and to determine their pathways within the catchment; specific emphasis is placed on the relative roles of forestry and agriculture. Forestry is predominantly concentrated in the northern subcatchment (Beerburrum Creek), while agriculture is mainly found in the southern subcatchment (Six Mile Creek). Results show that agriculture (horticulture, crops, grazing) is the main source of inorganic N in the surface waters of the agricultural subcatchment, and its isotopic signature shows a close link to evaporation processes that may occur during water storage in farm dams used for irrigation. Groundwaters are subject to denitrification processes that may result in reduced dissolved inorganic N concentrations. Soil organic matter delivers most of the inorganic N to the surface water in the forested subcatchment. Here, precipitation and subsequent runoff are the main source of the surface waters. Groundwater in this area is affected by agricultural processes. The findings also show that the catchment can attenuate the effects of anthropogenic land use on surface water quality. Riparian strips of natural remnant vegetation, commonly 50 to 100 m in width, act as buffer zones along the drainage lines in the catchment and remove inorganic N from the soil water before it enters the creek.
These riparian buffer zones are common in most agricultural catchments of southeast Queensland and appear to reduce the impact of agriculture on stream water quality and subsequently on the estuary and marine environments. This reduction is expressed by a significant decrease in DIN concentrations from 1.6 mg N L-1 to 0.09 mg N L-1, and a decrease in the δ15N signatures, from upstream surface water locations downstream to the outlet of the agricultural subcatchment. Further testing is, however, necessary to confirm these processes. Most importantly, the amount of N transported to the adjacent estuary is shown to be negligible. The third and fourth components of the thesis use a hydrological catchment modelling approach to determine the water balance of the Elimbah Creek catchment. The model is then used to simulate the effects of land use on the water balance and nutrient loads of the study area. The tool used is the internationally widely applied Soil and Water Assessment Tool (SWAT). Knowledge of the water cycle of a catchment is imperative in nutrient studies, as processes such as rainfall, surface runoff, soil infiltration and routing of water through the drainage system are the driving forces of the catchment nutrient cycle. Long-term information on discharge volumes of the creeks and rivers does not, however, exist for a number of agricultural catchments in southeast Queensland, and such information is necessary to calibrate and validate numerical models. Therefore, a two-step modelling approach was used in which parameter values calibrated and validated for a nearby gauged reference catchment served as starting values for the ungauged Elimbah Creek catchment. Transposing the monthly calibrated and validated parameter values from the reference catchment to the ungauged catchment significantly improved model performance, showing that the hydrological model of the catchment of interest is a strong predictor of the water balance. The model efficiency coefficient (EF) indicates that the simulated discharge explains 94% of the observed flow, compared with only 54% before the validated values from the reference catchment were used. In addition, the hydrological model confirmed that surface runoff contributes the majority of flow to the surface water in the catchment (65%), with only a small proportion of the water in the creek contributed by baseflow (35%). This finding supports the results of the stable isotopes 16O/18O and 2H/1H, which show that the main source of water in the creeks is either local precipitation or irrigation water delivered by surface runoff; a contribution from the groundwater (baseflow) to the creeks could not be identified using 16O/18O and 2H/1H. In addition, the SWAT model calculated that around 68% of the rainfall in the catchment is lost through evapotranspiration, reflecting the prevailing long-term drought conditions observed prior to and during the study. Stream discharge from the forested subcatchment was an order of magnitude lower than discharge from the agricultural Six Mile Creek subcatchment. A simulated change in land use from forestry to agriculture did not significantly change the catchment water balance, but nutrient loads increased considerably. Conversely, a simulated change from agriculture to forestry resulted in a significant decrease in nitrogen loads.
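Assuming the model efficiency coefficient EF referred to above is the Nash-Sutcliffe efficiency commonly reported for SWAT calibrations, a minimal sketch of its calculation from observed and simulated discharge is given below; the discharge series are made-up placeholders, not data from the study.

```python
# Nash-Sutcliffe model efficiency, the coefficient commonly reported as EF for
# SWAT calibrations (assumed to be the EF used above). Discharge values are
# illustrative placeholders, not data from the study.
import numpy as np

def nash_sutcliffe(observed, simulated):
    """EF = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical monthly discharge (m^3/s) for a gauged reference catchment.
obs = [0.8, 1.2, 3.5, 0.6, 0.4, 0.3, 0.2, 0.2, 0.5, 2.1, 1.0, 0.7]
sim = [0.7, 1.1, 3.2, 0.7, 0.5, 0.3, 0.2, 0.3, 0.6, 1.8, 0.9, 0.8]

print(f"EF = {nash_sutcliffe(obs, sim):.2f}")   # values near 1 indicate a good fit
```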
The findings of the thesis and the approach used are shown to be of value to catchment water quality monitoring on a wider scale, in particular regarding the implications of mixed land use for nutrient forms, distributions and concentrations. The study confirms that in the tropics and subtropics the water balance is affected by extended dry periods and seasonal rainfall with intensive storm events. In particular, the comprehensive data set of inorganic and organic N and P forms in the surface and groundwaters of this subtropical setting, acquired during the one-year sampling program, may be used in similar catchment hydrological studies where such detailed information is missing. The study also concludes that riparian buffer zones along the catchment drainage system attenuate the transport of nitrogen from agricultural sources in the surface water. Concentrations of N decreased from upstream to downstream locations and were negligible at the outlet of the catchment.
Abstract:
Odours emitted by flowers are complex blends of volatile compounds. These odours are learnt by flower-visiting insect species, improving their recognition of rewarding flowers and thus their foraging efficiency. We investigated the flexibility of floral odour learning by testing whether adult moths recognize single compounds common to flowers on which they forage. In dual choice preference tests on Helicoverpa armigera moths, free-flying moths were allowed to forage on one of three flower species: Argyranthemum frutescens (federation daisy), Cajanus cajan (pigeonpea) or Nicotiana tabacum (tobacco). Results showed that (i) a benzenoid (phenylacetaldehyde) and a monoterpene (linalool) were subsequently recognized after visits to flowers that emitted these volatile constituents, (ii) in a preference test, other monoterpenes in the flowers' odour did not affect the moths' ability to recognize the monoterpene linalool, and (iii) relative preferences for two volatiles changed after foraging experience on a single flower species that emitted both volatiles. The importance of using free-flying insects and real flowers to understand the mechanisms involved in floral odour learning in nature is discussed in the context of our findings.
Abstract:
The purpose of this investigation is to present an overview of roadside drug driving enforcement and detections in Queensland, Australia, since the introduction of oral fluid screening. Drug driving is a problematic issue for road safety, and investigations of the prevalence and impact of drug driving suggest that, in particular, the use of illicit drugs may increase a driver's involvement in a road crash compared to a drug-free driver. In response to the potentially increased crash involvement of drug-impaired drivers, Australian police agencies have adopted oral fluid analysis to detect the presence of illicit drugs in drivers. This paper describes the results of roadside drug testing of over 80,000 drivers in Queensland, Australia, from December 2007 to June 2012. It provides unique data on the prevalence of methamphetamine, cannabis and ecstasy in the screened population for the period. When prevalence rates are examined over time, drug driving detection rates have almost doubled, from around 2.0% at the introduction of roadside testing operations to just under 4.0% in the latter years. The most common drug type detected was methamphetamine (40.8%), followed by cannabis (29.8%) and the methamphetamine/cannabis combination (22.5%). By comparison, the rate of ecstasy detection was very low (1.7%). The data revealed a number of regional, age and gender patterns and variations in drug driving across the state. Younger drivers were more likely to test positive for cannabis, whilst older drivers were more likely to test positive for methamphetamine. The overall profile of drivers who tested positive for at least one of the target illicit drugs is that they are likely to be male, aged 30-39 years, driving a car on a Friday, Saturday or Sunday between 6:00 pm and 6:00 am, and to test positive for methamphetamine.
Abstract:
Numerical simulations of thermomagnetic convection of paramagnetic fluids placed in a micro-gravity condition (g ≈ 0) and under a uniform vertical gradient magnetic field in an open-ended square enclosure, with a ramp heating temperature condition applied on a vertical wall, are investigated in this study. In the presence of a strong gradient magnetic field, thermal convection of the paramagnetic fluid can take place even in a zero-gravity environment as a direct consequence of temperature differences occurring within the fluid. A thermal boundary layer develops adjacent to the hot wall as soon as the ramp temperature condition is applied. Two scenarios can be observed, depending on the ramp heating time: the thermal boundary layer either reaches steady state before the ramp time is finished, or it does not. If the ramp time is longer than the quasi-steady time, the thermal boundary layer enters a quasi-steady mode after the quasi-steady time, with convection balancing conduction; further increase of the heat input simply accelerates the flow to maintain the proper thermal balance, and the boundary layer becomes completely steady once the ramp time has finished. Effects of the magnetic Rayleigh number, Prandtl number and paramagnetic fluid parameter on the flow pattern and heat transfer are presented.
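A minimal sketch of the regime distinction described above: whether the boundary layer reaches a quasi-steady state before the ramp finishes depends only on how the ramp time compares with the quasi-steady development time. Both time scales are treated as given inputs here; no scaling formula from the study is assumed.

```python
# Classify the ramp-heating scenario described above by comparing the ramp time
# with the quasi-steady development time of the thermal boundary layer. Both
# times are user-supplied inputs; no particular scaling law is assumed.
def ramp_heating_regime(ramp_time: float, quasi_steady_time: float) -> str:
    if ramp_time > quasi_steady_time:
        # The boundary layer reaches a quasi-steady balance of convection and
        # conduction while the wall temperature is still ramping up; further
        # heat input accelerates the flow, and full steady state follows once
        # the ramp ends.
        return "quasi-steady during ramp, fully steady after ramp ends"
    # Otherwise the ramp finishes before the boundary layer has fully developed,
    # and the layer continues growing towards steady state afterwards.
    return "ramp ends before boundary layer reaches steady state"

print(ramp_heating_regime(ramp_time=200.0, quasi_steady_time=50.0))
print(ramp_heating_regime(ramp_time=20.0, quasi_steady_time=50.0))
```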
Abstract:
Australia's economic growth and national identity have been widely celebrated as being founded on the nation's natural resources. With the golden era of pastoralism fading into the distance, a renewed love affair with primary industries has been much lauded, particularly by purveyors of neoliberal ideology. The considerable wealth generated by resource extraction has, despite its environmental and social record, proved seductive to the university sector. The mining industry is one of a number of industries and sectors (alongside the pharmaceutical, chemical and biotechnological industries) that are increasingly courting Australian universities. These new public-private alliances are often viewed as the much-needed cash cow to bridge the public funding shortfall in the tertiary sector. However, this trend also raises profound questions about the capacity of public-good institutions, as universities were once assumed to be, to maintain institutional independence and academic freedoms.
Abstract:
Flows of cultural heritage in textual practices are vital to sustaining Indigenous communities. Indigenous heritage, whether passed on by oral tradition or ubiquitous social media, can be seen as a "conversation between the past and the future" (Fairclough, 2012, xv). Indigenous heritage involves appropriating memories within a cultural flow to pass on a spiritual legacy. This presentation reports ethnographic research of social media practices in a small independent Aboriginal school in Southeast Queensland, Australia, presided over by the Yugambeh elders and an Aboriginal principal. The purpose was to rupture existing notions of white literacies in schools, and to deterritorialize the uses of digital media by dominant cultures in the public sphere. Examples of learning experiences included the following: i. Integrating Indigenous language and knowledge into media text production; ii. Using conversations with Indigenous elders and material artifacts as an entry point for storytelling; iii. Dadirri – spiritual listening in the yarning circle to develop storytelling (Ungunmerr-Baumann, 2002); and iv. Writing and publicly sharing oral histories through digital scrapbooking shared via social media. The program aligned with the Australian National Curriculum English (ACARA, 2012), which mandates the teaching of multimodal text creation. Data sources included a class set of digital scrapbooks collaboratively created in a multi-age primary classroom. The digital scrapbooks combined digitally encoded words, images of material artifacts, and digital music files. A key feature of the writing and digital design task was to retell, digitally display and archive a cultural narrative of significance to the Indigenous Australian community and its memories and material traces of the past for the future. Data analysis of the students' digital stories involved the application of key themes of negotiated, material, and digitally mediated forms of heritage practice. It drew on Australian Indigenous research by Keddie et al. (2013) to guard against the homogenizing of culture that can arise from a focus on a static view of culture. The interpretation of findings located Indigenous appropriation of social media within broader racialized politics, enabling Indigenous literacy to be understood as dynamic, negotiated, and transgenerational flows of practice. The findings demonstrate that Indigenous children's use of media production reflects "shifting and negotiated identities" in response to changing media environments that can function to sustain Indigenous cultural heritages (Appadurai, 1996, xv). They show how the children's experiences of culture are layered over time, as successive generations inherit, interweave, and hear others' cultural stories or maps. They also show how the children's production of narratives through multimedia can provide a platform for the flow and reconstruction of performative collective memories and "lived traces of a common past" (Giaccardi, 2012). This disrupts notions of cultural reductionism and racial incommensurability that fix and homogenize Indigenous practices within and against a dominant White norm. Recommendations are provided for an approach to appropriating social media in schools that explicitly attends to the dynamic nature of Indigenous practices, negotiated through intercultural constructions and flows, and that opens space for a critical anti-racist approach to multimodal text production.
Abstract:
A pilot experiment was performed using the WOMBAT powder diffraction instrument at ANSTO, in which the first neutron diffraction peak (Q0) was measured for D2O flowing in a 2 mm internal diameter aluminium tube. Measurements of Q0 were made at -9, 4.3, 6.9, 12, 18.2 and 21.5 °C. The D2O was circulated using a siphon, with water in the lower reservoir returned to the upper reservoir by a small pump. This enabled stable flow to be maintained for several hours: for example, if the pump flow increased slightly, the upper reservoir level rose, increasing the siphon flow until it matched the return flow. A neutron wavelength of 2.4 Å was used and data were integrated over 60 minutes for each temperature. A jet of nitrogen from a liquid N2 Dewar was directed over the aluminium tube to vary the water temperature. After collection of the data, the d-spacing of the aluminium peaks was used to calculate the temperature of the aluminium within the neutron beam, which was therefore considered an accurate measure of the water temperature within the beam. SigmaPlot version 12.3 was used to fit a Weibull five-parameter peak function to the first neutron diffraction peak. The values of Q0 obtained in this experiment increased with temperature, consistent with data in the literature [1], but were consistently higher than published values for bulk D2O. For example, at 21.5 °C we obtained a value of 2.008 Å-1 for Q0, compared to a literature value of 1.988 Å-1 for bulk D2O at 20 °C, a difference of 1%. Further experiments are required to determine whether this difference is real or artifactual.
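As an illustration of the peak-position extraction step (the study fitted SigmaPlot's five-parameter Weibull peak function; the sketch below substitutes a Gaussian on a linear background purely for demonstration), the following fits a simulated diffraction peak and reports its centre Q0. All numerical values are invented placeholders.

```python
# Illustrative extraction of the first-peak position Q0 from diffraction data.
# The original analysis fitted a five-parameter Weibull peak in SigmaPlot; here
# a Gaussian on a linear background stands in for that peak shape, and all
# data values are simulated placeholders.
import numpy as np
from scipy.optimize import curve_fit

def peak_model(q, amp, q0, width, slope, offset):
    """Gaussian peak centred at q0 on a linear background."""
    return amp * np.exp(-0.5 * ((q - q0) / width) ** 2) + slope * q + offset

# Simulated intensity data around the first D2O diffraction peak (~2.0 1/Angstrom).
rng = np.random.default_rng(1)
q = np.linspace(1.6, 2.4, 120)
intensity = peak_model(q, amp=100.0, q0=2.008, width=0.12, slope=-5.0, offset=40.0)
intensity += rng.normal(scale=2.0, size=q.size)

# Fit and report the peak centre with its 1-sigma uncertainty.
p0 = [80.0, 2.0, 0.1, 0.0, 30.0]                 # rough initial guesses
popt, pcov = curve_fit(peak_model, q, intensity, p0=p0)
q0_fit, q0_err = popt[1], np.sqrt(pcov[1, 1])
print(f"Q0 = {q0_fit:.3f} +/- {q0_err:.3f} 1/Angstrom")
```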