929 results for critical patch size
Abstract:
This handbook chapter explores the relationship between critical theory and seminal studies of literacy which investigate inequities in education. It identifies new research questions that explore the connections between literacy and power but go beyond promises of emancipation.
Abstract:
This paper describes a program called Patches that was implemented to assist a group of Australian and Malaysian pre-service teachers to enhance their intercultural competence through their involvement in a series of reciprocal learning activities. Each learning experience was considered a “patch” that eventually created a “quilt of intercultural learning.” The purpose of this study was to enhance the intercultural competence of domestic and international students through organised intercultural activities, a series of reflective writing sessions, and mutual engagement on a common project. The effectiveness of the Patches program was analysed in accordance with Deardorff’s elements of intercultural competence. The qualitative findings indicate that both cohorts of pre-service teachers showed elements of intercultural competence through participation in the program, with both groups reporting a deeper appreciation and understanding of how to communicate more effectively in intercultural contexts.
Abstract:
Vietnam has a unique culture which is revealed in the way that people have built and designed their traditional housing. Vietnamese dwellings reflect occupants’ activities in their everyday lives while adapting to tropical climatic conditions shaped by seasonal monsoons. These characteristics of Vietnamese dwellings are said to have remained largely unchanged until the economic reform of 1986, when Vietnam experienced accelerated development based on a market-oriented economy. New housing types, including modern shop-houses, detached houses, and apartments, have been designed in many places, particularly to satisfy dwellers’ new lifestyles in Vietnamese cities. Contemporary housing, which has mostly been designed by architects, reflects rules of spatial organisation that allow occupants’ social activities to be carried out. However, contemporary housing spaces seem unsustainable in relation to socio-cultural values because they have been influenced by globalism, which advocates the use of homogeneous spatial patterns, modern technologies, materials, and construction methods. This study investigates the rules of spaces in Vietnamese houses built before and after the reform to define the socio-cultural implications for Vietnamese housing design. Firstly, it describes occupants’ views of their current dwellings in terms of indoor comfort conditions and social activities in spaces. Then, it examines the use of spaces in pre-reform Vietnamese housing through occupants’ activities and material applications. Finally, it discusses the organisation of spaces in both pre- and post-reform housing to understand how Vietnamese housing has been designed for occupants to live, act, work, and conduct traditional activities. Understanding spatial organisation is a way to identify characteristics of the lived spaces of the occupants created from the conceived space, which is designed by designers. The characteristics of the housing spaces will inform designers how to design future Vietnamese housing in response to cultural contexts. The study applied an abductive approach to the investigation of housing spaces. It used a conceptual framework drawing on Henri Lefebvre’s (1991) theory to understand space as the main factor constituting the language of design, and the principles of semiotics to examine spatial structure in housing as a language used in everyday life. The study involved a door-knocking survey of 350 households in four regional cities of Vietnam to interpret occupancy conditions and levels of occupants’ comfort. A statistical analysis was applied to interpret the survey data. The study also involved the selection and collection of data on fourteen cases of housing in the three main climatic regions of the country for analysing spatial organisation and housing characteristics. The study found that there has been a shift in the relationship of spaces from pre- to post-reform Vietnamese housing. It also identified that the space for guest welcoming and family activity has been the central space of Vietnamese housing. Based on the relationships between the central space and the others, theoretical models were proposed for three types of contemporary Vietnamese housing. The models will be significant in adapting housing design to Vietnamese conditions and achieving socio-environmental characteristics, because they were developed from the occupants’ requirements for their social activities.
Another contribution of the study is the use of methodological concepts to understand the language of living spaces. Further work will be needed to test future Vietnamese housing designs based on applications of the models.
Abstract:
Wheel-rail interaction is one of the most important research topics in railway engineering, encompassing track vibration, track impact response, and the safety of the track. Track structure failures caused by impact forces can lead to significant economic loss for track owners through damage to rails and to the sleepers beneath. Wheel-rail impact forces occur because of imperfections on the wheels or rails, such as wheel flats, irregular wheel profiles, rail corrugation, and differences in the height of rails connected at a welded joint. Vehicle speed and static wheel load are important factors in track design because they are related to the impact forces arising under wheel-rail defects. In this paper, a three-dimensional finite element model for the study of wheel flat impact is developed using the FEA software package ANSYS. The effects of a wheel flat on the impact force on sleepers at various speeds and static wheel loads, under a critical wheel flat size, are investigated. It was found that both the wheel-rail impact force and the impact force on the sleeper induced by a wheel flat vary nonlinearly with increasing vehicle speed, and that both impact forces increase nonlinearly and monotonically with increasing static wheel load. The relationships between the wheel-flat-induced impact forces and vehicle speed or static load are important to track engineers seeking to improve design and maintenance methods in the railway industry.
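As a rough illustration of the geometry behind such a parametric study (not the paper's ANSYS model), the depth of a wheel flat of length L on a wheel of radius R follows from chord geometry as d ≈ L²/(8R). A minimal Python sketch of a speed/static-load sweep using a deliberately crude impulse-based impact estimate might look like this; all parameter values and the force model itself are assumptions for illustration only:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def flat_depth(flat_length, wheel_radius):
    """Depth of a wheel flat from chord geometry: d ~ L^2 / (8R)."""
    return flat_length ** 2 / (8.0 * wheel_radius)

def impact_force(speed, static_load, flat_length=0.05, wheel_radius=0.46,
                 unsprung_mass=600.0, contact_time=1e-3):
    """Crude impulse-based estimate of wheel-rail impact force (N).

    The wheel centre drops toward the flat while traversing it; the
    resulting vertical momentum is absorbed over a short contact time.
    This toy model only illustrates nonlinear variation with speed and
    monotonic growth with static load -- it is NOT the FEA model in the
    abstract, and the parameter values are hypothetical.
    """
    d = flat_depth(flat_length, wheel_radius)
    traverse_time = flat_length / speed  # time spent on the flat
    # Vertical speed gained in free fall, capped by the full drop depth:
    v_drop = min(G * traverse_time, math.sqrt(2.0 * G * d))
    return static_load + unsprung_mass * v_drop / contact_time

# Parameter sweep mirroring the study design: vary speed and static load.
for v_kmh in (40, 80, 120, 160):
    for load_kn in (50, 100, 150):
        f = impact_force(speed=v_kmh / 3.6, static_load=load_kn * 1e3)
        print(f"{v_kmh:4d} km/h, {load_kn:4d} kN static -> {f/1e3:8.1f} kN impact")
```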
Abstract:
With increasing demands on our time, everyday behaviors such as food purchasing, preparation, and consumption have become habitual and unconscious. Indeed, modern food values are focused on convenience and effortlessness, overshadowing other values such as environmental sustainability, health, and pleasure. The rethinking of how we approach everyday food behaviors appears to be a particularly timely concern. In this special section, we explore work carried out and discussed during the recent workshop “Food for Thought: Designing for Critical Reflection on Food Practices,” at the 2012 Designing Interactive Systems Conference in Newcastle upon Tyne, U.K.
Abstract:
Evaluating the validity of formative variables has presented ongoing challenges for researchers. In this paper we use global criterion measures to compare and critically evaluate two alternative formative measures of System Quality. One model is based on the ISO-9126 software quality standard, and the other is based on a leading information systems research model. We find that, despite both models having a strong provenance, many of the items appear to be non-significant in our study. We examine the implications of this by evaluating the quality of the criterion variables we used and the performance of PLS when evaluating formative models with a large number of items. We find that our respondents had difficulty distinguishing between global criterion variables measuring different aspects of overall System Quality. Also, because formative indicators “compete with one another” in PLS, it may be difficult to develop a set of measures which are all significant for a complex formative construct with a broad scope and a large number of items. Overall, we cautiously suggest that both sets of measures are valid and largely equivalent, although questions remain about the measures, the use of criterion variables, and the use of PLS for this type of model evaluation.
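For readers unfamiliar with formative measurement, the reason indicators can “compete with one another” is visible in the standard specification (a textbook formulation, not an equation from this paper), in which the construct is a weighted composite of its indicators:

```latex
% Formative measurement model: construct \eta as a weighted composite
% of its indicators x_1, ..., x_n plus a disturbance term \zeta.
\eta = \sum_{i=1}^{n} w_i x_i + \zeta
```

Because the weights w_i are estimated like multiple-regression coefficients, correlated indicators share explained variance, so individual weights can appear non-significant even when the indicator set as a whole is valid.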
Abstract:
There is growing and converging evidence that cannabis may be a major risk factor in people with psychotic disorders and prodromal psychotic symptoms. The lack of available pharmacological treatments for cannabis use indicates that psychological interventions should be a high priority, especially among people with psychotic disorders. However, there have been few randomised controlled trials (RCTs) of psychological interventions among this group. In the present study we critically review RCTs of psychological and pharmacological interventions among people with psychotic disorders, giving particular attention to those studies which report cannabis use outcomes. We then review data regarding treatment preferences among this group. RCTs of interventions within “real world” mental health systems among adults with severe mental disorders suggest that cannabis use is amenable to treatment in real-world settings among people with psychotic disorders. RCTs of manual-guided interventions among cannabis users indicate that while brief interventions are associated with reductions in cannabis use, longer interventions may be more effective. Additionally, the RCTs reviewed suggest that treatment with antipsychotic medication is not associated with a worsening of cannabis cravings or use, and may be beneficial. The development of cannabinoid agonist medication may be an effective strategy for cannabis dependence and suitable for people with psychotic disorders. The development of cannabis use interventions for people with psychotic disorders should also consider patients’ treatment preferences. Initial results indicate that face-to-face interventions focused on cannabis use may be preferred. Further research investigating the treatment preferences of people with psychotic disorders who use cannabis is needed.
Abstract:
The feasibility of using an in-hardware implementation of a genetic algorithm (GA) to solve the computationally expensive travelling salesman problem (TSP) is explored, especially in regard to hardware resource requirements for problem and population sizes. We investigate via numerical experiments whether a small population size might prove sufficient to obtain reasonable-quality solutions for the TSP, thereby permitting a relatively resource-efficient hardware implementation on field programmable gate arrays (FPGAs). Software experiments on two TSP benchmarks involving 48 and 532 cities were used to explore the extent to which population size can be reduced without compromising solution quality, and the results show that a GA allowed to run for a large number of generations with a smaller population size can yield solutions of comparable quality to those obtained using a larger population. This finding is then used to investigate feasible problem sizes on a targeted Virtex-7 vx485T-2 FPGA platform via exploration of hardware resource requirements for memory and data flow operations.
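As a minimal software-side sketch of the kind of experiment described above (our own toy implementation, not the authors' FPGA design; the population size, operator choices, and random distance matrix are assumptions), a permutation-encoded GA for the TSP with a deliberately small population might look like this in Python:

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour over the distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def ordered_crossover(p1, p2):
    """OX crossover: copy a slice from p1, fill remaining cities in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in child]
    for i in range(len(p1)):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def mutate(tour, rate=0.02):
    """Swap mutation: exchange two cities with a small per-position probability."""
    for i in range(len(tour)):
        if random.random() < rate:
            j = random.randrange(len(tour))
            tour[i], tour[j] = tour[j], tour[i]

def ga_tsp(dist, pop_size=16, generations=2000):
    """Small-population GA: tournament selection, OX crossover, elitism."""
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    best = min(pop, key=lambda t: tour_length(t, dist))
    for _ in range(generations):
        new_pop = [best[:]]  # elitism: keep the best tour found so far
        while len(new_pop) < pop_size:
            p1, p2 = (min(random.sample(pop, 3), key=lambda t: tour_length(t, dist))
                      for _ in range(2))
            child = ordered_crossover(p1, p2)
            mutate(child)
            new_pop.append(child)
        pop = new_pop
        cand = min(pop, key=lambda t: tour_length(t, dist))
        if tour_length(cand, dist) < tour_length(best, dist):
            best = cand
    return best

# Toy 48-city instance with random symmetric distances (a stand-in for the
# 48- and 532-city benchmarks mentioned in the abstract).
random.seed(0)
n = 48
dist = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        dist[i][j] = dist[j][i] = random.randint(1, 100)
print(tour_length(ga_tsp(dist), dist))
```

The small, fixed population size is what makes the in-hardware mapping attractive: memory and data-flow resources on the FPGA scale with population and tour length, so fewer individuals run for more generations.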
Abstract:
Background: Recent clinical studies have demonstrated an emerging subgroup of head and neck cancers that are virally mediated. This disease appears to be a distinct clinical entity: patients present younger and with more advanced nodal disease, have lower tobacco and alcohol exposure, and have highly radiosensitive tumours. This means they are living longer, often with the debilitating functional side effects of treatment. The primary objective of this study was to determine how virally mediated nasopharyngeal and oropharyngeal cancers respond to radiation therapy treatment. The aim was to determine risk categories and corresponding adaptive treatment management strategies to proactively manage these patients. Method/Results: 121 patients with virally mediated, node-positive nasopharyngeal or oropharyngeal cancer who received radiotherapy treatment with curative intent between 2005 and 2010 were studied. Relevant patient demographics including age, gender, diagnosis, TNM stage, pre-treatment nodal size and dose delivered were recorded. Each patient’s treatment plan was reviewed to determine whether another computed tomography (re-CT) scan was performed and at what time point (dose/fraction) this occurred. The justification for this re-CT was classified into four categories: tumour and/or nodal regression, weight loss, both, or other. Patients who underwent a re-CT were further investigated to determine whether a new plan was calculated. If a re-plan was performed, the dosimetric effect was quantified by comparing dose volume histograms of planning target volumes and critical structures from the actual treatment delivered and the original treatment plan. Preliminary results demonstrated that 25/121 (20.7%) patients required a re-CT and that these re-CTs were performed between fractions 20 and 25 of treatment. The justification for these re-CTs consisted of a combination of tumour and/or nodal regression and weight loss. Sixteen of these patients (13.2% of the cohort) had a re-plan calculated, and 9 (7.4%) of these re-plans were implemented clinically due to the resultant dosimetric effect. The data collected from this assessment were statistically analysed to identify the major factors determining whether patients undergo a re-CT and/or re-plan. Specific factors identified included nodal size and timing of the required intervention (i.e. how and when a plan is to be adapted). These data were used to generate specific risk profiles that will form the basis of a biologically guided adaptive treatment management strategy for virally mediated head and neck cancer. Conclusion: Preliminary data indicate that virally mediated head and neck cancers respond significantly during radiation treatment (tumour and/or nodal regression and weight loss). Implications of this response are the potential underdosing or overdosing of the tumour and/or surrounding critical structures. This could lead to sub-optimal patient outcomes and compromised quality of life. Consequently, the development of adaptive treatment strategies that improve organ sparing for this patient group is important to ensure delivery of the prescribed dose to the tumour volume whilst minimising the dose received by surrounding critical structures. This could reduce side effects and improve overall patient quality of life. The risk profiles and associated adaptive treatment approaches developed in this study will be tested prospectively in the clinical setting in Phase 2 of this investigation.
Abstract:
During the last four decades, educators have created a range of critical literacy approaches for different contexts, including compulsory schooling (Luke & Woods, 2009) and second language education (Luke & Dooley, 2011). Despite inspirational examples of critical work with young students (e.g., O’Brien, 1994; Vasquez, 1994), Comber (2012) laments the persistent myth that critical literacy is not viable in the early years. Assumptions about childhood innocence and the priorities of the back-to-basics movement seem to limit the possibilities for early years literacy teaching and learning. Yet teachers of young students need not face an either/or choice between the basic and critical dimensions of literacy. Systematic ways of treating literacy in all its complexity exist. We argue that the integrative imperative is especially important in schools that are under pressure to improve technical literacy outcomes. In this chapter, we document how critical literacy was addressed in a fairytales unit taught to 4.5- to 5.5-year-olds in a high-diversity, high-poverty Australian school. We analyze the affordances and challenges of different approaches to critical literacy, concluding that they are complementary rather than competing sources of possibility. Furthermore, we make the case for turning familiar classroom activities to critical ends.
Abstract:
Introduction: Clinical investigation has revealed a subgroup of head and neck cancers that are virally mediated. The relationship between nasopharyngeal cancer and Epstein-Barr Virus (EBV) has long been established and, more recently, the association between oropharyngeal cancer and Human Papillomavirus (HPV) has been revealed [1,2]. These cancers often present with nodal involvement and generally respond well to radiation treatment, evidenced by tumour regression [1]. This results in the need for treatment plan adaptation or re-planning in a subset of patients. Adaptive techniques allow the target region of the radiotherapy treatment plan to be altered in accordance with treatment-induced changes to ensure that under- or over-dosing does not occur [3]. They also assist in limiting potential overdosing of surrounding critical normal tissues [4]. We sought to identify a high-risk group based on nodal size to be evaluated in a future prospective adaptive radiotherapy trial. Method: Between 2005 and 2010, 121 patients with virally mediated, node-positive nasopharyngeal (EBV-positive) or oropharyngeal (HPV-positive) cancers receiving curative-intent radiotherapy treatment were reviewed. Patients were analysed based on the maximum size of the dominant node at diagnosis, with a view to grouping them into risk categories to determine the need for re-planning. The frequency and timing of the re-planning scans were also evaluated. Results: Sixteen nasopharyngeal and 105 oropharyngeal tumours were reviewed. Twenty-five (21%) patients underwent a re-planning CT at a median of 22 (range, 0-29) fractions, with 1 patient requiring re-planning prior to the commencement of treatment. Based on the analysis, patients were subsequently placed into risk categories: ≤35mm (Group 1), 36-45mm (Group 2), ≥46mm (Group 3). Re-planning CTs were performed in Group 1 for 8/68 (11.8%), Group 2 for 4/28 (14.3%), and Group 3 for 13/25 (52%). Conclusion: In this series, patients with virally mediated head and neck cancer and nodal size ≥46mm appear to be a high-risk group for re-planning during a course of curative radiotherapy. This finding will now be tested in a prospective adaptive radiotherapy study. ‘Real World’ Implications: This research identifies predictive factors for those patients with virally mediated head and neck cancer who will benefit most from treatment adaptation. This will assist in minimising the side effects experienced by these patients, thereby improving their quality of life after treatment.
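The reported stratification reduces to a simple threshold rule; a trivial sketch using the cut-points from the abstract (the function name and integer return convention are our own, and nodal sizes are assumed to be reported in whole millimetres):

```python
def replan_risk_group(max_nodal_size_mm: float) -> int:
    """Assign the risk category reported in the abstract from the maximum
    size of the dominant node at diagnosis (mm).

    Group 1 (<= 35 mm): 11.8% required a re-planning CT
    Group 2 (36-45 mm): 14.3%
    Group 3 (>= 46 mm): 52%  -- the high-risk group
    """
    if max_nodal_size_mm <= 35:
        return 1
    if max_nodal_size_mm <= 45:
        return 2
    return 3

assert replan_risk_group(30) == 1
assert replan_risk_group(40) == 2
assert replan_risk_group(50) == 3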
Abstract:
Background: The size of the carrier influences drug aerosolization from a dry powder inhaler (DPI) formulation. Lactose particles with irregular shape and rough surface, in a variety of sizes, are commonly used as carriers; however, contradictory reports exist regarding the effect of carrier size on the dispersion of drug. We examined the influence of the spherical particle size of the biodegradable polylactide-co-glycolide (PLGA) carrier on the aerosolization of a model drug, salbutamol sulphate (SS). Methods: Four different sizes (20-150 µm) of polymer carriers were fabricated using a solvent evaporation technique, and the dispersion of SS from these carriers was measured by a twin-stage impinger (TSI). The size and morphological properties of the polymer carriers were determined by laser diffraction and SEM, respectively. Results: The fine particle fraction (FPF) was found to increase from 5.6% to 21.3% with increasing carrier size up to 150 µm. Conclusions: The aerosolization of drug increased linearly with the size of the polymer carriers. For a fixed mass of drug particles in a formulation, the mass of drug particles per unit area of carrier is higher in formulations containing the larger carriers, which leads to an increase in the dispersion of drug due to the increased mechanical forces occurring between the carriers and the device walls.
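For context, the fine particle fraction from a twin-stage impinger is conventionally computed as the drug mass collected in the lower (fine-particle) stage divided by the total recovered drug mass (some authors normalise to the emitted dose instead); a minimal sketch with hypothetical masses chosen to reproduce the percentages above:

```python
def fine_particle_fraction(lower_stage_mass_ug, total_recovered_mass_ug):
    """FPF (%) = drug mass reaching the lower (fine-particle) TSI stage
    divided by the total recovered drug mass, times 100."""
    return 100.0 * lower_stage_mass_ug / total_recovered_mass_ug

# Hypothetical recovery data (micrograms), not values from the paper:
print(fine_particle_fraction(28, 500))   # 5.6%, like the smallest carrier
print(fine_particle_fraction(106.5, 500))  # 21.3%, like the 150 um carrier
```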
Abstract:
The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operational downtime, and safety hazards. Predicting the survival time and the probability of failure in future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to traditional reliability analysis is the modelling of condition indicators and operating environment indicators, and their failure-generating mechanisms, using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed, all of them based on the underlying theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have to some extent been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models neglect to fully incorporate three types of asset health information (failure event data (i.e. observed and/or suspended), condition data, and operating environment data) into a single model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data. Condition indicators act as response variables (or dependent variables), whereas operating environment indicators act as explanatory variables (or independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing the aforementioned challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three available types of asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators.
Condition indicators provide information about the health condition of an asset; therefore, they update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Some examples of condition indicators are the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few. Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators are caused by the environment in which an asset operates and have not been explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators could be nought in EHM, condition indicators are always present, because they are observed and measured for as long as an asset is operational. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. According to the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications, due to sparse failure event data, the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM in two forms is another merit of the model. A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimated results with those of other existing covariate-based hazard models. The comparison results demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified: a new parameter estimation method for the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
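Reading only from the description above (the notation is ours, not the thesis's), the structure of EHM can be sketched as a baseline hazard that depends on both time and the condition indicators Z(t), multiplied by a covariate function of the operating environment indicators E:

```latex
% Sketch of the EHM structure as described above (notation assumed):
%   h_0(t, Z(t))          baseline hazard, updated by condition indicators
%   \psi(\gamma^{\top} E) covariate function of operating environment
%                         indicators, accelerating or decelerating failure
h\bigl(t \mid Z(t), E\bigr) = h_0\bigl(t, Z(t)\bigr)\,\psi\bigl(\gamma^{\top} E\bigr)
```

When the environment effects are nought, ψ reduces to 1 and the hazard collapses to the condition-updated baseline, consistent with the behaviour described above; this also shows why no proportionality assumption is needed across the condition indicators, which enter the baseline itself rather than a multiplicative covariate term.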
Abstract:
YBa2Cu3O7-x wires have been extruded with 2 and 5 wt.% of hydroxypropyl methylcellulose (HPMC) as binder. Both sets of wires sintered below 930°C have equiaxed grains, while the wires sintered above this temperature have elongated grains. In the temperature range which gives equiaxed grains, the wires extruded with 5 wt.% HPMC have a larger grain size and higher density. Cracks along the grain boundaries are often observed in the wires having elongated grains. The critical current density, Jc, initially increases with sintering temperature, reaches a peak, and then decreases. The sintering temperature giving a peak in Jc depends strongly on the heat treatment scheme for the wires extruded with 5 wt.% HPMC. TEM studies show that defective layers are formed along grain boundaries in the wires extruded with 5 wt.% HPMC after 5 h oxygenation. After 55 h oxygenation, the defective layers become more localised and grain boundaries adopt an overall cleaner appearance. Densification with equiaxed grains and clean grain boundaries produces the highest Jc values for polycrystalline YBa2Cu3O7 wires.
Abstract:
YBCO wires consisting of well-oriented, plate-like fine grains are fabricated using a moving furnace to achieve higher mechanical strength. Melt-texturing experiments have been undertaken on YBCO wires with two different compositions: YBa1.5Cu2.9O7-x and YBa1.8Cu3.0O7-x. Wires are extruded from a mixture of precursor powders (formed by a coprecipitation process) and then textured by firing in a moving furnace. The size of secondary phases, such as barium cuprate and copper oxide, and the overall composition of the sample affect the orientation of the fine grains. At zero magnetic field, the YBa1.5Cu2.9O7-x wire shows the highest critical current densities of 1,450 A/cm² and 8,770 A/cm² at 77 K and 4.2 K, respectively. At 1 T, critical current densities of 30 A/cm² and 200 A/cm², respectively, are obtained at 77 K and 4.2 K. Magnetisation curves are also obtained for one sample to evaluate the critical current density using the Bean model. Analysis of the microstructure indicates that the starting composition of the green body significantly affects the achievement of grain alignment via melt-texturing processes.
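For reference, the Bean critical-state estimate referred to here, in its standard CGS form for a cylindrical sample (a textbook formula, not a value taken from the paper), is:

```latex
% Bean critical-state model: J_c (A/cm^2) from the width \Delta M of the
% magnetisation hysteresis loop (emu/cm^3), for a cylinder of diameter d (cm).
J_c = \frac{30\,\Delta M}{d}
```

The width of the hysteresis loop at a given field thus gives a non-contact estimate of Jc that can be compared with the transport values quoted above.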