891 results for heterogeneous UAVs
The increased popularity of mopeds and motor scooters: exploring usage patterns and safety outcomes
Abstract:
Increased use of powered two-wheelers (PTWs) often underlies increases in the number of reported crashes, promoting research into PTW safety. PTW riders are overrepresented in crash and injury statistics relative to exposure and, as such, are considered vulnerable road users. PTW use has increased substantially over the last decade in many developed countries. One such country is Australia, where moped and scooter use has increased at a faster rate than motorcycle use in recent years. Increased moped use is particularly evident in the State of Queensland which is one of four Australian jurisdictions where moped riding is permitted for car licence holders and a motorcycle licence is not required. A moped is commonly a small motor scooter and is limited to a maximum design speed of 50 km/h and a maximum engine cylinder capacity of 50 cubic centimetres. Scooters exceeding either of these specifications are classed as motorcycles in all Australian jurisdictions. While an extensive body of knowledge exists on motorcycle safety, some of which is relevant to moped and scooter safety, the latter PTW types have received comparatively little focused research attention. Much of the research on moped safety to date has been conducted in Europe where they have been popular since the mid 20th century, while some studies have also been conducted in the United States. This research is of limited relevance to Australia due to socio-cultural, economic, regulatory and environmental differences. Moreover, while some studies have compared motorcycles to mopeds in terms of safety, no research to date has specifically examined the differences and similarities between mopeds and larger scooters, or between larger scooters and motorcycles. To address the need for a better understanding of moped and scooter use and safety, the current program of research involved three complementary studies designed to achieve the following aims: (1) develop better knowledge and understanding of moped and scooter usage trends and patterns; and (2) determine the factors leading to differences in moped, scooter and motorcycle safety. Study 1 involved six-monthly observations of PTW types in inner city parking areas of Queensland’s capital city, Brisbane, to monitor and quantify the types of PTW in use over a two year period. Study 2 involved an analysis of Queensland PTW crash and registration data, primarily comparing the police-reported crash involvement of mopeds, scooters and motorcycles over a five year period (N = 7,347). Study 3 employed both qualitative and quantitative methods to examine moped and scooter usage in two components: (a) four focus group discussions with Brisbane-based Queensland moped and scooter riders (N = 23); and (b) a state-wide survey of Queensland moped and scooter riders (N = 192). Study 1 found that of the PTW types parked in inner city Brisbane over the study period (N = 2,642), more than one third (36.1%) were mopeds or larger scooters. The number of PTWs observed increased at each six-monthly phase, but there were no significant changes in the proportions of PTW types observed across study phases. There were no significant differences in the proportions or numbers of PTW type observed by season. Study 2 revealed some important differences between mopeds, scooters and motorcycles in terms of safety and usage through analysis of crash and registration data. All Queensland PTW registrations doubled between 2001 and 2009, but there was an almost fifteen-fold increase in moped registrations. 
Mopeds subsequently increased as a proportion of Queensland registered PTWs from 1.2 percent to 8.8 percent over this nine year period. Moped and scooter crashes increased at a faster rate than motorcycle crashes over the five year study period from July 2003 to June 2008, reflecting their relatively greater increased usage. Crash rates per 10,000 registrations for the study period were only slightly higher for mopeds (133.4) than for motorcycles and scooters combined (124.8), but estimated crash rates per million vehicle kilometres travelled were higher for mopeds (6.3) than motorcycles and scooters (1.7). While the number of crashes increased for each PTW type over the study period, the rate of crashes per 10,000 registrations declined by 40 percent for mopeds compared with 22 percent for motorcycles and scooters combined. Moped and scooter crashes were generally less severe than motorcycle crashes and this was related to the particular crash characteristics of the PTW types rather than to the PTW types themselves. Compared to motorcycle and moped crashes, scooter crashes were less likely to be single vehicle crashes, to involve a speeding or impaired rider, to involve poor road conditions, or to be attributed to rider error. Scooter and moped crashes were more likely than motorcycle crashes to occur on weekdays, in lower speed zones and at intersections. Scooter riders were older on average (39) than moped (32) and motorcycle (35) riders, while moped riders were more likely to be female (36%) than scooter (22%) or motorcycle riders (7%). The licence characteristics of scooter and motorcycle riders were similar, with moped riders more likely to be licensed outside of Queensland and less likely to hold a full or open licence. The PTW type could not be identified in 15 percent of all cases, indicating a need for more complete recording of vehicle details in the registration data. The focus groups in Study 3a and the survey in Study 3b suggested that moped and scooter riders are a heterogeneous population in terms of demographic characteristics, riding experience, and knowledge and attitudes regarding safety and risk. The self-reported crash involvement of Study 3b respondents suggests that most moped and scooter crashes result in no injury or minor injury and are not reported to police. Study 3 provided some explanation for differences observed in Study 2 between mopeds and scooters in terms of crash involvement. On the whole, scooter riders were older, more experienced, more likely to have undertaken rider training and to value rider training programs. Scooter riders were also more likely to use protective clothing and to seek out safety-related information. This research has some important practical implications regarding moped and scooter use and safety. While mopeds and scooters are generally similar in terms of usage, and their usage has increased, scooter riders appear to be safer than moped riders due to some combination of superior skills and safer riding behaviour. It is reasonable to expect that mopeds and scooters will remain popular in Queensland in future and that their usage may further increase, along with that of motorcycles. Future policy and planning should consider potential options for encouraging moped riders to acquire better riding skills and greater safety awareness. While rider training and licensing appears an obvious potential countermeasure, the effectiveness of rider training has not been established and other options should also be strongly considered. 
Such options might include rider education and safety promotion, while interventions could also target other road users and urban infrastructure. Future research is warranted in regard to moped and scooter safety, particularly where the use of those PTWs has increased substantially from low levels. Research could address areas such as rider training and licensing (including program evaluations), the need for more detailed and reliable data (particularly crash and exposure data), protective clothing use, risks associated with lane splitting and filtering, and tourist use of mopeds. Some of this research would likely be relevant to motorcycle use and safety, as well as that of mopeds and scooters.
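The exposure-based rates quoted above follow directly from crash counts divided by a measure of exposure. As a purely illustrative sketch (the figures below are placeholders, not Study 2 data), the calculation looks like this in Python:

```python
# Illustrative only: how crash rates per 10,000 registrations and per
# million vehicle kilometres travelled (VKT) are derived. The counts
# below are placeholders, not figures from Study 2.
crashes = 400            # police-reported crashes over the study period
registrations = 30_000   # registered vehicles of that PTW type
vkt_millions = 63.5      # estimated travel over the period, millions of km

rate_per_10k_registrations = crashes / registrations * 10_000
rate_per_million_vkt = crashes / vkt_millions

print(f"{rate_per_10k_registrations:.1f} crashes per 10,000 registrations")
print(f"{rate_per_million_vkt:.2f} crashes per million VKT")
```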
Abstract:
1. Local extinctions in habitat patches and asymmetric dispersal between patches are key processes structuring animal populations in heterogeneous environments. Effective landscape conservation requires an understanding of how habitat loss and fragmentation influence demographic processes within populations and movement between populations. 2. We used patch occupancy surveys and molecular data for a rainforest bird, the logrunner (Orthonyx temminckii), to determine (i) the effects of landscape change and patch structure on local extinction; (ii) the asymmetry of emigration and immigration rates; (iii) the relative influence of local and between-population landscapes on asymmetric emigration and immigration; and (iv) the relative contributions of habitat loss and habitat fragmentation to asymmetric emigration and immigration. 3. Whether or not a patch was occupied by logrunners was primarily determined by the isolation of that patch. After controlling for patch isolation, patch occupancy declined in landscapes experiencing high levels of rainforest loss over the last 100 years. Habitat loss and fragmentation over the last century was more important than the current pattern of patch isolation alone, which suggested that immigration from neighbouring patches was unable to prevent local extinction in highly modified landscapes. 4. We discovered that dispersal between logrunner populations is highly asymmetric. Emigration rates were 39% lower when local landscapes were fragmented, but emigration was not limited by the structure of the between-population landscapes. In contrast, immigration was 37% greater when local landscapes were fragmented and was lower when the between-population landscapes were fragmented. Rainforest fragmentation influenced asymmetric dispersal to a greater extent than did rainforest loss, and a 60% reduction in mean patch area was capable of switching a population from being a net exporter to a net importer of dispersing logrunners. 5. The synergistic effects of landscape change on species occurrence and asymmetric dispersal have important implications for conservation. Conservation measures that maintain large patch sizes in the landscape may promote asymmetric dispersal from intact to fragmented landscapes and allow rainforest bird populations to persist in fragmented and degraded landscapes. These sink populations could form the kernel of source populations given sufficient habitat restoration. However, the success of this rescue effect will depend on the quality of the between-population landscapes.
Abstract:
Corticotropin-releasing factor (CRF) has been shown to induce various behavioral changes related to adaptation to stress. Dysregulation of the CRF system at any point can lead to a variety of psychiatric disorders, including substance use disorders (SUDs). CRF has been associated with stress-induced drug reinforcement. An extensive literature has identified CRF as playing an important role in the molecular mechanisms that increase susceptibility to relapse in SUDs. The CRF system has a heterogeneous role in SUDs. It enhances the acute effects of drugs of abuse and is also responsible for the potentiation of drug-induced neuroplasticity evoked during the withdrawal period. In this review, we present the brain regions and circuitries where CRF is expressed and may participate in stress-induced drug abuse. Finally, we attempt to evaluate the role of modulating the CRF system as a possible therapeutic strategy for treating the dysregulation of emotional behaviors that result from the acute positive reinforcement of substances of abuse as well as the negative reinforcement produced by withdrawal.
Abstract:
At present, many approaches have been proposed for deformable face alignment, with varying degrees of success. However, the common drawback to nearly all of these approaches is inaccurate landmark registration. The registration errors that occur are predominantly heterogeneous (i.e. low error for some frames in a sequence and higher error for others). In this paper we propose an approach for simultaneously aligning an ensemble of deformable face images stemming from the same subject, given noisy heterogeneous landmark estimates. We propose that these initial noisy landmark estimates can be used as an “anchor” in conjunction with known state-of-the-art objectives for unsupervised image ensemble alignment. Impressive alignment performance is obtained using well-known deformable face fitting algorithms as “anchors”.
Abstract:
The standard approach to tax compliance applies the economics-of-crime methodology pioneered by Becker (1968): in its first application, due to Allingham and Sandmo (1972), it models the behaviour of agents as a decision involving a choice of the extent of their income to report to tax authorities, given a certain institutional environment, represented by parameters such as the probability of detection and penalties in the event the agent is caught. While this basic framework yields important insights on tax compliance behaviour, it has some critical limitations. Specifically, it indicates a level of compliance that is significantly below what is observed in the data. This thesis revisits the original framework with a view towards addressing this issue, and examining the political economy implications of tax evasion for progressivity in the tax structure. The approach followed involves building a macroeconomic, dynamic equilibrium model for the purpose of examining these issues, using a step-wise model building procedure that starts with some very simple variations of the basic Allingham and Sandmo construct, which are eventually integrated into a dynamic general equilibrium overlapping generations framework with heterogeneous agents. One of the variations involves incorporating the Allingham and Sandmo construct into a two-period model of a small open economy of the type originally attributed to Fisher (1930). A further variation of this simple construct involves allowing agents to initially decide whether to evade taxes or not. In the event they decide to evade, the agents then have to decide the extent of income or wealth they wish to under-report. We find that the ‘evade or not’ assumption has strikingly different and more realistic implications for the extent of evasion, and demonstrate that it is a more appropriate modelling strategy in the context of macroeconomic models, which are essentially dynamic in nature and involve consumption smoothing across time and across various states of nature. Specifically, since deciding to undertake tax evasion impacts on the consumption smoothing ability of the agent by creating two states of nature in which the agent is ‘caught’ or ‘not caught’, there is a possibility that their utility under certainty, when they choose not to evade, is higher than the expected utility obtained when they choose to evade. Furthermore, the simple two-period model incorporating an ‘evade or not’ choice can be used to demonstrate some strikingly different political economy implications relative to its Allingham and Sandmo counterpart. In variations of the two models that allow for voting on the tax parameter, we find that agents typically choose to vote for a high degree of progressivity by choosing the highest available tax rate from the menu of choices available to them. There is, however, a small range of inequality levels for which agents in the ‘evade or not’ model vote for a relatively low value of the tax rate. The final steps in the model building procedure involve grafting the two-period models with a political economy choice into a dynamic overlapping generations setting with more general, non-linear tax schedules and a ‘cost-of-evasion’ function that is increasing in the extent of evasion. Results based on numerical simulations of these models show further improvement in the model’s ability to match empirically plausible levels of tax evasion.
In addition, the differences between the political economy implications of the ‘evade or not’ version of the model and its Allingham and Sandmo counterpart are now very striking; there is now a large range of values of the inequality parameter for which agents in the ‘evade or not’ model vote for a low degree of progressivity. This is because, in the ‘evade or not’ version of the model, low values of the tax rate encourage a large number of agents to choose the ‘not-evade’ option, so that the redistributive mechanism is more ‘efficient’ relative to the situations in which tax rates are high. Some further implications of the models of this thesis relate to whether variations in the level of inequality, and parameters such as the probability of detection and penalties for tax evasion, matter for the political economy results. We find that (i) the political economy outcomes for the tax rate are quite insensitive to changes in inequality, and (ii) the voting outcomes change in non-monotonic ways in response to changes in the probability of detection and penalty rates. Specifically, the model suggests that changes in inequality should not matter, although the political outcome for the tax rate for a given level of inequality is conditional on whether there is a large or small extent of evasion in the economy. We conclude that further theoretical research into macroeconomic models of tax evasion is required to identify the structural relationships underpinning the link between inequality and redistribution in the presence of tax evasion. The models of this thesis provide a necessary first step in that direction.
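The ‘evade or not’ comparison described above amounts to checking whether the agent's certain utility from full compliance exceeds the best expected utility attainable through evasion. The sketch below illustrates that check in a generic Allingham–Sandmo-style setup; the CRRA utility, penalty structure and parameter values are illustrative assumptions, not the calibration used in the thesis.

```python
from scipy.optimize import minimize_scalar

# Illustrative assumptions (not the thesis's calibration)
w, t = 100.0, 0.30      # income and flat tax rate
p, s = 0.05, 1.5        # detection probability; penalty multiple on evaded tax
gamma = 2.0             # CRRA risk-aversion coefficient

def u(c):
    return c ** (1 - gamma) / (1 - gamma)

def expected_utility(x):
    """Expected utility when reporting income x <= w (Allingham-Sandmo style):
    if caught, the agent repays the evaded tax plus a penalty of s times it."""
    evaded_tax = t * (w - x)
    c_not_caught = w - t * x
    c_caught = w - t * x - (1 + s) * evaded_tax
    return (1 - p) * u(c_not_caught) + p * u(c_caught)

# Best report the agent could choose if it decides to evade at all
res = minimize_scalar(lambda x: -expected_utility(x), bounds=(0.0, w), method="bounded")
eu_evade = expected_utility(res.x)

# Utility under certainty if the agent chooses not to evade
u_honest = u(w - t * w)

print(f"optimal report if evading: {res.x:.1f}")
print(f"EU(evade) = {eu_evade:.5f}   U(not evade) = {u_honest:.5f}")
print("decision:", "evade" if eu_evade > u_honest else "do not evade")
```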
Abstract:
Automated airborne collision-detection systems are a key enabling technology for facilitating the integration of unmanned aerial vehicles (UAVs) into the national airspace. These safety-critical systems must be sensitive enough to provide timely warnings of genuine airborne collision threats, but not so sensitive as to cause excessive false alarms. Hence, an accurate characterisation of detection and false alarm sensitivity is essential for understanding performance trade-offs, and system designers can exploit this characterisation to help achieve a desired balance in system performance. In this paper we experimentally evaluate a sky-region, image-based, aircraft collision detection system that is based on morphological and temporal processing techniques. (Note that the examined detection approaches are not suitable for the detection of potential collision threats against a ground clutter background.) A novel methodology for collecting realistic airborne collision-course target footage in both head-on and tail-chase engagement geometries is described. Under (hazy) blue sky conditions, our proposed system achieved detection ranges greater than 1540 m in 3 flight test cases with no false alarm events in 14.14 hours of non-target data (under cloudy conditions, the system achieved detection ranges greater than 1170 m in 4 flight test cases with no false alarm events in 6.63 hours of non-target data). Importantly, this paper is the first documented presentation of detection range versus false alarm curves generated from airborne target and non-target image data.
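The abstract does not spell out the image-processing pipeline, but morphological small-target detection of this kind is commonly built around a close-minus-open filter followed by a temporal persistence test. The sketch below (OpenCV/NumPy) is a generic illustration under that assumption, not the authors' system; the threshold, kernel size and persistence rule are placeholders.

```python
import cv2
import numpy as np

def cmo_filter(gray, kernel_size=5):
    """Close-minus-open ('CMO') morphological filter: emphasises small
    bright or dark features against a slowly varying sky background."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    closed = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)
    opened = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)
    return cv2.subtract(closed, opened)

def detect(frames, spatial_thresh=20, persistence=5):
    """Crude temporal stage: flag pixels whose filter response exceeds the
    threshold in at least `persistence` consecutive frames."""
    consecutive = None
    detections = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        mask = cmo_filter(gray) > spatial_thresh
        if consecutive is None:
            consecutive = np.zeros(mask.shape, dtype=np.int32)
        consecutive = np.where(mask, consecutive + 1, 0)
        detections.append(np.argwhere(consecutive >= persistence))
    return detections
```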
Abstract:
BACKGROUND The engineering profession in Australia has failed to attract young women for the last decade or so, despite all the effort that has gone into promoting engineering as a preferred career choice for girls. It is a missed opportunity for the profession to flourish as a heterogeneous team. Many traditional initiatives and programs have failed to make much impact, or at best have produced incremental improvements, in attracting and retaining more women in the profession. The reasons why girls and young women in most parts of the world show little interest in engineering haven't changed despite all the efforts to address them; the issue addressed in this paper concerns the perceptions of engineering in the community and the confidence to pursue it. This gender imbalance is detrimental to the engineering profession, and hence an action-based intervention strategy was devised by the Women in Engineering Qld Chapter of Engineers Australia in 2012 to change the perceptions of school girls by redesigning the engagement strategy and key messages. As a result, the “Power of Engineering Inc” (PoE) was established as a not-for-profit organisation, and is a collaborative effort between government, schools, universities, and industry. This paper examines a case study in changing the perceptions of year 9 and 10 school girls towards an engineering career. PURPOSE To evaluate and determine the effectiveness of an intervention in changing the perceptions of year 9 and 10 school girls about engineering career options, asking specifically: “What were their perceptions of engineering before today and have those perceptions changed?” DESIGN/METHOD The inaugural Power of Engineering (PoE) event was held on International Women’s Day, Thursday 8 March 2012, and was attended by 131 high school female students (year 9 and 10) and their teachers. The key message of the day was “engineering gives you the power to change the world”. A questionnaire was conducted with the participating high school female students, collecting both quantitative and qualitative data. The survey instrument has not been validated. RESULTS The key to the success of the event was the collaboration between all participants involved and the connection created between government, schools, universities and industry. Of the returned surveys (109 of 131), 91% of girls would now consider a career in engineering, and 57% of those who had not considered engineering before the day would now consider a career in engineering. Data collected found significant numbers of negative and varying perceptions about engineering careers prior to the intervention. CONCLUSIONS The evidence in this research suggests that the intervention assisted in changing the perceptions of year 9 and 10 female school students towards engineering as a career option. Whether this intervention translates into actual career selection and study enrolment is yet to be determined. In saying this, the evidence suggests that there is a critical and urgent need for earlier interventions, prior to students selecting their subjects for year 11 and 12. This intervention could also play its part in increasing the overall pool of students engaged in STEM education.
Abstract:
This paper presents a novel evolutionary computation approach to three-dimensional path planning for unmanned aerial vehicles (UAVs) with tactical and kinematic constraints. A genetic algorithm (GA) is modified and extended for path planning. Two GAs are seeded at the initial and final positions with a common objective to minimise their distance apart under given UAV constraints. This is accomplished by the synchronous optimisation of subsequent control vectors. The proposed evolutionary computation approach is called the synchronous genetic algorithm (SGA). The sequence of control vectors generated by the SGA constitutes a near-optimal path plan. The resulting path plan exhibits no discontinuity when transitioning from curved to straight trajectories. Experiments and results show that the paths generated by the SGA are within 2% of the optimal solution. Such a path planner, when implemented on a hardware accelerator such as a field programmable gate array chip, can be used in the UAV as an on-board replanner, as well as in ground station systems for assisting in high-precision planning and modelling of mission scenarios.
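As a rough illustration of the bidirectional idea (two populations seeded at the start and goal, evolved with the common objective of minimising the gap between them), the toy sketch below runs a plain GA over bounded heading changes in 2D. It is not the SGA described in the paper; the crossover, mutation and turn-rate limit are simplified assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

STEPS, POP, GENS = 30, 60, 200
STEP_LEN, MAX_TURN = 1.0, np.radians(15)          # crude turn-rate (kinematic) limit
START, GOAL = np.array([0.0, 0.0]), np.array([20.0, 10.0])

def rollout(turns, origin, heading):
    """Integrate a sequence of bounded heading changes into a 2-D endpoint."""
    pos, h = origin.astype(float), heading
    for dh in turns:
        h += dh
        pos = pos + STEP_LEN * np.array([np.cos(h), np.sin(h)])
    return pos

def new_pop():
    return rng.uniform(-MAX_TURN, MAX_TURN, size=(POP, STEPS))

def evolve(pop, fitness):
    """Truncation selection, one-point crossover, Gaussian mutation."""
    elite = pop[np.argsort(fitness)[: POP // 2]]
    parents = elite[rng.integers(0, len(elite), size=(POP, 2))]
    cut = rng.integers(1, STEPS, size=POP)
    children = np.where(np.arange(STEPS) < cut[:, None], parents[:, 0], parents[:, 1])
    children = children + rng.normal(0.0, MAX_TURN / 10, children.shape)
    return np.clip(children, -MAX_TURN, MAX_TURN)

fwd, bwd = new_pop(), new_pop()
best_fwd_end, best_bwd_end = START.astype(float), GOAL.astype(float)
for _ in range(GENS):
    fwd_ends = np.array([rollout(ind, START, 0.0) for ind in fwd])
    bwd_ends = np.array([rollout(ind, GOAL, np.pi) for ind in bwd])
    # common objective: each population closes the gap to the other's best endpoint
    fwd_fit = np.linalg.norm(fwd_ends - best_bwd_end, axis=1)
    bwd_fit = np.linalg.norm(bwd_ends - best_fwd_end, axis=1)
    best_fwd_end = fwd_ends[np.argmin(fwd_fit)]
    best_bwd_end = bwd_ends[np.argmin(bwd_fit)]
    fwd, bwd = evolve(fwd, fwd_fit), evolve(bwd, bwd_fit)

print("gap between the two half-paths:", np.linalg.norm(best_fwd_end - best_bwd_end))
```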
Abstract:
Many methods exist at the moment for deformable face fitting. A drawback to nearly all of these approaches is that (i) they are noisy in terms of landmark positions, and (ii) the noise is biased across frames (i.e. the misalignment is toward common directions across all frames). In this paper we propose a grouped $\mathcal{L}_1$-norm anchored method for simultaneously aligning an ensemble of deformable face images stemming from the same subject, given noisy heterogeneous landmark estimates. Impressive alignment performance improvement and refinement is obtained using very weak initialization as "anchors".
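The grouped L1 formulation itself is not reproduced here; the sketch below only illustrates the underlying "anchor" idea in a simplified least-squares form: each frame's landmarks are repeatedly aligned, via a similarity-transform Procrustes fit, to a target that blends the evolving ensemble mean with that frame's noisy initial estimate. The function names and the blending weight are illustrative.

```python
import numpy as np

def similarity_fit(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping src (k, 2) onto dst (k, 2); Umeyama-style closed form."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    D = np.eye(2)
    if np.linalg.det(U @ Vt) < 0:
        D[1, 1] = -1.0
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

def transform(scale, R, t, pts):
    return (scale * (R @ pts.T)).T + t

def anchored_align(anchors, n_iter=20, lam=0.3):
    """anchors: (F, k, 2) noisy landmark estimates, one set per frame.
    Each frame is aligned to a blend of the ensemble mean and its own
    anchor; lam controls how strongly the anchors constrain the result."""
    aligned = anchors.astype(float).copy()
    for _ in range(n_iter):
        mean_shape = aligned.mean(axis=0)
        for f in range(len(aligned)):
            target = (1 - lam) * mean_shape + lam * anchors[f]
            s, R, t = similarity_fit(anchors[f], target)
            aligned[f] = transform(s, R, t, anchors[f])
    return aligned
```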
Abstract:
Several major human pathogens, including the filoviruses, paramyxoviruses, and rhabdoviruses, package their single-stranded RNA genomes within helical nucleocapsids, which bud through the plasma membrane of the infected cell to release enveloped virions. The virions are often heterogeneous in shape, which makes it difficult to study their structure and assembly mechanisms. We have applied cryo-electron tomography and sub-tomogram averaging methods to derive structures of Marburg virus, a highly pathogenic filovirus, both after release and during assembly within infected cells. The data demonstrate the potential of cryo-electron tomography methods to derive detailed structural information for intermediate steps in biological pathways within intact cells. We describe the location and arrangement of the viral proteins within the virion. We show that the N-terminal domain of the nucleoprotein contains the minimal assembly determinants for a helical nucleocapsid with variable number of proteins per turn. Lobes protruding from alternate interfaces between each nucleoprotein are formed by the C-terminal domain of the nucleoprotein, together with viral proteins VP24 and VP35. Each nucleoprotein packages six RNA bases. The nucleocapsid interacts in an unusual, flexible "Velcro-like" manner with the viral matrix protein VP40. Determination of the structures of assembly intermediates showed that the nucleocapsid has a defined orientation during transport and budding. Together the data show striking architectural homology between the nucleocapsid helix of rhabdoviruses and filoviruses, but unexpected, fundamental differences in the mechanisms by which the nucleocapsids are then assembled together with matrix proteins and initiate membrane envelopment to release infectious virions, suggesting that the viruses have evolved different solutions to these conserved assembly steps.
Abstract:
Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the “gold standard” for predicting dose deposition in the patient. In this study, software has been developed that enables the transfer of treatment plan information from the treatment planning system to a Monte Carlo dose calculation engine. A database of commissioned linear accelerator models (Elekta Precise and Varian 2100CD at various energies) has been developed using the EGSnrc/BEAMnrc Monte Carlo suite. Planned beam descriptions and CT images can be exported from the treatment planning system using the DICOM framework. The information in these files is combined with an appropriate linear accelerator model to allow the accurate calculation of the radiation field incident on a modelled patient geometry. The Monte Carlo dose calculation results are combined according to the monitor units specified in the exported plan. The result is a 3D dose distribution that could be used to verify treatment planning system calculations. The software, MCDTK (Monte Carlo Dicom ToolKit), has been developed in the Java programming language and produces BEAMnrc and DOSXYZnrc input files, ready for submission on a high-performance computing cluster. The code has been tested with the Eclipse (Varian Medical Systems), Oncentra MasterPlan (Nucletron B.V.) and Pinnacle3 (Philips Medical Systems) planning systems. In this study the software was validated against measurements in homogenous and heterogeneous phantoms. Monte Carlo models are commissioned through comparison with quality assurance measurements made using a large square field incident on a homogenous volume of water. This study aims to provide a valuable confirmation that Monte Carlo calculations match experimental measurements for complex fields and heterogeneous media.
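The MU-weighted combination step described above can be illustrated with a short sketch: per-beam Monte Carlo dose grids (assumed here to be calibrated per monitor unit) are scaled by the monitor units read from the DICOM RT Plan and summed. This is not MCDTK's code; the pydicom attributes used are standard RT Plan fields, but the per-MU calibration convention is an assumption.

```python
import pydicom

def combine_beam_doses(rtplan_path, beam_doses):
    """beam_doses: dict mapping beam number -> 3-D dose array assumed to be
    calibrated per monitor unit (e.g. from a per-beam DOSXYZnrc run).
    Returns the plan dose as the MU-weighted sum over referenced beams."""
    plan = pydicom.dcmread(rtplan_path)
    total = None
    for group in plan.FractionGroupSequence:
        for ref in group.ReferencedBeamSequence:
            mu = float(ref.BeamMeterset)                       # monitor units
            dose = beam_doses[int(ref.ReferencedBeamNumber)] * mu
            total = dose if total is None else total + dose
    return total
```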
Abstract:
Many older people have difficulties using modern consumer products due to increased product complexity, both in terms of functionality and interface design. Previous research has shown that older people have more difficulty in using complex devices intuitively than younger people. Furthermore, increased life expectancy and a falling birth rate have been catalysts for changes in world demographics over the past two decades. This trend also suggests a proportional increase of older people in the workforce. This realisation has led to research on the effective use of technology by older populations in an effort to engage them more productively and to assist them in leading independent lives. Ironically, not enough attention has been paid to the development of interaction design strategies that would actually enable older users to better exploit new technologies. Previous research suggests that if products are designed to reflect people's prior knowledge, they will appear intuitive to use. Since intuitive interfaces utilise domain-specific prior knowledge of users, they require minimal learning for effective interaction. However, older people are very diverse in their capabilities and domain-specific prior knowledge. In addition, ageing also slows down the process of acquiring new knowledge. Keeping these suggestions and limitations in view, the aim of this study was to investigate possible approaches to developing interfaces that facilitate intuitive use by older people. In this quest to develop intuitive interfaces for older people, two experiments were conducted that systematically investigated redundancy (the use of both text and icons) in interface design, complexity of interface structure (nested versus flat), and personal user factors such as cognitive abilities, perceived self-efficacy and technology anxiety. All of these factors could interfere with intuitive use. The results from the first experiment suggest that, contrary to what was hypothesised, older people (65+ years) completed the tasks on the text-only interface design faster than on the redundant interface design. The outcome of the second experiment showed that, as expected, older people took more time on a nested interface. However, they did not make significantly more errors compared with younger age groups. Contrary to what was expected, older age groups also did better under anxious conditions. The findings of this study also suggest that older age groups are more heterogeneous in their capabilities, and that their intuitive use of contemporary technological devices is mediated more by domain-specific technology prior knowledge and by their cognitive abilities than by chronological age. This makes it extremely difficult to develop product interfaces that are entirely intuitive to use. However, keeping the cognitive limitations of older people in view when interfaces are developed, and using simple text-based interfaces with a flat interface structure, would help them intuitively learn and use complex technological products successfully during early encounters with a product. These findings indicate that it might be more pragmatic if interfaces are designed for intuitive learning rather than for intuitive use. Based on this research and the existing literature, a model for adaptable interface design as a strategy for developing intuitively learnable product interfaces was proposed.
An adaptable interface can initially use a simple text only interface to help older users to learn and successfully use the new system. Over time, this can be progressively changed to a symbols-based nested interface for more efficient and intuitive use.
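The adaptable-interface strategy proposed above (begin with a flat, text-only layout and migrate toward an icon-based nested layout as familiarity grows) can be pictured as a simple mode switch driven by observed proficiency. The sketch below is purely illustrative; the thresholds and configuration fields are invented for the example, not taken from the study.

```python
from dataclasses import dataclass

@dataclass
class UsageProfile:
    successful_tasks: int = 0
    errors: int = 0

def choose_layout(profile: UsageProfile) -> dict:
    """Pick an interface configuration from observed proficiency.
    Thresholds and field names are illustrative, not from the study."""
    proficient = profile.successful_tasks >= 20 and profile.errors <= 2
    if proficient:
        # experienced users: denser, icon-based, nested menus
        return {"labels": "icons", "structure": "nested", "items_per_screen": 12}
    # early encounters: text-only labels, flat structure
    return {"labels": "text", "structure": "flat", "items_per_screen": 6}

print(choose_layout(UsageProfile(successful_tasks=3, errors=1)))
```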
Abstract:
This thesis aims to contribute to a better understanding of how serious games/games for change function as learning frameworks for transformative learning in an educational setting. This study illustrates how the meaning-making processes and learning with and through computer gameplay are highly contingent, and are significantly influenced by the uncertainties of the situational context. The study focuses on SCAPE, a simulation game that addresses urban planning and sustainability. SCAPE is based on the real-world scenario of Kelvin Grove Urban Village, an inner city redevelopment area in Brisbane, Queensland, Australia. The game is embedded within an educational program, and I thus account for the various gameplay experiences of different school classes participating in this program. The networks emerging from the interactions between students/players, educators, facilitators, the technology, the researcher, as well as the setting, result in unanticipated, controversial, and sometimes unintended gameplay experiences and outcomes. To unpack play, transformative learning and games, this study adopts an ecological approach that considers the magic circle of gameplay in its wider context. Using Actor-Network Theory as the ontological lens for inquiry, the methods for investigation include an extensive literature review, ethnographic participant observation of SCAPE, as well as student and teacher questionnaires, finishing with interviews with the designers and facilitators of SCAPE. Altogether, these methods address my research aim to better understand how the heterogeneous actors engage in the relationships in and around gameplay, and illustrate how their conflicting understandings enable, shape or constrain the (transformative) learning experience. To disentangle these complexities, my focus continuously shifts between the following modes of inquiry into the aims: (i) to describe and analyse the game as a designed artefact; (ii) to examine the gameplay experiences of players/students and account for how these experiences are constituted in the relationships of the network; (iii) to trace the meaning-making processes emerging from the various relations of players/students, facilitators, teachers, designers, technology, researcher, and setting, and consider how the boundaries of the respective ecology are configured and negotiated; and (iv) to draw out the implications for the wider research area of game-based learning by using the simulation game SCAPE as an example for introducing gameplay to educational settings. Accounting in detail for five school classes, these accounts represent, each in its own right, distinct and sometimes controversial forms of engagement in gameplay. The practices and negotiations of all the assembled human and non-human actors highlight the contingent nature of gameplay and learning. In their sum, they offer distinct but by no means exhaustive examples of the various relationships that emerge from the different assemblages of human and non-human actors. This thesis, hence, illustrates that game-based learning in an educational setting is accompanied by considerable unpredictability and uncertainty. 
As ordinary life spills and leaks into gameplay experiences, group dynamics and the negotiations of technology, I argue that overly deterministic assertions of the game's intention, as well as a too narrowly defined understanding of the transformative learning outcome, can constrain our inquiries and hinder efforts to further elucidate and understand the evolving uncertainties around game-based learning. Instead, this thesis posits that playing and transformative learning are relational effects of the respective ecology, where all actors are networked in their (partial) enrolment in the process of translation. This study thus attempts to foreground the rich opportunities for exploring how game-based learning is assembled as a network of practices.
Abstract:
Cu/Ni/W nanolayered composites with individual layer thickness ranging from 5 nm to 300 nm were prepared by a magnetron sputtering system. Microstructures and strength of the nanolayered composites were investigated by using the nanoindentation method combined with theoretical analysis. Microstructure characterization revealed that the Cu/Ni/W composite consists of a typical Cu/Ni coherent interface and Cu/W and Ni/W incoherent interfaces. Cu/Ni/W composites have an ultrahigh strength and a large strengthening ability compared with bi-constituent Cu–X (X = Ni, W, Au, Ag, Cr, Nb, etc.) nanolayered composites. Summarizing the present results and those reported in the literature, we systematically analyze the origin of the ultrahigh strength and its length-scale dependence by taking into account the constituent layer properties, layer scales and heterogeneous layer/layer interface characteristics, including lattice and modulus mismatch as well as interface structure.
Abstract:
For the timber industry, the ability to simulate the drying of wood is invaluable for manufacturing high quality wood products. Mathematically, however, modelling the drying of a wet porous material, such as wood, is a difficult task due to its heterogeneous and anisotropic nature, and the complex geometry of the underlying pore structure. The well-developed macroscopic modelling approach involves writing down classical conservation equations at a length scale where physical quantities (e.g., porosity) can be interpreted as averaged values over a small volume (typically containing hundreds or thousands of pores). This averaging procedure produces balance equations that resemble those of a continuum, with the exception that effective coefficients appear in their definitions. Exponential integrators are numerical schemes for initial value problems involving a system of ordinary differential equations. These methods differ from popular Newton–Krylov implicit methods (i.e., those based on the backward differentiation formulae (BDF)) in that they do not require the solution of a system of nonlinear equations at each time step, but rather require computation of matrix–vector products involving the exponential of the Jacobian matrix. Although originally appearing in the 1960s, exponential integrators have recently experienced a resurgence in interest due to a greater undertaking of research in Krylov subspace methods for matrix function approximation. One of the simplest examples of an exponential integrator is the exponential Euler method (EEM), which requires, at each time step, approximation of φ(A)b, where φ(z) = (e^z − 1)/z, A ∈ R^(n×n) and b ∈ R^n. For drying in porous media, the most comprehensive macroscopic formulation is TransPore [Perre and Turner, Chem. Eng. J., 86: 117-131, 2002], which features three coupled, nonlinear partial differential equations. The focus of the first part of this thesis is the use of the exponential Euler method (EEM) for performing the time integration of the macroscopic set of equations featured in TransPore. In particular, a new variable-stepsize algorithm for EEM is presented within a Krylov subspace framework, which allows control of the error during the integration process. The performance of the new algorithm highlights the great potential of exponential integrators not only for drying applications but across all disciplines of transport phenomena. For example, when applied to well-known benchmark problems involving single-phase liquid flow in heterogeneous soils, the proposed algorithm requires half the number of function evaluations required by an equivalent (sophisticated) Newton–Krylov BDF implementation. Furthermore, for all drying configurations tested, the new algorithm always produces, in less computational time, a solution of higher accuracy than the existing backward Euler module featured in TransPore. Some new results relating to Krylov subspace approximation of φ(A)b are also developed in this thesis. Most notably, an alternative derivation of the approximation error estimate of Hochbruck, Lubich and Selhofer [SIAM J. Sci. Comput., 19(5): 1552-1574, 1998] is provided, which reveals why it performs well in the error control procedure. Two of the main drawbacks of the macroscopic approach outlined above are that the effective coefficients must be supplied to the model, and that it fails for some drying configurations where typical dual-scale mechanisms occur. 
In the second part of this thesis, a new dual-scale approach for simulating wood drying is proposed that couples the porous medium (macroscale) with the underlying pore structure (microscale). The proposed model is applied to the convective drying of softwood at low temperatures and is valid in the so-called hygroscopic range, where hygroscopically held liquid water is present in the solid phase and water exists only as vapour in the pores. Coupling between scales is achieved by imposing the macroscopic gradient on the microscopic field using suitably defined periodic boundary conditions, which allows the macroscopic flux to be defined as an average of the microscopic flux over the unit cell. This formulation provides a first step for moving from the macroscopic formulation featured in TransPore to a comprehensive dual-scale formulation capable of addressing any drying configuration. Simulation results reported for a sample of spruce highlight the potential and flexibility of the new dual-scale approach. In particular, for a given unit cell configuration it is not necessary to supply the effective coefficients prior to each simulation.
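For readers unfamiliar with exponential integrators, the sketch below shows one common form of the exponential Euler step, y_{n+1} = y_n + h·φ(hJ)·f(y_n) with φ(z) = (e^z − 1)/z, evaluating φ(hJ)·f(y_n) via a dense augmented-matrix identity rather than the Krylov subspace approximation used in the thesis; it is a small illustration, not the thesis implementation.

```python
import numpy as np
from scipy.linalg import expm

def phi_times(A, b):
    """phi(A) @ b with phi(z) = (exp(z) - 1)/z, via the identity
    expm([[A, b], [0, 0]]) = [[expm(A), phi(A) @ b], [0, 1]].
    (A dense stand-in for the Krylov approximation used in the thesis.)"""
    n = len(b)
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = A
    M[:n, n] = b
    return expm(M)[:n, n]

def exponential_euler_step(f, jac, y, h):
    """One step of y_{n+1} = y_n + h * phi(h * J(y_n)) @ f(y_n)."""
    return y + h * phi_times(h * jac(y), f(y))

# Tiny linear test problem y' = A y; for linear problems the step is exact.
A = np.array([[-100.0, 1.0], [0.0, -2.0]])
f = lambda y: A @ y
jac = lambda y: A
y, h = np.array([1.0, 1.0]), 0.1
for _ in range(10):
    y = exponential_euler_step(f, jac, y, h)
print(y)                                  # numerical solution at t = 1.0
print(expm(A) @ np.array([1.0, 1.0]))     # exact solution for comparison
```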