225 results for Existence of optimal controls


Relevance:

100.00%

Publisher:

Abstract:

The implementation of ‘good governance’ in Indonesia’s regional government sector became a central tenet in governance research following the introduction of the national code for governance in 2006. The code was originally drafted in 1999 in response to the Asian financial crisis and to the many cases of unearthed corruption, collusion, and nepotism. It was reviewed in 2001 and again in 2006 to incorporate relevant political, economic, and social developments. Even though the national code exists alongside many regional government decrees on good governance, the extent to which the tenets of good governance have been implemented in Indonesia’s regional government is still questioned. Previous research on good governance implementation in Indonesian regional government (Mardiasmo, Barnes and Sakurai, 2008) identified differences in the nature and depth of implementation between various Indonesian regional governments. This paper analyses and extends this recent work and explores key factors that may impede the implementation and sustained application of governance practices across regional settings. The bureaucratic culture of Indonesian regional government was shaped over approximately 30 years, in particular during the Soeharto regime. Previous research on this regime suggests a bureaucratic culture with a mix of positive and negative aspects. On the one hand, Soeharto’s regime produced strong development growth and strong economic fundamentals, with Indonesia recognised as one of the Asian economic tigers prior to the 1997 Asian financial crisis. The crisis, however, revealed a bureaucratic culture that was rife with corruption, collusion, and nepotism. Although subsequent Indonesian governments have been committed to eradicating entrenched practices, it seems apparent that the culture is ingrained within the bureaucracy and its eradication will take time.
Informants from regional government agree with this observation: they identify good governance as an innovative mechanism whose implementation will mean a deviation from the “old ways.” Thus there is a need for a “changed” mindset in order to implement sustained governance practices. Such an exercise has proven challenging so far, as there is “hidden” resistance from within the bureaucracy to changing its ways. The inertia of such bureaucratic cultures forms a tension against the opportunity for the implementation of good governance. From this context an emergent finding is the existence of a ‘bureaucratic generation gap’ as a variable impeding enhanced and more efficient implementation of governance systems. It was found that after the Asian financial crisis the Indonesian government (at both national and regional levels) drew upon a wider human resources pool to fill government positions – including entrants from academia, the private sector, international institutions, foreign nationals and new graduates. It is suggested that this change in human capital within government is at the core of this ‘inter-generational divide.’ This divergence is exemplified, at one extreme, by [older] bureaucrats who have been in position for long periods, serving during the extended Soeharto regime. The “new” bureaucrats have only held their positions since the end of the Asian financial crisis and did not serve during Soeharto’s regime. It is argued that the existence of this generation gap and associated aspects of organisational culture have significantly impeded the modernising of governance practices across regional Indonesia. This paper examines the experiences of government employees in five Indonesian regions: Solok, Padang, Gorontalo, Bali, and Jakarta. Each regional government is examined using a mixed methodology comprising on-site observation, document analysis, and iterative semi-structured interviewing.
Drawing on the experiences of five regional governments in implementing good governance, this paper seeks to better understand the causal contexts of variable implementation of governance practices and to suggest enhancements to the development of policies for sustainable inter-generational change in governance practice across regional government settings.


The molecular structure of the uranyl mineral rutherfordine has been investigated by measurement of NIR and Raman spectra, complemented with infrared spectra and their interpretation. The spectra of rutherfordine show the presence of both water and hydroxyl units in the structure, as evidenced by IR bands at 3562 and 3465 cm-1 (OH) and 3343, 3185 and 2980 cm-1 (H2O). Raman spectra show the presence of four sharp bands at 3511, 3460, 3329 and 3151 cm-1. Corresponding molecular water bending vibrations were observed in both the Raman and infrared spectra of only one of the two rutherfordine samples studied. The second sample contained only hydroxyl ions in the equatorial uranyl plane and did not contain any molecular water. The infrared spectra of the (CO3)2- units in the antisymmetric stretching region show complexity, with three sets of carbonate bands observed. This, combined with the observation of multiple bands in the (CO3)2- bending region in both the Raman and IR spectra, suggests that both monodentate and bidentate (CO3)2- units may be present in the structure. This cannot be proven conclusively from the spectra alone; however, it is in accordance with X-ray crystallographic studies. Complexity is also observed in the IR spectra of the (UO2)2+ antisymmetric stretching region and is attributed to non-identical U-O bonds. U-O bond lengths were calculated using the wavenumbers of the ν3 and ν1 (UO2)2+ stretching vibrations and compared with data from the X-ray single-crystal structure analysis of rutherfordine. The existence of a solid solution with the general formula (UO2)(CO3)1-x(OH)2x·yH2O (x, y ≥ 0) is supported in the crystal structure of the rutherfordine samples studied.
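The abstract does not name the empirical relation used to convert uranyl stretching wavenumbers into U-O bond lengths. A common choice in uranyl mineral spectroscopy is the pair of relations of Bartlett and Cooney (1989); the sketch below assumes those relations and uses illustrative wavenumbers, so treat both the constants and the inputs as assumptions rather than the authors' actual procedure.

```python
# Hedged sketch: estimating uranyl U-O bond lengths (Angstrom) from the
# antisymmetric (nu3) and symmetric (nu1) (UO2)2+ stretching wavenumbers,
# using the Bartlett-Cooney empirical relations (assumed; the abstract does
# not state which relation the authors applied).

def bond_length_from_nu3(nu3_cm1: float) -> float:
    """U-O length from the antisymmetric stretch nu3 (cm^-1)."""
    return 91.41 * nu3_cm1 ** (-2.0 / 3.0) + 0.804

def bond_length_from_nu1(nu1_cm1: float) -> float:
    """U-O length from the symmetric stretch nu1 (cm^-1)."""
    return 106.5 * nu1_cm1 ** (-2.0 / 3.0) + 0.575

# Illustrative uranyl wavenumbers only (not values from the paper)
print(round(bond_length_from_nu3(930.0), 3))
print(round(bond_length_from_nu1(860.0), 3))
```

Both relations give bond lengths near the ~1.7-1.8 Angstrom range typical of uranyl U-O bonds, and a lower wavenumber (weaker bond) maps to a longer bond length.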


Since the 1980s, industries and researchers have sought to better understand the quality of services due to the rise in their importance (Brogowicz, Delene and Lyth 1990). More recent developments with online services, coupled with growing recognition of service quality (SQ) as a key contributor to national economies and as an increasingly important competitive differentiator, amplify the need to revisit our understanding of SQ and its measurement. Although ‘SQ’ can be broadly defined as “a global overarching judgment or attitude relating to the overall excellence or superiority of a service” (Parasuraman, Berry and Zeithaml 1988), the term has many interpretations. There has been considerable progress on how to measure SQ perceptions, but little consensus has been achieved on what should be measured. There is agreement that SQ is multi-dimensional, but little agreement as to the nature or content of these dimensions (Brady and Cronin 2001). For example, within the banking sector there exist multiple SQ models, each consisting of varying dimensions. The existence of multiple conceptions and the lack of a unifying theory bring the credibility of existing conceptions into question, and raise the question of whether it is possible, at some higher level, to define SQ broadly such that it spans all service types and industries. This research aims to explore the viability of a universal conception of SQ, primarily through a careful revisitation of the services and SQ literature. The study analyses the strengths and weaknesses of the highly regarded and widely used global SQ model (SERVQUAL), which reflects a single-level approach to SQ measurement. The SERVQUAL model states that customers evaluate the SQ of each service encounter based on five dimensions, namely reliability, assurance, tangibles, empathy and responsiveness. SERVQUAL, however, fails to address what needs to be reliable, assured, tangible, empathetic and responsive.
This research also addresses a more recent global SQ model from Brady and Cronin (2001), the B&C (2001) model, which has the potential to be the successor of SERVQUAL in that it encompasses other global SQ models and addresses the ‘what’ questions that SERVQUAL did not. The B&C (2001) model conceives SQ as being multidimensional and multi-level; this hierarchical approach to SQ measurement better reflects human perceptions. In line with the initial intention of SERVQUAL, which was developed to be generalizable across industries and service types, this research aims to develop a conceptual understanding of SQ, via literature and reflection, that encompasses the content/nature of factors related to SQ, and addresses the benefits and weaknesses of various SQ measurement approaches (i.e. disconfirmation versus perceptions-only). Such an understanding of SQ seeks to transcend industries and service types, with the intention of extending our knowledge of SQ and assisting practitioners in understanding and evaluating SQ. The candidate’s research has been conducted within, and seeks to contribute to, the ‘IS-Impact’ research track of the IT Professional Services (ITPS) Research Program at QUT. The vision of the track is “to develop the most widely employed model for benchmarking Information Systems in organizations for the joint benefit of research and practice.” The ‘IS-Impact’ research track has developed an Information Systems (IS) success measurement model, the IS-Impact Model (Gable, Sedera and Chan 2008), which seeks to fulfill the track’s vision. Results of this study will help future researchers in the ‘IS-Impact’ research track address questions such as:
• Is SQ an antecedent or consequence of the IS-Impact model, or both?
• Has SQ already been addressed by existing measures of the IS-Impact model?
• Is SQ a separate, new dimension of the IS-Impact model?
• Is SQ an alternative conception of the IS?
Results from the candidate’s research suggest that SQ dimensions can be classified at a higher level, encompassed by the B&C (2001) model’s three primary dimensions (interaction, physical environment and outcome). The candidate also notes that it might be viable to re-word the ‘physical environment quality’ primary dimension as ‘environment quality’ so as to better encompass both physical and virtual scenarios (e.g. websites). The candidate does not rule out the global feasibility of the B&C (2001) model’s nine sub-dimensions, but acknowledges that more work has to be done to better define them. The candidate observes that the ‘expertise’, ‘design’ and ‘valence’ sub-dimensions are supportive representations of the ‘interaction’, ‘physical environment’ and ‘outcome’ primary dimensions respectively; that is, customers evaluate each primary dimension (each higher level of SQ classification) based on the corresponding sub-dimension. The ability to classify SQ dimensions at a higher level, coupled with support for the measures that make up this higher level, leads the candidate to propose the B&C (2001) model as a unifying theory that acts as a starting point for measuring SQ and the SQ of IS. The candidate also notes, in parallel with the continuing validation and generalization of the IS-Impact model, that there is value in alternatively conceptualizing the IS as a ‘service’ and ultimately triangulating measures of IS SQ with the IS-Impact model. These further efforts are beyond the scope of the candidate’s study. Results from the candidate’s research also suggest that both the disconfirmation and perceptions-only approaches have their merits, and the choice of approach depends on the objective(s) of the study.
Should the objective be an overall evaluation of SQ, the perceptions-only approach is more appropriate, as it is more straightforward and reduces administrative overhead. However, should the objective be to identify SQ gaps (shortfalls), the (measured) disconfirmation approach is more appropriate, as it can identify the areas that need improvement.


INTRODUCTION In their target article, Yuri Hanin and Muza Hanina outlined a novel multidisciplinary approach to performance optimisation for sport psychologists called the Identification-Control-Correction (ICC) programme. According to the authors, this empirically verified, psycho-pedagogical strategy is designed to improve the quality of coaching and the consistency of performance in highly skilled athletes, and involves a number of steps: (i) identifying and increasing self-awareness of ‘optimal’ and ‘non-optimal’ movement patterns for individual athletes; (ii) learning to deliberately control the process of task execution; and (iii) correcting habitual and random errors and managing radical changes of movement patterns. Although no specific examples were provided, the ICC programme has apparently been successful in enhancing the performance of Olympic-level athletes. In this commentary, we address what we consider to be some important issues arising from the target article. We specifically focus attention on the contentious topic of optimisation in neurobiological movement systems, the role of constraints in shaping emergent movement patterns, and the functional role of movement variability in producing stable performance outcomes. In our view, the target article and, indeed, the proposed ICC programme would benefit from a dynamical systems theoretical backdrop rather than the cognitive scientific approach that appears to be advocated. Although Hanin and Hanina made reference to, and attempted to integrate, constructs typically associated with dynamical systems theoretical accounts of motor control and learning (e.g., Bernstein’s problem, movement variability), these ideas require more detailed elaboration, which we provide in this commentary.


Here we search for evidence of the existence of a sub-chondritic 142Nd/144Nd reservoir that balances the Nd isotope chemistry of the Earth relative to chondrites. If present, it may reside in the source region of deeply sourced mantle plume material. We suggest that lavas from Hawai’i with coupled elevations in 186Os/188Os and 187Os/188Os, from Iceland that represent mixing of upper mantle and lower mantle components, and from Gough with sub-chondritic 143Nd/144Nd and high 207Pb/206Pb, are favorable samples that could reflect mantle sources that have interacted with an Early-Enriched Reservoir (EER) with sub-chondritic 142Nd/144Nd. High-precision Nd isotope analyses of basalts from Hawai’i, Iceland and Gough demonstrate no discernible 142Nd/144Nd deviation from terrestrial standards. These data are consistent with previous high-precision Nd isotope analyses of recent mantle-derived samples and demonstrate that no mantle-derived material to date provides evidence for the existence of an EER in the mantle. We then evaluate mass balance in the Earth with respect to both 142Nd/144Nd and 143Nd/144Nd. The Nd isotope systematics of EERs are modeled for different sizes and times of formation relative to ε143Nd estimates of the reservoirs in the μ142Nd = 0 Earth, where μ142Nd = ((142Nd/144Nd measured / 142Nd/144Nd terrestrial standard) − 1) × 10^6 and the μ142Nd = 0 Earth is the proportion of the silicate Earth with 142Nd/144Nd indistinguishable from the terrestrial standard. The models indicate that it is not possible to balance the Earth with respect to both 142Nd/144Nd and 143Nd/144Nd unless the μ142Nd = 0 Earth has an ε143Nd within error of the present-day depleted mid-ocean ridge basalt mantle source (DMM). The 4567 Myr age 142Nd–143Nd isochron for the Earth intersects μ142Nd = 0 at ε143Nd of +8 ± 2, providing a minimum ε143Nd for the μ142Nd = 0 Earth.
The high ε143Nd of the μ142Nd = 0 Earth is confirmed by the Nd isotope systematics of Archean mantle-derived rocks, which consistently have positive ε143Nd. If the EER formed early after solar system formation (0–70 Ma), continental crust and DMM can be complementary reservoirs with respect to Nd isotopes, with no requirement for significant additional reservoirs. If the EER formed after 70 Ma, then the μ142Nd = 0 Earth must have a bulk ε143Nd more radiogenic than DMM, and additional high-ε143Nd material is required to balance the Nd isotope systematics of the Earth.
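The μ142Nd notation defined in the abstract is simply a parts-per-million deviation from the terrestrial standard, and can be sketched directly; the numeric ratios below are illustrative only, not measured values from the paper.

```python
# Minimal sketch of the mu-142Nd notation: the ppm-level deviation of a
# measured 142Nd/144Nd ratio from the terrestrial standard ratio.

def mu_142nd(measured_ratio: float, standard_ratio: float) -> float:
    """mu142Nd = ((measured / standard) - 1) * 1e6, in parts per million."""
    return (measured_ratio / standard_ratio - 1.0) * 1e6

# A sample identical to the standard plots at mu142Nd = 0 ...
print(mu_142nd(1.141838, 1.141838))                            # -> 0.0
# ... and a sample 10 ppm low in 142Nd/144Nd plots at mu142Nd ~ -10
print(round(mu_142nd(1.141838 * (1.0 - 10e-6), 1.141838), 1))  # -> -10.0
```

The ×10^6 scaling is what makes μ a ppm-level quantity; deviations of a few ppm are the signal sought in these high-precision analyses.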


While there is substantial research on attitudinal and behavioral loyalty, the deconstruction of attitudinal loyalty into its two key components – emotional and cognitive loyalty – has been largely ignored. Despite the existence of managerial strategies aimed at increasing each of these two components, there is little academic research to support these managerial efforts. This paper seeks to advance the understanding of emotional and cognitive brand loyalty by examining the psychological function that these dimensions of brand loyalty perform for the consumer. We employ Katz’s (1960) four functions of attitudes (utilitarian, knowledge, value-expression, ego-defence) to investigate this question. Surveys using a convenience sample were completed by 268 consumers in two metropolitan cities on a variety of goods, services and durable products. The relationships between the functions and the dimensions of loyalty were examined using MANOVA. The results show that both the utilitarian and knowledge functions of loyalty are significantly positively related to cognitive loyalty, while the ego-defensive function is significantly positively related to emotional loyalty. The results for the value-expressive function were non-significant.


A major focus of research in nanotechnology is the development of novel, high-throughput techniques for fabrication of arbitrarily shaped surface nanostructures of sub-100 nm to atomic scale. A related pursuit is the development of simple and efficient means for parallel manipulation and redistribution of adsorbed atoms, molecules and nanoparticles on surfaces – adparticle manipulation. These techniques will be used for the manufacture of nanoscale surface-supported functional devices in nanotechnologies such as quantum computing, molecular electronics and lab-on-a-chip, as well as for modifying surfaces to obtain novel optical, electronic, chemical, or mechanical properties. A favourable approach to the formation of surface nanostructures is self-assembly. In self-assembly, nanostructures are grown by aggregation of individual adparticles that diffuse by thermally activated processes on the surface. The passive nature of this process means it is generally not suited to the formation of arbitrarily shaped structures. The self-assembly of nanostructures at arbitrary positions has been demonstrated, though these demonstrations have typically required a pre-patterning treatment of the surface using sophisticated techniques such as electron beam lithography. On the other hand, a parallel adparticle manipulation technique would be suited to directing the self-assembly process to occur at arbitrary positions, without the need for pre-patterning the surface. There is at present a lack of techniques for parallel manipulation and redistribution of adparticles to arbitrary positions on the surface. This is an issue that needs to be addressed, since such techniques can play an important role in nanotechnology. In this thesis we propose such a technique – thermal tweezers. In thermal tweezers, adparticles are redistributed by localised heating of the surface. This locally enhances the surface diffusion of adparticles so that they rapidly diffuse away from the heated regions.
Using this technique, the redistribution of adparticles to form a desired pattern is achieved by heating the surface in specific regions. In this project we have focussed on the holographic implementation of this approach, in which the surface is heated by holographic patterns of interfering pulsed laser beams. This implementation is suitable for the formation of arbitrarily shaped structures; the only condition is that the shape can be produced by holographic means. In the simplest case, the laser pulses are linearly polarised and intersect to form an interference pattern that is a modulation of intensity along a single direction. Strong optical absorption at the intensity maxima of the interference pattern results in an approximately sinusoidal variation of the surface temperature along one direction. The main aim of this research project is to investigate the feasibility of the holographic implementation of thermal tweezers as an adparticle manipulation technique. Firstly, we investigate theoretically the surface diffusion of adparticles in the presence of sinusoidal modulation of the surface temperature. Very strong redistribution of adparticles is predicted when there is strong interaction between the adparticle and the surface and the amplitude of the temperature modulation is ~100 K. We have proposed a thin metallic film deposited on a glass substrate and heated by interfering laser beams (at optical wavelengths) as a means of generating a very large amplitude of surface temperature modulation. Indeed, we predict theoretically, by numerical solution of the thermal conduction equation, that the amplitude of the temperature modulation on the metallic film can be much greater than 100 K when heated by nanosecond pulses with an energy of ~1 mJ. The formation of surface nanostructures of less than 100 nm in width is predicted at optical wavelengths in this implementation of thermal tweezers.
Furthermore, we propose a simple extension to this technique in which a spatial phase shift of the temperature modulation effectively doubles or triples the resolution. At the same time, increased resolution is predicted by reducing the wavelength of the laser pulses. In addition, we present two distinctly different, computationally efficient numerical approaches for theoretical investigation of the surface diffusion of interacting adparticles – the Monte Carlo Interaction Method (MCIM) and the random potential well method (RPWM). Using each of these approaches we have investigated thermal tweezers for the redistribution of both strongly and weakly interacting adparticles. We have predicted that strong interactions between adparticles can increase the effectiveness of thermal tweezers, demonstrating practically complete adparticle redistribution into the low-temperature regions of the surface. This is promising from the point of view of thermal tweezers applied to the directed self-assembly of nanostructures. Finally, we present a new and more efficient numerical approach to the theoretical investigation of thermal tweezers for non-interacting adparticles. In this approach, the local diffusion coefficient is determined from solution of the Fokker-Planck equation. The diffusion equation is then solved numerically using the finite volume method (FVM) to directly obtain the probability density of adparticle position. We compare predictions of this approach to those of the Ermak algorithm solution of the Langevin equation, and relatively good agreement is shown at intermediate and high friction. In the low friction regime, we predict and investigate the phenomenon of ‘optimal’ friction and describe its occurrence due to very long jumps of adparticles as they diffuse from the hot regions of the surface. Future research directions, both theoretical and experimental, are also discussed.
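The final numerical approach described above (a local diffusion coefficient fed into a finite-volume solution of a diffusion equation) can be illustrated in one dimension. The sketch below is a reconstruction under stated assumptions, not the thesis code: the Arrhenius form of D(x), all parameter values, and the kinetic (Ito) form of the flux, whose steady state concentrates adparticles where the surface is cold, are choices made here for illustration.

```python
import math

# Illustrative 1-D sketch: a sinusoidal surface temperature sets a thermally
# activated (Arrhenius) local diffusion coefficient D(x); the probability
# density p(x, t) then evolves under dp/dt = d^2(D p)/dx^2 (kinetic form,
# steady state p ~ 1/D, i.e. adparticles pile up in cold regions),
# discretised with a conservative explicit finite volume scheme.

N = 100                        # finite volumes over one interference period
L = 1.0e-6                     # period of the temperature modulation (m)
dx = L / N
T0, dT = 300.0, 100.0          # mean temperature and modulation amplitude (K)
kB = 1.381e-23                 # Boltzmann constant (J/K)
Ea = 0.5 * 1.602e-19           # assumed diffusion barrier, ~0.5 eV (J)
D0 = 1.0e-7                    # assumed Arrhenius prefactor (m^2/s)

x = [(i + 0.5) * dx for i in range(N)]
T = [T0 + dT * math.cos(2.0 * math.pi * xi / L) for xi in x]   # hot at x = 0
D = [D0 * math.exp(-Ea / (kB * Ti)) for Ti in T]

p = [1.0 / N] * N              # start from a uniform distribution
dt = 0.2 * dx * dx / max(D)    # stable explicit time step

for _ in range(5000):
    q = [D[i] * p[i] for i in range(N)]
    # periodic, conservative update: p_i += dt/dx^2 * (q_{i-1} - 2 q_i + q_{i+1})
    p = [p[i] + dt / (dx * dx) * (q[i - 1] - 2.0 * q[i] + q[(i + 1) % N])
         for i in range(N)]

print(round(sum(p), 6))        # total probability is conserved -> 1.0
print(p[N // 2] > p[0])        # the cold midpoint now holds more than the hot edge
```

Because the scheme updates cell averages from face-telescoping differences, total probability is conserved to machine precision, which is the main practical advantage of the FVM formulation mentioned in the abstract.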


Decline in the frequency of potent mesenchymal stem cells (MSCs) has been implicated in ageing and degenerative diseases. Increasing the circulating stem cell population can lead to renewed recruitment of these potent cells at sites of damage. Therefore, identifying the ideal cells for ex vivo expansion will form a major pursuit of clinical applications. This study is a follow-up of previous work that demonstrated the occurrence of fast-growing multipotential cells in bone marrow samples. To investigate the molecular processes underlying the existence of such varying populations, gene expression studies were performed between fast- and slow-growing clonal populations to identify potential genetic markers associated with stemness, using quantitative real-time polymerase chain reaction across a series of 84 genes related to stem cell pathways. A group of 10 genes were commonly overrepresented in the fast-growing stem cell clones. These included genes that encode proteins involved in the maintenance of embryonic and neural stem cell renewal (sex-determining region Y-box 2, notch homolog 1, and delta-like 3), proteins associated with chondrogenesis (aggrecan and collagen 2A1), growth factors (bone morphogenetic protein 2 and insulin-like growth factor 1), an endodermal organogenesis protein (forkhead box a2), and proteins associated with cell-fate specification (fibroblast growth factor 2 and cell division cycle 2). The expression of diverse differentiation genes in MSC clones suggests that these commonly expressed genes may confer the maintenance of multipotentiality and self-renewal of MSCs.


In this thesis an investigation into theoretical models for the formation and interaction of nanoparticles is presented. The work includes a literature review of current models followed by five chapters of original research. This thesis has been submitted in partial fulfilment of the requirements for the degree of doctor of philosophy by publication, and each of the five chapters therefore consists of a peer-reviewed journal article. The thesis concludes with a discussion of what has been achieved during the PhD candidature, the potential applications of this research, and ways in which the research could be extended in the future. In this thesis we explore stochastic models pertaining to the interaction and evolution mechanisms of nanoparticles. In particular, we explore in depth the stochastic evaporation of molecules due to thermal activation and its ultimate effect on nanoparticle sizes and concentrations. Secondly, we analyse the thermal vibrations of nanoparticles suspended in a fluid and subject to standing oscillating drag forces (as would occur in a standing sound wave), and finally on lattice surfaces in the presence of high heat gradients. We have described in this thesis a number of new models for the description of multi-compartment networks joined by multiple, stochastically evaporating links. The primary motivation for this work is the description of thermal fragmentation, in which multiple molecules holding parts of a carbonaceous nanoparticle may evaporate. Ultimately, these models predict the rate at which the network or aggregate fragments into smaller networks/aggregates and with what aggregate size distribution. The models are highly analytic and describe the fragmentation of a link holding multiple bonds using Markov processes that best describe different physical situations, and these processes have been analysed using a number of mathematical methods.
The fragmentation of the network/aggregate is then predicted using combinatorial arguments. Whilst there is some scepticism in the scientific community pertaining to the proposed mechanism of thermal fragmentation, we have presented compelling evidence in this thesis supporting the currently proposed mechanism and shown that our models can accurately match experimental results. This was achieved using a realistic simulation of the fragmentation of the fractal carbonaceous aggregate structure using our models. Furthermore, in this thesis a method of manipulation using acoustic standing waves is investigated. In our investigation we analysed the effect of frequency and particle size on the ability of a particle to be manipulated by means of a standing acoustic wave. In our results, we report the existence of a critical frequency for a particular particle size. This frequency is inversely proportional to the Stokes time of the particle in the fluid. We also find that for large frequencies the subtle Brownian motion of even larger particles plays a significant role in the efficacy of the manipulation. This is due to the decreasing size of the boundary layer between acoustic nodes. Our model utilises a multiple-time-scale approach to calculate the long-term effects of the standing acoustic field on the particles interacting with the sound. These effects are then combined with the effects of Brownian motion in order to obtain a complete mathematical description of the particle dynamics in such acoustic fields. Finally, in this thesis we develop a numerical routine for the description of “thermal tweezers”. Currently, the technique of thermal tweezers is predominantly theoretical; however, there have been a handful of successful experiments which demonstrate the effect in practice. Thermal tweezers is the name given to the way in which particles can be easily manipulated on a lattice surface by careful selection of a heat distribution over the surface.
Typically, theoretical simulations of the effect can be rather time-consuming, with supercomputer facilities processing data over days or even weeks. Our alternative numerical method for the simulation of particle distributions pertaining to the thermal tweezers effect uses the Fokker-Planck equation to derive a quick numerical method for the calculation of the effective diffusion constant as a result of the lattice and the temperature. We then use this diffusion constant and solve the diffusion equation numerically using the finite volume method. This saves the algorithm from calculating many individual particle trajectories, since it describes the flow of the probability distribution of particles in a continuous manner. The alternative method outlined in this thesis can produce a large quantity of accurate results on a household PC in a matter of hours, which is much better than was previously achievable.
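The simplest Markov process of the kind described for link fragmentation, a link held together by several bonds, each evaporating independently at a constant thermally activated rate, has a closed-form mean failure time that a short Monte Carlo check can confirm. The function names, rate, and bond count below are illustrative assumptions, not the thesis's actual model parameters.

```python
import random

# Hedged sketch of the simplest bond-evaporation Markov process: a link held
# by n bonds, each evaporating independently at rate lam. The link fails when
# the last bond goes, so the failure time is the maximum of n exponential
# lifetimes, with mean (1/lam) * (1 + 1/2 + ... + 1/n).

def link_failure_time(n_bonds: int, lam: float, rng: random.Random) -> float:
    """Sample the time until all n bonds have evaporated."""
    return max(rng.expovariate(lam) for _ in range(n_bonds))

def mean_failure_time(n_bonds: int, lam: float) -> float:
    """Analytic mean: the n-th harmonic number divided by the rate."""
    return sum(1.0 / k for k in range(1, n_bonds + 1)) / lam

rng = random.Random(1)
n, lam, trials = 5, 2.0, 20000
mc = sum(link_failure_time(n, lam, rng) for _ in range(trials)) / trials
print(round(mean_failure_time(n, lam), 3))   # (1 + 1/2 + ... + 1/5) / 2 = 1.142
print(round(mc, 3))                          # Monte Carlo estimate, close to analytic
```

The harmonic-number mean follows because the waiting time from k surviving bonds to k − 1 is exponential with rate kλ, and the total failure time is the sum of these independent stages.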


Background: Altered mechanical properties of the heel pad have been implicated in the development of plantar heel pain. However, the in vivo properties of the heel pad during gait remain largely unexplored in this cohort. The aim of the current study was to characterise the bulk compressive properties of the heel pad in individuals with and without plantar heel pain while walking.
Methods: The sagittal thickness and axial compressive strain of the heel pad were estimated in vivo from dynamic lateral foot radiographs acquired from nine subjects with unilateral plantar heel pain and an equivalent number of matched controls, while walking at their preferred speed. Compressive stress was derived from simultaneously acquired plantar pressure data. Principal viscoelastic parameters of the heel pad, including peak strain, secant modulus and energy dissipation (hysteresis), were estimated from the resulting stress–strain curves.
Findings: There was no significant difference in loaded and unloaded heel pad thickness, peak stress, peak strain, or secant and tangent modulus between subjects with and without heel pain. However, the fat pad of symptomatic feet had a significantly lower energy dissipation ratio (0.55 ± 0.17 vs. 0.69 ± 0.08) when compared to asymptomatic feet (P < .05).
Interpretation: Plantar heel pain is characterised by a reduced energy dissipation ratio of the heel pad when measured in vivo and under physiologically relevant strain rates.
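The energy dissipation ratio reported above is a standard hysteresis measure: the fraction of loading energy not returned during unloading, with each energy taken as the area under the corresponding limb of the stress-strain loop. The sketch below illustrates that calculation with synthetic data; the function names and all numeric values are illustrative assumptions, not the study's data or code.

```python
# Illustrative sketch (not the authors' analysis): the energy dissipation
# ratio of a stress-strain loop, computed as 1 - (unloading energy / loading
# energy), with energies obtained by trapezoidal integration over strain.

def trapezoid_area(strain, stress):
    """Area under a stress-strain curve (strain energy density)."""
    return sum(0.5 * (stress[i] + stress[i + 1]) * (strain[i + 1] - strain[i])
               for i in range(len(strain) - 1))

def energy_dissipation_ratio(load_strain, load_stress, unload_strain, unload_stress):
    loading = trapezoid_area(load_strain, load_stress)
    unloading = trapezoid_area(unload_strain, unload_stress)
    return 1.0 - unloading / loading

# Synthetic loop: the unloading curve lies below the loading curve,
# so part of the loading energy is dissipated in the tissue.
load_e = [0.0, 0.1, 0.2, 0.3, 0.4]           # compressive strain
load_s = [0.0, 10.0, 30.0, 60.0, 100.0]      # stress (kPa), illustrative
unload_s = [0.0, 4.0, 15.0, 35.0, 100.0]     # returns to zero along a lower path
print(round(energy_dissipation_ratio(load_e, load_s, load_e, unload_s), 2))  # -> 0.31
```

A ratio of 0 would mean a perfectly elastic pad (all energy returned), while values in the 0.5 to 0.7 range, as reported in the abstract, indicate that most of the compressive energy is dissipated per step.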


Reforms to the national research and research training system by the Commonwealth Government of Australia sought to effectively connect research conducted in universities to Australia's national innovation system. Research training has a key role in ensuring an adequate supply of highly skilled people for the national innovation system. During their studies, research students produce and disseminate a massive amount of new knowledge. Prior to this study, there was no research that examined the contribution of research training to Australia's national innovation system despite the existence of policy initiatives aiming to enhance this contribution. Given Australia's below average (but improving) innovation performance compared to other OECD countries, the inclusion of Finland and the United States provided further insights into the key research question. This study examined three obvious ways that research training contributes to the national innovation systems in the three countries: the international mobility and migration of research students and graduates, knowledge production and distribution by research students, and the impact of research training as advanced human capital formation on economic growth. Findings have informed the concept of a research training culture of innovation that aims to enhance the contribution of research training to Australia's national innovation system. Key features include internationally competitive research and research training environments; research training programs that equip students with economically-relevant knowledge and the capabilities required by employers operating in knowledge-based economies; attractive research careers in different sectors; a national commitment to R&D as indicated by high levels of gross and business R&D expenditure; high private and social rates of return from research training; and the horizontal coordination of key organisations that create policy for, and/or invest in research training.

Abstract:

Ongoing financial, environmental and political adjustments have shifted the role of large international airports. Many airports are expanding from a narrow focus on operating as transportation centres to becoming economic hubs. By working together, airports and other industry sectors can contribute to and facilitate not only economic prosperity but also social advantage for local and regional areas in new ways. This transformation of the function and orientation of airports has been termed the aerotropolis or airport metropolis, in which the airport is recognised as an economic centre with land uses that link local and global markets. This chapter contends that the conversion of an airport into a sustainable airport metropolis requires more than industry clustering and the existence of hard physical infrastructure: attention must also be directed to the creation and ongoing development of social infrastructure within proximate areas, and to maximising connectivity flows within and between infrastructure elements. It concludes that establishing an interactive and interdependent trilogy of hard, soft and social infrastructures provides the balance the airport metropolis needs to ensure sustainable development. The chapter provides the start of an operating framework for integrating and harnessing this infrastructure trilogy to achieve optimal and sustainable social and economic advantage from airport cities.

Abstract:

In the student learning literature, the traditional view holds that when students are faced with heavy workload, poor teaching, and content they cannot relate to – important aspects of the learning context – they are more likely to adopt a surface approach to learning because of stress, lack of understanding, and a lack of perceived relevance of the content (Kreber, 2003; Lizzio, Wilson, & Simons, 2002; Ramsden, 1989; Ramsden, 1992; Trigwell & Prosser, 1991; Vermunt, 2005). For example, in studies involving health and medical sciences students, courses that utilised student-centred, problem-based approaches to teaching and learning were found to elicit a deeper approach to learning than the teacher-centred, transmissive approach (Patel, Groen, & Norman, 1991; Sadlo & Richardson, 2003). It is generally accepted that the line of causation runs from the learning context (or rather, students' self-reported data on the learning context) to students' learning approaches; that is, it is the learning context as revealed by students' self-reported data that elicits the associated learning behaviour. However, other studies have found that the same teaching and learning environment can be perceived differently by different students. In a study of students' perceptions of assessment requirements, Sambell and McDowell (1998) found that students "are active in the reconstruction of the messages and meanings of assessment" (p. 391), and that their interpretations are greatly influenced by their past experiences and motivations. In a qualitative study of Hong Kong tertiary students, Kember (2004) found that students using a surface learning approach reported a heavier workload than students using a deep learning approach. According to Kember, if students learn by extracting meaning from the content and making connections, they are more likely to see the higher-order intentions embodied in the content and the high cognitive abilities being assessed.
On the other hand, if they rote-learn for the graded task, they fail to see the hierarchical relationships in the content and to connect the information. These rote-learners tend to see the assessment as requiring memorisation and regurgitation of a large amount of unconnected knowledge, which explains why they experience a high workload. Kember (2004) thus postulates that it is the learning approach that influences how students perceive workload. Campbell and her colleagues made a similar observation in their interview study of secondary students' perceptions of teaching in the same classroom (Campbell et al., 2001). The above discussion suggests that students' learning approaches can influence their perceptions of assessment demands and of other aspects of the learning context, such as the relevance of content and teaching effectiveness. In other words, perceptions of elements in the teaching and learning context are endogenously determined. This study investigated the causal relationships, at the individual level, between learning approaches and perceptions of the learning context in economics education. Students' learning approaches and their perceptions of the learning context were measured; the elements of the learning context investigated were teaching effectiveness, workload and content. The authors are aware of the existence of other elements of the learning context, such as generic skills, goal clarity and career preparation; these aspects, however, were beyond the scope of the present study and were therefore not investigated.

Abstract:

Contemporary urban form, particularly in the cities of South Africa, lacks distinction and quality. The majority of developments are conceived as private, dislocated initiatives: surveilled enclaves whose gated access is the only conduit to the outside world. Any concern for making a positive contribution to the matrix of public activity is seldom a consideration. The urban form responds to the perception that traffic systems are paramount to the successful flux of the city in satisfying the escalating demands of vehicular movement. In contrast, many of the great historical urban centres of Europe, the Americas and the Sub-Continent are admired and considered the ultimate models of urban experience. The colonnades, bazaars and boulevards hosting an abundance of street activity are characteristic of such centres and are symptomatic of city growth based on pedestrian movement patterns: an urbanism supportive of human interaction and exchange, a form which has nurtured the existence of a public realm. Through an understanding of the principles of traditional urbanism, we may learn that the modernist paradigm of contemporary suburbia has resulted in disconnected and separate land uses, with isolated districts in which reliance on the car is essential rather than optional.

Abstract:

This paper presents advanced optimisation techniques for Mission Path Planning (MPP) of a UAS fitted with a spore trap to detect and monitor spores and plant pathogens. The MPP aims to optimise the search and monitoring paths for spores and plant pathogens, which may allow the agricultural sector to be more competitive and more reliable. The UAV will be fitted with an air-sampling device, or spore trap, to detect and monitor spores and plant pathogens in remote areas not accessible to current stationary monitoring methods. The optimal paths are computed using Multi-Objective Evolutionary Algorithms (MOEAs). Two types of multi-objective optimiser are compared: the MOEA Non-dominated Sorting Genetic Algorithm II (NSGA-II) and a Hybrid Game strategy, each implemented to produce a set of optimal collision-free trajectories in a three-dimensional environment. The trajectories over three-dimensional terrain, which are generated off-line, are collision-free and are represented using Bézier spline curves from the start position to the target, and then from the target back to the start position (or to a different position), subject to altitude constraints. The efficiency of the two optimisation methods is compared in terms of computational cost and design quality. Numerical results show the benefits of coupling a Hybrid-Game strategy to an MOEA for MPP tasks. The reduction in numerical cost is important: the faster the algorithm converges, the better suited it is for off-line design and for future on-line decisions by the UAV.
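Two of the building blocks this abstract mentions, Bézier curve trajectories and the Pareto-dominance test underlying NSGA-II's non-dominated sorting, can be sketched as follows. This is a minimal illustration under assumed conventions, not the authors' implementation; all function names, control points and objective vectors here are hypothetical.

```python
# Minimal sketch (not the paper's code) of two MPP ingredients:
# (1) evaluating a 3-D cubic Bézier curve for a candidate flight path,
# (2) the Pareto-dominance test at the core of NSGA-II sorting.
from math import dist

def bezier_point(p0, p1, p2, p3, t):
    """De Casteljau evaluation of a cubic Bézier curve at parameter t in [0, 1]."""
    lerp = lambda a, b, u: tuple(ai + u * (bi - ai) for ai, bi in zip(a, b))
    a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    return lerp(lerp(a, b, t), lerp(b, c, t), t)

def path_length(control_points, samples=100):
    """Approximate curve length by summing distances between sampled points."""
    pts = [bezier_point(*control_points, i / samples) for i in range(samples + 1)]
    return sum(dist(p, q) for p, q in zip(pts, pts[1:]))

def dominates(f_a, f_b):
    """True if objective vector f_a Pareto-dominates f_b (minimisation)."""
    return all(a <= b for a, b in zip(f_a, f_b)) and \
           any(a < b for a, b in zip(f_a, f_b))

# Illustrative path: the two inner control points lie on the start-target
# segment, so this particular curve degenerates to a straight line.
start, target = (0.0, 0.0, 50.0), (300.0, 400.0, 50.0)
ctrl = (start, (75.0, 100.0, 50.0), (225.0, 300.0, 50.0), target)
print(round(path_length(ctrl), 1))  # → 500.0, the straight-line distance
```

In an MOEA, each candidate trajectory would be scored on objectives such as path length and risk, and `dominates` would be applied pairwise to rank the population into non-dominated fronts.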