Abstract:
This thesis aimed to investigate the way in which distance runners modulate their speed in an effort to understand the key processes and determinants of speed selection when encountering hills in natural outdoor environments. One factor which has limited the expansion of knowledge in this area has been a reliance on the motorized treadmill, which constrains runners to constant speeds and gradients and only linear paths. Conversely, limits in the portability or storage capacity of available technology have restricted field research to brief durations and level courses. Therefore, another aim of this thesis was to evaluate the capacity of lightweight, portable technology to measure running speed over outdoor undulating terrain. The first study of this thesis assessed the validity of a non-differential GPS to measure speed, displacement and position during human locomotion. Three healthy participants walked and ran over straight and curved courses for 59 and 34 trials, respectively. A non-differential GPS receiver provided speed data by Doppler shift and by change in GPS position over time, which were compared with actual speeds determined by chronometry. Displacement data from the GPS were compared with a surveyed 100 m section, while static positions were collected for 1 hour and compared with the known geodetic point. GPS speed values on the straight course were closely correlated with actual speeds (Doppler shift: r = 0.9994, p < 0.001; Δ GPS position/time: r = 0.9984, p < 0.001). Actual speed errors were lowest using the Doppler shift method (90.8% of values within ± 0.1 m.s-1). Speed was slightly underestimated on the curved path, though still highly correlated with actual speed (Doppler shift: r = 0.9985, p < 0.001; Δ GPS distance/time: r = 0.9973, p < 0.001). Distance measured by GPS was 100.46 ± 0.49 m, while 86.5% of static points were within 1.5 m of the actual geodetic point (mean error: 1.08 ± 0.34 m; range 0.69-2.10 m).
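The Δ GPS position/time method above can be sketched directly: speed is the ground distance between successive fixes divided by the elapsed time. A minimal illustration follows; the haversine distance formula and the (t, lat, lon) fix format are assumptions of this sketch, not details reported by the study.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude fixes."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def position_speeds(fixes):
    """Speed (m/s) between successive (t_seconds, lat, lon) fixes."""
    out = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:]):
        out.append(haversine_m(la0, lo0, la1, lo1) / (t1 - t0))
    return out
```

The Doppler shift method, by contrast, derives speed from the frequency shift of the satellite carrier signal and needs no positional differencing, which is one reason it showed the lower error here.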
Non-differential GPS demonstrated a highly accurate estimation of speed across a wide range of human locomotion velocities using only the raw signal data, with a minimal decrease in accuracy around bends. This high level of resolution was matched by accurate displacement and position data. Coupled with reduced size, cost and ease of use, a non-differential receiver offers a valid alternative to differential GPS in the study of overground locomotion. The second study of this dissertation examined speed regulation during overground running on a hilly course. Following an initial laboratory session to determine physiological thresholds (VO2 max and ventilatory thresholds), eight experienced long distance runners completed a self-paced time trial over three laps of an outdoor course involving uphill, downhill and level sections. A portable gas analyser, GPS receiver and activity monitor were used to collect physiological, speed and stride frequency data. Participants ran 23% slower on uphills and 13.8% faster on downhills compared with level sections. Speeds on level sections were significantly different for 78.4 ± 7.0 seconds following an uphill and 23.6 ± 2.2 seconds following a downhill. Speed changes were primarily regulated by stride length, which was 20.5% shorter uphill and 16.2% longer downhill, while stride frequency was relatively stable. Oxygen consumption averaged 100.4% of runners' individual ventilatory thresholds on uphills, 78.9% on downhills and 89.3% on level sections. Group-level speed was well predicted using a modified gradient factor (r2 = 0.89). Individuals adopted distinct pacing strategies, both across laps and as a function of gradient. Speed was best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption (VO2) limited runners' speeds only on uphill sections, and was maintained in line with individual ventilatory thresholds.
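The idea of a weighted factor combining prior and current gradients can be illustrated with a toy model. The linear form, the weight `w` and the slope `k` below are illustrative assumptions for the sketch, not the thesis's fitted coefficients.

```python
def effective_gradient(prior_grad, current_grad, w=0.3):
    """Blend the previous section's gradient with the current one.
    Gradients are fractions (0.05 = 5% uphill, negative = downhill).
    The blending weight w is a hypothetical value."""
    return w * prior_grad + (1 - w) * current_grad

def predicted_speed(level_speed, prior_grad, current_grad, k=2.5, w=0.3):
    """Toy speed model: speed falls linearly with the effective gradient.
    k scales how strongly gradient slows (uphill) or speeds (downhill)
    the runner relative to level running speed."""
    return level_speed * (1 - k * effective_gradient(prior_grad, current_grad, w))
```

For a 4.0 m/s level runner, a 5% uphill after a level section yields a lower predicted speed than the same gradient approached from a downhill, capturing the carry-over effect of the preceding section.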
Running speed showed larger individual variation on downhill sections, while speed on the level was systematically influenced by the preceding gradient. Runners who varied their pace more as a function of gradient showed a more consistent level of oxygen consumption. These results suggest that optimising time on the level sections after hills offers the greatest potential to minimise overall time when running over undulating terrain. The third study of this thesis investigated the effect of implementing an individualised pacing strategy on running performance over an undulating course. Six trained distance runners completed three trials involving four laps (9968 m) of an outdoor course involving uphill, downhill and level sections. The initial trial was self-paced in the absence of any temporal feedback. For the second and third field trials, runners were paced for the first three laps (7476 m) according to two different regimes (Intervention or Control) by matching desired goal times for subsections within each gradient. The fourth lap (2492 m) was completed without pacing. Goals for the Intervention trial were based on findings from study two using a modified gradient factor and elapsed distance to predict the time for each section. To maintain the same overall time across all paced conditions, times were proportionately adjusted according to split times from the self-paced trial. The alternative pacing strategy (Control) used the original split times from this initial trial. Five of the six runners increased their range of uphill to downhill speeds on the Intervention trial by more than 30%, but this was unsuccessful in achieving a more consistent level of oxygen consumption, with only one runner showing a change of more than 10%. Group-level adherence to the Intervention strategy was lowest on downhill sections. Three runners successfully adhered to the Intervention pacing strategy, which was gauged by a low Root Mean Square error across subsections and gradients.
Of these three, the two who had the largest change in uphill-downhill speeds ran their fastest overall time. This suggests that, for some runners, the strategy of varying speeds systematically to account for gradients and transitions may benefit race performances on courses involving hills. In summary, a non-differential receiver was found to offer highly accurate measures of speed, distance and position across the range of human locomotion speeds. Self-selected speed was found to be best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption limited runners' speeds only on uphills, speed on the level was systematically influenced by preceding gradients, while there was much larger individual variation on downhill sections. Individuals were found to adopt distinct but unrelated pacing strategies as a function of durations and gradients, while runners who varied pace more as a function of gradient showed a more consistent level of oxygen consumption. Finally, the implementation of an individualised pacing strategy to account for gradients and transitions greatly increased runners' range of uphill-downhill speeds and improved performance in some runners. The efficiency of various gradient-speed trade-offs and the factors limiting faster downhill speeds will, however, require further investigation to improve the effectiveness of the suggested strategy.
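Adherence to the pacing strategy in the third study was gauged by a low Root Mean Square error across subsections; a minimal sketch of such a metric over goal and actual section times:

```python
def rms_error(goal_times, actual_times):
    """Root-mean-square deviation (seconds) between goal and actual
    section times; lower values indicate closer adherence to the plan."""
    diffs = [a - g for g, a in zip(goal_times, actual_times)]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5
```

A runner who is 3 s slow on one section and 3 s fast on another has an RMS error of 3.0 s, even though the signed deviations cancel, which is why RMS rather than mean error is the natural adherence measure.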
Abstract:
The closure of large institutions for people with intellectual disability and the subsequent shift to community living has been a feature of social policies in most western democracies for more than two decades. While the move from congregated settings to homes in the community has been heralded as a positive and desirable strategy, deinstitutionalisation has continued to be a controversial policy and practice. This research critically analyses the implementation of a deinstitutionalisation policy called Institutional Reform in the state of Queensland from May 1994 until it was dismantled under a new government in the middle of 1996. A trajectory study of the policy from early conceptualisation through its development, implementation and final extinction was undertaken. Several methods were utilised in the research, including the textual analysis of policy documents, discussion papers and newspaper articles, interviews with stakeholders, and participant observation. The research draws on theories of discourse and focuses on how discourses of disability shape policy and practice. The thesis outlines a number of implications for policy implementation more generally as well as for disability services. In particular, the theoretical framework builds on Fulcher's (1989) disabling discourses - medical, charity, lay and rights - and identifies two additional discourses of economics and inclusion. The thesis argues that competing disability discourses operated in powerful ways to shape the implementation of the policy and illustrates how older discourses based on fear and prejudice were promoted to positions of dominance and power.
Abstract:
This thesis is a problematisation of the teaching of art to young children. To problematise a domain of social endeavour is, in Michel Foucault's terms, to ask how we come to believe that "something ... can and must be thought" (Foucault, 1985:7). The aim is to document what counts (i.e., what is sayable, thinkable, feelable) as proper art teaching in Queensland at this point of historical time. In this sense, the thesis is a departure from more recognisable research on 'more effective' teaching, including critical studies of art teaching and early childhood teaching. It treats 'good teaching' as an effect of moral training made possible through disciplinary discourses organised around certain epistemic rules at a particular place and time. There are four key tasks accomplished within the thesis. The first is to describe an event which is not easily resolved by means of orthodox theories or explanations, either liberal-humanist or critical ones. The second is to indicate how poststructuralist understandings of the self and social practice enable fresh engagements with uneasy pedagogical moments. What follows this discussion is the documentation of an empirical investigation that was made into texts generated by early childhood teachers, artists and parents about what constitutes 'good practice' in art teaching. Twenty-two participants produced text to tell and re-tell the meaning of 'proper' art education, from different subject positions. Rather than attempting to capture 'typical' representations of art education in the early years, a pool of 'exemplary' teachers, artists and parents was chosen using "purposeful sampling", and from this pool, three videos were filmed and later discussed by the audience of participants.
The fourth aspect of the thesis involves developing a means of analysing these texts in such a way as to allow a 're-description' of the field of art teaching by attempting to foreground the epistemic rules through which such teacher-generated texts come to count as true, i.e., as propriety in art pedagogy. This analysis drew on Donna Haraway's (1995) understanding of 'ironic' categorisation to hold the tensions within the propositions inside the categories of analysis rather than setting these up as discursive oppositions. The analysis is therefore ironic in the sense that Richard Rorty (1989) understands the term to apply to social scientific research. Three 'ironic' categories were argued to inform the discursive construction of 'proper' art teaching. It is argued that a teacher should (a) Teach without teaching; (b) Manufacture the natural; and (c) Train for creativity. These ironic categories work to undo modernist assumptions about theory/practice gaps and finding a 'balance' between oppositional binary terms. They were produced through a discourse theoretical reading of the texts generated by the participants in the study, texts that these same individuals use as a means of discipline and self-training as they work to teach properly. In arguing the usefulness of such approaches to empirical data analysis, the thesis challenges early childhood research in arts education, in relation to its capacity to deal with ambiguity and to acknowledge contradiction in the work of teachers and in their explanations for what they do. It works as a challenge at a range of levels - at the level of theorising, of method and of analysis. In opening up thinking about normalised categories, and questioning traditional Western philosophy and the grand narratives of early childhood art pedagogy, it makes a space for re-thinking art pedagogy as "a game of truth and error" (Foucault, 1985). In doing so, it opens up a space for thinking how art education might be otherwise.
Abstract:
Following the position of Beer and Burrows (2007), this paper poses a re-conceptualization of Web 2.0 interaction in order to understand the properties of action possibilities in and of Web 2.0. The paper discusses the positioning of Web 2.0 social interaction in light of current descriptions, which point toward the capacities of technology in the production of social affordances within that domain (Bruns 2007; Jenkins 2006; O'Reilly 2005). While this diminishes the agency and reflexivity of users of Web 2.0, it also inadvertently positions tools as the central driver for the interactive potential available (Everitt and Mills 2009; van Dijck 2009). In doing so, it neglects the possibility that participants may be more involved in the production of Web 2.0 than the technology that underwrites it. It is this aspect of Web 2.0 that is questioned in the study, with particular interest in how an analytical option may be made available to broaden the scope of investigations into Web 2.0 to include a study of the capacity for an interactive potential in light of how action possibilities are presented to users through communication with others (Bonderup Dohn 2009).
Abstract:
Developmental progression and differentiation of distinct cell types depend on the regulation of gene expression in space and time. Tools that allow spatial and temporal control of gene expression are crucial for the accurate elucidation of gene function. Most systems to manipulate gene expression allow control of only one factor, space or time, and currently available systems that control both temporal and spatial expression of genes have their limitations. We have developed a versatile two-component system that overcomes these limitations, providing reliable, conditional gene activation in restricted tissues or cell types. This system allows conditional tissue-specific ectopic gene expression and provides a tool for conditional cell type- or tissue-specific complementation of mutants. The chimeric transcription factor XVE, in conjunction with Gateway recombination cloning technology, was used to generate a tractable system that can efficiently and faithfully activate target genes in a variety of cell types. Six promoters/enhancers, each with different tissue specificities (including vascular tissue, trichomes, root, and reproductive cell types), were used in activation constructs to generate different expression patterns of XVE. Conditional transactivation of reporter genes was achieved in a predictable, tissue-specific pattern of expression, following the insertion of the activator or the responder T-DNA in a wide variety of positions in the genome. Expression patterns were faithfully replicated in independent transgenic plant lines. Results demonstrate that we can also induce mutant phenotypes using conditional ectopic gene expression. One of these mutant phenotypes could not have been identified using noninducible ectopic gene expression approaches.
Abstract:
Aim: This paper is a report of a study of variations in the pattern of nurse practitioner work in a range of service fields and geographical locations, across direct patient care, indirect patient care and service-related activities. Background: The nurse practitioner role has been implemented internationally as a service reform model to improve the access and timeliness of health care. There is a substantial body of research into the nurse practitioner role and service outcomes, but scant information on the pattern of nurse practitioner work and how this is influenced by different service models. --------- Methods: We used work sampling methods. Data were collected between July 2008 and January 2009. Observations were recorded from a random sample of 30 nurse practitioners at 10-minute intervals in 2-hour blocks randomly generated to cover two weeks of work time from a sampling frame of six weeks. --------- Results: A total of 12,189 individual observations were conducted with nurse practitioners across Australia. Thirty individual activities were identified as describing nurse practitioner work, and these were distributed across three categories. Direct care accounted for 36.1% of how nurse practitioners spend their time, indirect care accounted for 32.2% and service-related activities made up 31.9%. --------- Conclusion: These findings provide useful baseline data for evaluation of nurse practitioner positions and the service effect of these positions. However, the study also raises questions about the best use of nurse practitioner time and the influences of barriers to and facilitators of this model of service innovation.
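Work sampling estimates each category's share of time from the fraction of sampled observations that fall in it. A sketch of the point estimate with a standard normal-approximation confidence interval follows; the interval formula is a textbook work-sampling tool, not something reported in the paper, and the count used in the example is back-calculated from the reported 36.1% for illustration.

```python
import math

def activity_proportion(category_count, total_obs, z=1.96):
    """Point estimate and approximate 95% confidence interval for the
    share of time spent in one activity category under work sampling.
    Uses the normal approximation to the binomial proportion."""
    p = category_count / total_obs
    se = math.sqrt(p * (1 - p) / total_obs)  # standard error of proportion
    return p, (p - z * se, p + z * se)
```

With roughly 12,189 observations, the half-width of the interval for a proportion near 36% is under one percentage point, which is what makes work sampling at this scale a credible basis for baseline comparisons.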
Abstract:
Bearing damage in modern inverter-fed AC drive systems is more common than in motors working with 50 or 60 Hz power supply. Fast switching transients and common mode voltage generated by a PWM inverter cause unwanted shaft voltage and resultant bearing currents. Parasitic capacitive coupling creates a path to discharge current in rotors and bearings. In order to analyze bearing current discharges and their effect on bearing damage under different conditions, calculation of the capacitive coupling between the outer and inner races is needed. During motor operation, the distances between the balls and races may change the capacitance values. Due to changes in the thickness and spatial distribution of the lubricating grease, this capacitance does not have a constant value and is known to change with speed and load. Thus, the resultant electric field between the races and balls varies with motor speed. The lubricating grease in the ball bearing cannot withstand high voltages, and a short circuit through the lubricating grease can occur. At low speeds, because of gravity, the balls and shaft may shift down and the system (ball positions and shaft) will be asymmetric. In this study, two different asymmetric cases (asymmetric ball position, asymmetric shaft position) are analyzed and the results are compared with the symmetric case. The objective of this paper is to calculate the capacitive coupling and electric fields between the outer and inner races and the balls at different motor speeds in symmetrical and asymmetrical shaft and ball positions. The analysis is carried out using finite element simulations to determine the conditions which will increase the probability of high rates of bearing failure due to current discharges through the balls and races.
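As a rough intuition for why the lubricant film thickness drives the ball-race capacitance, a parallel-plate approximation can be used. The flat-plate geometry and the relative permittivity value below are simplifying assumptions for illustration only; the study itself relies on finite element models of the real curved geometry.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def film_capacitance(area_m2, film_thickness_m, eps_r=3.0):
    """Parallel-plate approximation of a ball-race contact capacitance.
    eps_r is an assumed relative permittivity for the grease film.
    Thinner films (higher speed/load changes) give larger capacitance."""
    return eps_r * EPS0 * area_m2 / film_thickness_m
```

The inverse dependence on thickness is the key point: as the film thins, the capacitance rises and the field across the film intensifies, raising the chance that the grease breaks down and a discharge current flows.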
Abstract:
This thesis addresses the problem of detecting and describing the same scene points in different wide-angle images taken by the same camera at different viewpoints. This is a core competency of many vision-based localisation tasks including visual odometry and visual place recognition. Wide-angle cameras have a large field of view that can exceed a full hemisphere, and the images they produce contain severe radial distortion. When compared to traditional narrow field of view perspective cameras, more accurate estimates of camera egomotion can be found using the images obtained with wide-angle cameras. The ability to accurately estimate camera egomotion is a fundamental primitive of visual odometry, and this is one of the reasons for the increased popularity in the use of wide-angle cameras for this task. Their large field of view also enables them to capture images of the same regions in a scene taken at very different viewpoints, and this makes them suited for visual place recognition. However, the ability to estimate the camera egomotion and recognise the same scene in two different images is dependent on the ability to reliably detect and describe the same scene points, or ‘keypoints’, in the images. Most algorithms used for this purpose are designed almost exclusively for perspective images. Applying algorithms designed for perspective images directly to wide-angle images is problematic as no account is made for the image distortion. The primary contribution of this thesis is the development of two novel keypoint detectors, and a method of keypoint description, designed for wide-angle images. Both reformulate the Scale-Invariant Feature Transform (SIFT) as an image processing operation on the sphere. As the image captured by any central projection wide-angle camera can be mapped to the sphere, applying these variants to an image on the sphere enables keypoints to be detected in a manner that is invariant to image distortion.
Each of the variants is required to find the scale-space representation of an image on the sphere, and they differ in the approaches they use to do this. Extensive experiments using real and synthetically generated wide-angle images are used to validate the two new keypoint detectors and the method of keypoint description. The better of the two new keypoint detectors is applied to vision-based localisation tasks including visual odometry and visual place recognition using outdoor wide-angle image sequences. As part of this work, the effect of keypoint coordinate selection on the accuracy of egomotion estimates using the Direct Linear Transform (DLT) is investigated, and a simple weighting scheme is proposed which attempts to account for the uncertainty of keypoint positions during detection. A word reliability metric is also developed for use within a visual ‘bag of words’ approach to place recognition.
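Mapping image points to the sphere is the step that makes the detectors invariant to radial distortion. As an illustration, the equidistant fisheye model (r = f·θ) is one common central-projection model; under it, a pixel maps to a unit viewing ray as sketched below. The specific camera model and parameter names are assumptions of this sketch, since the thesis covers central wide-angle cameras generally.

```python
import math

def fisheye_pixel_to_sphere(u, v, cx, cy, f):
    """Map a pixel of an equidistant fisheye camera (r = f * theta) to a
    unit vector on the viewing sphere. (cx, cy) is the distortion
    centre and f the focal length in pixels."""
    du, dv = u - cx, v - cy
    r = math.hypot(du, dv)
    theta = r / f              # angle of the ray from the optical axis
    phi = math.atan2(dv, du)   # azimuth of the pixel about the centre
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))
```

Once every pixel has a ray on the sphere, scale-space operations can be defined on the sphere itself, so two views of the same point compare like-for-like regardless of where in the distorted image the point fell.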
Abstract:
Aim: There are issues surrounding the apparent decline and devaluing of cooking skills in the population, potential health impacts and the role of dietitians. The present paper aims to outline several arguments and raise questions on the relationship between cooking and dietetics.---------- Methods: Evidence from dietetics and nutrition journals and other sources is used to develop positions for dietetics and its relationship to cooking and cooking skills. ---------- Results: The historical relationship between dietetics and home economics has seen dietetics professionally distance itself through its scientific education on food and nutrition, rather than actual involvement with cooking. In pursuing this rational scientific approach, there are concerns that dietitians have inadvertently supported the growth of the functional and convenience food market, particularly given the demise of home economics as a skill-based curriculum in schools in several states. There is a need to consider what role cooking skills could have in dietetics training as a professional competency for practice, particularly for public health interventions. This is in the light of Commonwealth government funding that is legitimising cooking skill interventions as a policy response to obesity. There may be a role for dietitians to develop partnerships and train a new professional category of paraprofessionals and/or peer educators to deliver cooking skill interventions. ---------- Conclusion: There is a need for research on dietitians' views and use of cooking skill interventions. This would help answer whether we should consider cooking and cooking skills as part of our professional practice and whether cooking should be a dietetic competency.
Abstract:
Nonlinear filter generators are common components used in the keystream generators for stream ciphers and, more recently, for authentication mechanisms. They consist of a Linear Feedback Shift Register (LFSR) and a nonlinear Boolean function to mask the linearity of the LFSR output. Properties of the output of a nonlinear filter are not well studied. Anderson noted that the m-tuple output of a nonlinear filter with consecutive taps to the filter function is unevenly distributed. Current designs use taps which are not consecutive. We examine m-tuple outputs from nonlinear filter generators constructed using various LFSRs and Boolean functions for both consecutive and uneven (full positive difference sets where possible) tap positions. The investigation reveals that in both cases, the m-tuple output is not uniform. However, consecutive tap positions result in a more biased distribution than uneven tap positions, with some m-tuples not occurring at all. These biased distributions indicate a potential flaw that could be exploited for cryptanalysis.
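The m-tuple analysis can be reproduced in miniature: run an LFSR, apply a nonlinear Boolean filter at chosen tap positions, and tally overlapping m-tuples of the keystream. The toy register length, feedback taps and filter function below are illustrative choices, not the particular LFSRs and functions examined in the paper.

```python
from collections import Counter

def lfsr_filter_stream(feedback_taps, filter_taps, boolf, state, nbits):
    """Keystream from a Fibonacci-style LFSR filtered by Boolean
    function boolf, which is applied to the stages in filter_taps.
    state is a list of 0/1 bits; the new feedback bit enters at index 0."""
    out = []
    for _ in range(nbits):
        out.append(boolf([state[i] for i in filter_taps]))
        fb = 0
        for t in feedback_taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

def m_tuple_counts(bits, m):
    """Distribution of overlapping m-tuples in the keystream; a uniform
    generator would spread counts evenly across all 2^m tuples."""
    return Counter(tuple(bits[i:i + m]) for i in range(len(bits) - m + 1))
```

Comparing the count tables for consecutive filter taps (e.g. stages 0, 1, 2) against spread taps from a full positive difference set makes the bias visible directly: with consecutive taps, some m-tuples can fail to occur at all.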
Abstract:
What is a record producer? There is a degree of mystery and uncertainty about just what goes on behind the studio door. Some producers are seen as Svengali-like figures manipulating artists into mass consumer product. Producers are sometimes seen as mere technicians whose job is simply to set up a few microphones and press the record button. Close examination of the recording process will show how far this is from a complete picture. Artists are special—they come with an inspiration, and a talent, but also with a variety of complications, and in many ways a recording studio can seem the least likely place for creative expression and for an affective performance to happen. The task of the record producer is to engage with these artists and their songs and turn these potentials into form through the technology of the recording studio. The purpose of the exercise is to disseminate this fixed form to an imagined audience—generally in the hope that this audience will prove to be real. Finding an audience is the role of the record company. A record producer must also engage with the commercial expectations of the interests that underwrite a recording. This dissertation considers three fields of interest in the recording process: the performer and the song; the technology of the recording context; and the commercial ambitions of the record company—and positions the record producer as a nexus at the interface of all three. The author reports his structured recollection of five recordings, with three different artists, that all achieved substantial commercial success. The processes are considered from the author’s perspective as the record producer, and from inception of the project to completion of the recorded work. What were the processes of engagement? Do the actions reported conform to the template of nexus? 
This dissertation proposes that in all recordings the function of producer/nexus is present and necessary—it exists in the interaction of the artistry and the technology. The art of record production is to engage with these artists and the songs they bring and turn these potentials into form.
Abstract:
AC motors are widely used in a broad range of modern systems, from household appliances to automated industry applications such as ventilation systems, fans, pumps, conveyors and machine tool drives. Inverters are widely used in industrial and commercial applications due to the growing need for speed control in ASD systems. Fast switching transients and the common mode voltage, in interaction with parasitic capacitive couplings, may cause many unwanted problems in ASD applications. These include shaft voltage and leakage currents. One of the inherent characteristics of Pulse Width Modulation (PWM) techniques is the generation of the common mode voltage, which is defined as the voltage between the electrical neutral of the inverter output and the ground. Shaft voltage can cause bearing currents when it exceeds the breakdown voltage level of the thin lubricant film between the inner and outer rings of the bearing. This phenomenon is the main reason for early bearing failures. Rapid development in power switch technology has led to a drastic decrease in switching rise and fall times. Because there is considerable capacitance between the stator windings and the frame, there can be a significant capacitive current (ground current escaping to earth through stray capacitors inside a motor) if the common mode voltage has high frequency components. This current leads to noise and electromagnetic interference (EMI) issues in motor drive systems. These problems have been dealt with using a variety of methods which have been reported in the literature. However, cost and maintenance issues have prevented these methods from being widely accepted. Extra cost or rating of the inverter switches is usually the price to pay for such approaches. Thus, the determination of cost-effective techniques for shaft and common mode voltage reduction in ASD systems, with the focus on the first step of the design process, is the targeted scope of this thesis.
An introduction to this research – including a description of the research problem, the literature review and an account of the research progress linking the research papers – is presented in Chapter 1. Electrical power generation from renewable energy sources, such as wind energy systems, has become a crucial issue because of environmental problems and a predicted future shortage of traditional energy sources. Thus, Chapter 2 focuses on the shaft voltage analysis of stator-fed induction generators (IGs) and Doubly Fed Induction Generators (DFIGs) in wind turbine applications. This shaft voltage analysis includes: topologies, high frequency modelling, calculation and mitigation techniques. A back-to-back AC-DC-AC converter is investigated in terms of shaft voltage generation in a DFIG. Different topologies of LC filter placement are analysed in an effort to eliminate the shaft voltage. Different capacitive couplings exist in the motor/generator structure and any change in design parameters affects the capacitive couplings. Thus, an appropriate design for AC motors should lead to the smallest possible shaft voltage. Calculation of the shaft voltage based on different capacitive couplings, and an investigation of the effects of different design parameters, are discussed in Chapter 3. This is achieved through 2-D and 3-D finite element simulation and experimental analysis. End-winding parameters of the motor are also effective factors in the calculation of the shaft voltage and have not been taken into account in previously reported studies. Calculation of the end-winding capacitances is rather complex because of the diversity of end-winding shapes and the complexity of their geometry. A comprehensive analysis of these capacitances has been carried out with 3-D finite element simulations and experimental studies to determine their effective design parameters. These are documented in Chapter 4.
Results of this analysis show that, by choosing appropriate design parameters, it is possible to decrease the shaft voltage and resultant bearing current in the primary stage of generator/motor design without using any additional active or passive filter-based techniques. The common mode voltage is defined by the switching pattern and, by using an appropriate pattern, the common mode voltage level can be controlled. Therefore, any PWM pattern which eliminates or minimizes the common mode voltage will be an effective shaft voltage reduction technique. Thus, common mode voltage reduction of a three-phase AC motor supplied with a single-phase diode rectifier is the focus of Chapter 5. The proposed strategy is mainly based on proper utilization of the zero vectors. Multilevel inverters are also used in ASD systems; they have more voltage levels and switching states, and can provide more possibilities to reduce common mode voltage. The common mode voltage of multilevel inverters is investigated in Chapter 6. Chapter 7 investigates techniques for eliminating the shaft voltage in a DFIG based on the methods presented in the literature, by the use of simulation results. However, it could be shown that every solution to reduce the shaft voltage in DFIG systems has its own characteristics, and these have to be taken into account in determining the most effective strategy. Calculation of the capacitive coupling and electric fields between the outer and inner races and the balls at different motor speeds in symmetrical and asymmetrical shaft and ball positions is discussed in Chapter 8. The analysis is carried out using finite element simulations to determine the conditions which will increase the probability of high rates of bearing failure due to current discharges through the balls and races.
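The role of the zero vectors can be seen from the standard two-level inverter model, in which the common mode voltage is the mean of the three pole voltages. Referencing the pole voltages to the DC-link midpoint is an assumption of this sketch; the thesis defines the common mode voltage between the inverter output neutral and ground.

```python
def common_mode_voltage(sa, sb, sc, vdc=1.0):
    """Common mode voltage of a two-level inverter for one switching
    state. sa, sb, sc are the per-phase switch states (0 or 1); each
    pole voltage is +Vdc/2 or -Vdc/2 relative to the DC-link midpoint."""
    pole = lambda s: (s - 0.5) * vdc
    return (pole(sa) + pole(sb) + pole(sc)) / 3.0

# The two zero vectors (0,0,0) and (1,1,1) produce the extreme common
# mode levels (-Vdc/2 and +Vdc/2), which is why controlling how and
# when zero vectors are applied is central to CM voltage reduction.
```

Enumerating all eight switching states shows that the active vectors give ±Vdc/6, while the zero vectors give ±Vdc/2, so a PWM pattern that avoids or balances the zero vectors directly reduces the common mode excursion.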
Abstract:
The network has emerged as a contemporary worldwide phenomenon, culturally manifested as a consequence of globalization and the knowledge economy. It is in this context that the internet revolution has prompted a radical re-ordering of social and institutional relations and of the associated structures, processes and places which support them. Within the duality of virtual space and the augmentation of traditional notions of physical place, new organizational structures pose new challenges for the design professions. Technological developments increasingly permit communication anytime and anywhere, and provide the opportunity for both synchronous and asynchronous collaboration. The resultant ecology formed through the network enterprise has produced an often convoluted and complex world in which designers are forced to consider the relevance and meaning of this new context. The roles of technology and of space are thus intertwined in the relation between the network and the individual workplace. This paper explores a way to inform the interior design process for contemporary workplace environments. It reports on both theoretical and practical outcomes through an Australia-wide case study of three collaborating, yet independent, business entities. It further suggests a link between workplace design and successful business innovation realized between partnering organizations in Great Britain. The evidence presented indicates that, for architects and interior designers, the scope of the problem has widened, the depth of knowledge required to provide solutions has increased, and the rules of engagement must change. The ontological and epistemological positions adopted in the study enabled the spatial dimensions to be examined from both within and beyond the confines of a traditional design-only viewpoint.
Importantly, it highlights the significance of trans-disciplinary collaboration in dealing with the multiple layers and complexity of the contemporary social and business world, from both a research and a practice perspective.
Abstract:
It is widely contended that we live in a "world risk society", where risk plays a central and ubiquitous role in contemporary social life. A seminal contributor to this view is Ulrich Beck, who claims that our world is governed by dangers that cannot be calculated or insured against. For Beck, risk is an inherently unrestrained phenomenon, emerging from a core and pouring out over and under national borders, unaffected by state power. Beck's focus on risk's ubiquity and uncontrollability at an infra-global level means that there is a necessary evenness to the expanse of risk: a "universalization of hazards", which possess an inbuilt tendency towards globalisation. While sociological scholarship has examined the reach and impact of globalisation processes on the role and power of states, Beck's argument that economic risk is without territory and resistant to domestic policy has come under less appraisal. This is contestable: what are often described as global economic processes, on closer inspection, reveal degrees of territorial embeddedness. This not only suggests that "global" flows could sometimes be more appropriately explained as international, regional or even local processes, formed from and responsive to state strategies – but also demonstrates what can be missed if we overinflate the global. This paper briefly introduces two key principles of Beck's theory of risk society and positions them within a review of literature debating the novelty and degree of global economic integration and its impact on states pursuing domestic economic policies. In doing so, this paper highlights the value for future research of engaging with questions such as "is economic risk really without territory?" and "does risk produce convergence?", not so much as a means of reducing Beck's thesis to a purely empirical analysis, but rather to avoid limiting our scope in understanding the complex relationship between risk and state.
Abstract:
The chapter investigates Shock Control Bumps (SCBs) on a Natural Laminar Flow (NLF) aerofoil, the RAE 5243, for Active Flow Control (AFC). An SCB is used to decelerate supersonic flow on the suction/pressure side of a transonic aerofoil, delaying shock occurrence or weakening shock strength. Such an AFC technique significantly reduces total drag at transonic speeds. This chapter considers SCB shape design optimisation at two boundary layer transition positions (0% and 45% of chord) using Euler software coupled with viscous boundary layer effects and robust Evolutionary Algorithms (EAs). The optimisation method is based on a canonical Evolution Strategy (ES) algorithm and incorporates the concepts of hierarchical topology and parallel asynchronous evaluation of candidate solutions. Two test cases are considered in the numerical experiments: in the first, the transition point occurs at the leading edge; in the second, it is fixed at 45% of the wing chord. Numerical results are presented, demonstrating that an optimal SCB design can be found which significantly reduces transonic wave drag and improves the lift-to-drag (L/D) ratio compared to the baseline aerofoil design.
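The canonical Evolution Strategy the abstract refers to can be sketched compactly. In the sketch below, a toy sphere function stands in for the expensive aerodynamic drag evaluation, and all parameter values (population sizes, step size, decay rate) are illustrative assumptions, not the settings used in the chapter; the hierarchical-topology and asynchronous-evaluation refinements are omitted.

```python
import random

# Minimal (mu, lambda) Evolution Strategy sketch. The objective is a toy
# sphere function standing in for the aerodynamic evaluation; every
# parameter value here is an illustrative assumption.

def sphere(x):
    # Simple quadratic bowl; minimum 0 at the origin.
    return sum(xi * xi for xi in x)

def evolution_strategy(objective, dim=4, mu=5, lam=20, sigma=0.5,
                       generations=200, seed=0):
    rng = random.Random(seed)
    # Initial parent population drawn uniformly from the search box.
    parents = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(mu)]
    for _ in range(generations):
        # Each offspring mutates a randomly chosen parent with Gaussian noise.
        offspring = []
        for _ in range(lam):
            p = rng.choice(parents)
            offspring.append([xi + rng.gauss(0, sigma) for xi in p])
        # (mu, lambda) selection: the next parents come from offspring only.
        offspring.sort(key=objective)
        parents = offspring[:mu]
        sigma *= 0.99  # simple deterministic step-size decay
    return min(parents, key=objective)

best = evolution_strategy(sphere)
```

In the chapter's setting, each `objective` call would be a flow solve, which is why parallel asynchronous evaluation of the `lam` offspring matters: the population members can be scored on separate workers without waiting for the slowest solve.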