966 results for "Cynicism reason"
Abstract:
Cryopreservation plays a significant role in tissue banking and will assume even greater importance as more and more tissue-engineered products routinely enter the clinical arena. The most common concept underlying tissue engineering is to combine a scaffold (cellular solids) or matrix (hydrogels) with living cells to form a tissue-engineered construct (TEC) that promotes the repair and regeneration of tissues. The scaffold and matrix are expected to support cell colonization, migration, growth and differentiation, and to guide the development of the required tissue. The promise of tissue engineering, however, depends on the ability to physically distribute the products to patients in need. For this reason, the ability to cryogenically preserve not only cells but also TECs, and one day even whole laboratory-produced organs, may be indispensable. Cryopreservation can be achieved by conventional freezing or by vitrification (ice-free cryopreservation). In this publication we try to define the needs versus the desires of vitrifying TECs, with particular emphasis on cryoprotectant properties, suitable materials and morphology. It is concluded that the formation of ice, through both direct and indirect effects, is probably fundamental to the difficulties of conventional cryopreservation, which is why vitrification appears to be the most promising modality of cryopreservation.
Abstract:
Many studies focused on the development of crash prediction models have resulted in aggregate crash prediction models to quantify the safety effects of geometric, traffic, and environmental factors on the expected number of total, fatal, injury, and/or property damage crashes at specific locations. Crash prediction models focused on predicting different crash types, however, have rarely been developed. Crash type models are useful for at least three reasons. The first is motivated by the need to identify sites that are high risk with respect to specific crash types but that may not be revealed through crash totals. Second, countermeasures are likely to affect only a subset of all crashes—usually called target crashes—and so examination of crash types will lead to improved ability to identify effective countermeasures. Finally, there is a priori reason to believe that different crash types (e.g., rear-end, angle, etc.) are associated with road geometry, the environment, and traffic variables in different ways and as a result justify the estimation of individual predictive models. The objectives of this paper are to (1) demonstrate that different crash types are associated with predictor variables in different ways (as theorized) and (2) show that estimation of crash type models may lead to greater insights regarding crash occurrence and countermeasure effectiveness. This paper first describes the estimation results of crash prediction models for angle, head-on, rear-end, sideswipe (same direction and opposite direction), and pedestrian-involved crash types. Serving as a basis for comparison, a crash prediction model is estimated for total crashes. Based on 837 motor vehicle crashes collected at two-lane rural intersections in the state of Georgia, six prediction models are estimated, resulting in two Poisson (P) and four negative binomial (NB) models.
The analysis reveals that factors such as the annual average daily traffic, the presence of turning lanes, and the number of driveways have a positive association with each type of crash, whereas median width and the presence of lighting are negatively associated. For the best-fitting models, covariates are related to crash types in different ways, suggesting that crash types are associated with different precrash conditions and that modeling total crash frequency may not be helpful for identifying specific countermeasures.
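The functional form described above can be sketched in code. The coefficients below are illustrative assumptions only, not the paper's estimates; the sketch shows the usual log-linear structure of such models (exposure enters as the log of AADT) and the variance relation that distinguishes a negative binomial model from a Poisson one.

```python
import math

# Hypothetical coefficients for illustration only -- not the estimates
# reported for the Georgia intersection data.
def expected_crashes(aadt, n_driveways, has_turn_lane, median_width_m,
                     b0=-8.0, b_lnaadt=0.9, b_drive=0.05,
                     b_turn=0.3, b_median=-0.02):
    """Expected annual crash frequency, mu = exp(b0 + b1*ln(AADT) + ...)."""
    eta = (b0
           + b_lnaadt * math.log(aadt)             # traffic exposure
           + b_drive * n_driveways                 # driveways (positive)
           + b_turn * (1 if has_turn_lane else 0)  # turn lane present
           + b_median * median_width_m)            # median width (negative)
    return math.exp(eta)

# Overdispersion: under a negative binomial model with dispersion alpha,
# Var(Y) = mu + alpha * mu^2; the Poisson model is the alpha -> 0 limit.
def nb_variance(mu, alpha):
    return mu + alpha * mu * mu
```

The choice between P and NB models in practice rests on whether the estimated dispersion parameter is significantly greater than zero.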
Abstract:
The concept of asset management is not a new idea but an evolving one that has been attracting the attention of many organisations operating and/or owning some kind of infrastructure assets. The term asset management has been used widely, with fundamental differences in interpretation and usage. Regardless of the context of its usage, asset management implies the process of optimising return by scrutinising performance and making key strategic decisions throughout all phases of an asset's lifecycle (Sarfi and Tao, 2004). Hence, asset management is a philosophy and discipline through which organisations are enabled to deploy their resources more effectively to provide higher levels of customer service and reliability while balancing financial objectives. In Australia, asset management made its way into public works in 1993, when the Australian Accounting Standards Board issued Australian Accounting Standard 27 (AAS27). AAS27 required government agencies to capitalise and depreciate assets rather than expense them against earnings. This development indirectly forced organisations managing infrastructure assets to consider the useful life and cost effectiveness of asset investments. The Australian State Treasuries and the Australian National Audit Office were the first organisations to formalise the concepts and principles of asset management in Australia, defining asset management as “a systematic, structured process covering the whole life of an asset” (Australian National Audit Office, 1996). This initiative led other government bodies and industry sectors to develop, refine and apply the concept of asset management in the management of their respective infrastructure assets. Hence, it can be argued that asset management emerged as a separate and recognised field of management during the late 1990s.
In comparison to other disciplines such as construction, facilities, maintenance, project management, economics and finance, to name a few, asset management is a relatively new discipline and clearly a contemporary topic. The primary contributors to the literature on asset management are largely government organisations and industry practitioners, whose contributions take the form of guidelines and reports on best practice in asset management. More recently, some of these best practices have been formalised into standards, such as PAS 55 in the UK (IAM, 2004; IAM, 2008b). As such, the current literature in this field tends to lack well-grounded theories. To date, while the field has received relatively more interest and attention from empirical researchers, its advancement, particularly in terms of the volume of academic and theoretical development, is at best moderate. A plausible reason for this lack of advancement is that many researchers and practitioners are still unaware of, or unimpressed by, the contribution that asset management can make to the performance of infrastructure assets. This paper seeks to explore the practices of organisations that manage infrastructure assets in order to develop a framework of strategic infrastructure asset management processes. It begins by examining the development of asset management. This is followed by a discussion of the method adopted for this paper. Next is a discussion of the results from the case studies. It first describes the goals of infrastructure asset management and how they can support the broader business goals. Following this, a set of core processes that can support the achievement of business goals is provided. These core processes are synthesised from the practices of asset managers in the case study organisations.
Abstract:
The increasing scarcity of water in the world, along with rapid population increase in urban areas, gives reason for concern and highlights the need for integrating water and wastewater management practices. The uncontrolled growth in urban areas has made planning, management and expansion of water and wastewater infrastructure systems very difficult and expensive. In order to achieve sustainable wastewater treatment and promote the conservation of water and nutrient resources, this chapter advocates the need for a closed-loop treatment system approach, and the transformation of the traditional linear treatment systems into integrated cyclical treatment systems. The recent increased understanding of integrated resource management and a shift towards sustainable management and planning of water and wastewater infrastructure are also discussed.
Abstract:
The field of collaborative health planning faces significant challenges posed by the lack of effective information, systems and a framework to organise that information. Such a framework is critical in order to make accessible and informed decisions for planning healthy cities. The challenges have been exaggerated by the rise of the healthy cities movement, as a result of which there have been more frequent calls for localised, collaborative and evidence-based decision-making. Some studies suggest that the use of ICT-based tools in health planning may lead to increased collaboration between stakeholders and the community, improved accuracy and quality of the decision-making process, and improved availability of data and information for health decision-makers as well as health service planners. Research has justified the use of decision support systems (DSS) in planning for healthy cities, as these systems have been found to improve the planning process. DSS are information and communication technology (ICT) tools, including geographic information systems (GIS), that provide the mechanisms to help decision-makers and related stakeholders assess complex problems and solve them in a meaningful way. Consequently, it is now more possible than ever before to make use of ICT-based tools in health planning. However, knowledge about the nature and use of DSS within collaborative health planning is relatively limited. In particular, little research has been conducted on evaluating the impact of adopting these tools upon stakeholders, policy-makers and decision-makers within the health planning field. This paper presents an integrated method that has been developed to facilitate an informed decision-making process to assist in health planning. Specifically, the paper describes the participatory process that has been adopted to develop an online GIS-based DSS for health planners.
The literature states that the overall aim of DSS is to improve the efficiency of the decisions made by stakeholders, optimising their overall performance and minimising judgemental biases. For this reason, the paper examines the effectiveness and impact of an innovative online GIS-based DSS on health planners. The case study of the online DSS is set within a unique settings-based initiative designed to plan for and improve the health capacity of the Logan-Beaudesert area, Australia. This settings-based initiative is named the Logan-Beaudesert Health Coalition (LBHC). The paper outlines the impact of implementing the ICT-based DSS. In conclusion, the paper emphasises the need for the proposed tool to enhance health planning.
Abstract:
The urban waterfront may be regarded as the littoral frontier of human settlement. Typically, over the years, it advances, and sometimes retreats, where terrestrial and aquatic processes interact and frequently contest this margin of occupation. Because most towns and cities are sited beside water bodies, many of them on or close to the sea, their physical expansion is constrained by the existence of aquatic areas in one or more directions from the core. It is usually much easier for new urban development to occur along or inland from the waterfront. Where other physical constraints, such as rugged hills or mountains, make expansion difficult or expensive, building at greater densities or construction on steep slopes is a common response. This kind of development, though technically feasible, is usually more expensive than construction on level or gently sloping land. Moreover, there are many reasons for developing along the shore or riverfront in preference to using sites further inland. The high cost of developing existing dry land that presents serious construction difficulties is one reason for creating new land from adjacent areas that are permanently or periodically under water. Another reason is the relatively high value of artificially created land close to the urban centre when compared with the value of existing developable space at a greater distance inland. The creation of space for development is not the only motivation for urban expansion into aquatic areas. Commonly, urban places on the margins of the sea, estuaries, rivers or great lakes are, or were once, ports where shipping played an important role in the economy. The demand for deep waterfronts to allow ships to berth, and for adjacent space to accommodate various port facilities, has encouraged the advance of the urban land area across marginal shallows in ports around the world.
The space and locational demands of port related industry and commerce, too, have contributed to this process. Often closely related to these developments is the generation of waste, including domestic refuse, unwanted industrial by-products, site formation and demolition debris and harbor dredgings. From ancient times, the foreshore has been used as a disposal area for waste from nearby settlements, a practice that continues on a huge scale today. Land formed in this way has long been used for urban development, despite problems that can arise from the nature of the dumped material and the way in which it is deposited. Disposal of waste material is a major factor in the creation of new urban land. Pollution of the foreshore and other water margin wetlands in this way encouraged the idea that the reclamation of these areas may be desirable on public health grounds. With reference to examples from various parts of the world, the historical development of the urban littoral frontier and its effects on the morphology and character of towns and cities are illustrated and discussed. The threat of rising sea levels and the heritage value of many waterfront areas are other considerations that are addressed.
Abstract:
The motivation for secondary school principals in Queensland, Australia, to investigate curriculum change coincided with the commencement in 2005 of the state government's publication of school exit test results as a measure of accountability. Aligning a school's curriculum with the requirements of high-stakes testing is considered by many academics and teachers to be a negative outcome of accountability, for reasons such as 'teaching to the test' and narrowing of the curriculum. However, this article outlines empirical evidence that principals are instigating curriculum change to improve published high-stakes test results. The three principals in this study offered several reasons as to why they wished to implement changes to school curricula. One reason articulated by all three was the pressure of accountability, particularly through the publication of high-stakes test data, which has now become commonplace in the education systems of many Western nations.
Abstract:
Stereotypes of salespeople are common currency in US media outlets and research suggests that these stereotypes are uniformly negative. However, there is no reason to expect that stereotypes will be consistent across cultures. The present paper provides the first empirical examination of salesperson stereotypes in an Asian country, specifically Taiwan. Using accepted psychological methods, Taiwanese salesperson stereotypes are found to be twofold, with a negative stereotype being quite congruent with existing US stereotypes, but also a positive stereotype, which may be related to the specific culture of Taiwan.
Abstract:
Bronfenbrenner's Bioecological Model, expressed as the developmental equation D = f(PPCT), is the theoretical framework for two studies that bring together diverse strands of psychology to study the work-life interface of working adults. Occupational and organizational psychology focuses on the demands and resources of work and family, without examining the individual in detail. Health and personality psychology examine the individual, but without emphasis on the individual's work and family roles. The current research used Bronfenbrenner's theoretical framework to combine individual differences, work and family to understand how these factors influence the working adult's psychological functioning. Competent development has been defined as high well-being (measured as life satisfaction and psychological well-being) and high work engagement (as work vigour, work dedication and absorption in work), together with the absence of mental illness (as depression, anxiety and stress) and the absence of burnout (as emotional exhaustion, cynicism and professional efficacy). Studies 1 and 2 were linked, with Study 1 a cross-sectional survey and Study 2 a prospective panel study that followed on from the data used in Study 1. Participants were recruited from a university and from a large public hospital to take part in a 3-wave, online study where they completed identical surveys at 3-4 month intervals (N = 470 at Time 1 and N = 198 at Time 3). In Study 1, hierarchical multiple regressions were used to assess the effects of individual differences (Block 1, e.g. dispositional optimism, coping self-efficacy, perceived control of time, humour), work and family variables (Block 2, e.g. affective commitment, skill discretion, work hours, children, marital status, family demands) and the work-life interface (Block 3, e.g. direction and quality of spillover between roles, work-life balance) on the outcomes.
There was a mosaic of predictors of the outcomes, with a group of seven that were the most frequent significant predictors and which represented the individual (dispositional optimism and coping self-efficacy), the workplace (skill discretion, affective commitment and job autonomy) and the work-life interface (negative work-to-family spillover and negative family-to-work spillover). Interestingly, gender and working hours were not important predictors. The effects of job social support (generally and for work-life issues), perceived control of time and egalitarian gender roles on the outcomes were mediated by negative work-to-family spillover, particularly for emotional exhaustion. Further, the effect of negative spillover on depression, anxiety and work engagement was moderated by the individual's personal and workplace resources. Study 2 modelled the longitudinal relationships between the group of the seven most frequent predictors and the outcomes. Using a set of non-nested models, the relative influences of concurrent functioning, stability and change over time were assessed. The modelling began with models at Time 1, which formed the basis for confirmatory factor analysis (CFA) to establish the underlying relationships between the variables and calculate the composite variables for the longitudinal models. The CFAs were well fitting, with few modifications needed to ensure good fit. However, using burnout and work engagement together required additional analyses to resolve poor fit, with one factor (representing a continuum from burnout to work engagement) being the only acceptable solution. Five different longitudinal models (the Well-Being, Mental Distress, Well-Being-Mental Health, Work Engagement and Integrated models) were investigated, using differing combinations of the outcomes. The best-fitting model for each was a reciprocal model that was trimmed of trivial paths. The strongest paths were the synchronous correlations and the paths within variables over time.
The reciprocal paths were more variable, with weak to mild effects. There was evidence of gain and loss spirals between the variables over time, with a slight net gain in resources that may provide the mechanism for the accumulation of psychological advantage over a lifetime. The longitudinal models also showed that there are leverage points at which personal, psychological and managerial interventions can be targeted to bolster the individual and provide supportive workplace conditions that also minimise negative spillover. Bronfenbrenner's developmental equation has been a useful framework for the current research, showing the importance of the person as central to the individual's experience of the work-life interface. By taking control of their own life, the individual can craft a life path that is most suited to their own needs. Competent developmental outcomes were most likely where the person was optimistic and had high self-efficacy, worked in a job that they were attached to and which allowed them to use their talents, and experienced little negative spillover between their work and family domains. In this way, individuals had greater well-being, better mental health and greater work engagement at any one time and across time.
Abstract:
The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices store data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that progress in this form of data storage is approaching fundamental limits. The main limitation relates to the smallest size that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify them slightly for higher density storage. Alternatively, three-dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three-dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three-dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but few commercial products are presently available.
Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two-dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium has its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing two-dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask containing a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low-intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing. Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing, in which the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium.
This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to situations where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three-dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered.
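The oscillation-counting principle above can be sketched numerically. For a beam whose phase retardation is 2πLΔn/λ, one full intensity oscillation corresponds to 2π of extra retardation, i.e. one wavelength of optical path change. All parameter values below (wavelength, crystal length, thermo-optic birefringence coefficient) are illustrative assumptions, not measurements from the thesis.

```python
# Assumed illustrative parameters, not measured values.
WAVELENGTH = 633e-9       # probe wavelength (m), assuming a He-Ne laser
CRYSTAL_LENGTH = 10e-3    # optical path length in the crystal (m)
DBIREFRINGENCE_DT = 4e-5  # |d(delta n)/dT| per kelvin

def delta_t_per_oscillation(wavelength, length, dn_dt):
    """Temperature change per intensity oscillation: one oscillation
    occurs when the birefringence change satisfies delta(n)*L = lambda."""
    return wavelength / (length * dn_dt)

def temperature_change(n_oscillations, wavelength=WAVELENGTH,
                       length=CRYSTAL_LENGTH, dn_dt=DBIREFRINGENCE_DT):
    """Total temperature change deduced from the counted oscillations."""
    return n_oscillations * delta_t_per_oscillation(wavelength, length, dn_dt)
```

With these assumed values, each oscillation corresponds to roughly 1.6 K, so counting a handful of oscillations resolves temperature changes of tens of kelvin, which is the regime relevant to thermal fixing.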
The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the times at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any size smaller than this results in incomplete recovery. The degradation and recovery process could be applied to image scrambling or cryptography for optical information storage. A two-dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process. To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three-dimensional photorefractive model developed by Devaux is presented. This model gives significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that the recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths.
As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
Abstract:
Increases in atmospheric concentrations of the greenhouse gases (GHGs) carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O) due to human activities have been linked to climate change. GHG emissions from land use change and agriculture have been identified as significant contributors to both Australia's and the global GHG budget. This contribution is expected to increase over the coming decades as rates of agricultural intensification and land use change accelerate to support population growth and food production. Limited data exist on CO2, CH4 and N2O trace gas fluxes from subtropical or tropical soils and land uses. To develop effective mitigation strategies, a full global warming potential (GWP) accounting methodology is required that includes emissions of all three primary greenhouse gases; mitigation strategies that focus on one gas only can inadvertently increase emissions of another. For this reason, detailed inventories of GHGs from soils and vegetation under individual land uses are urgently required for subtropical Australia. This study aimed to quantify GHG emissions over two consecutive years from three major land uses: a well-established, unfertilized subtropical grass-legume pasture, a 30-year-old lychee orchard and a remnant subtropical gallery rainforest, all located near Mooloolah, Queensland. GHG fluxes were measured using a combination of high-resolution automated sampling, coarser spatial manual sampling and laboratory incubations. Comparison between the land uses revealed that land use change can have a substantial impact on the GWP of a landscape long after the deforestation event. The conversion of rainforest to agricultural land resulted in as much as a 17-fold increase in GWP, from 251 kg CO2 eq. ha-1 yr-1 in the rainforest to 889 kg CO2 eq. ha-1 yr-1 in the pasture and 2538 kg CO2 eq. ha-1 yr-1 in the lychee plantation.
This increase resulted from altered N cycling and a reduction in the aerobic capacity of the soil in the pasture and lychee systems, enhancing denitrification and nitrification events and reducing atmospheric CH4 uptake by the soil. High infiltration, drainage and subsequent soil aeration under the rainforest limited N2O loss, as well as promoting CH4 uptake of 11.2 g CH4-C ha-1 day-1. This was among the highest reported for rainforest systems, indicating that aerated subtropical rainforests can act as a substantial sink of CH4. Interannual climatic variation resulted in significantly higher N2O emissions from the pasture during 2008 (5.7 g N2O-N ha-1 day-1) compared to 2007 (3.9 g N2O-N ha-1 day-1), despite the pasture receiving nearly 500 mm less rainfall. Nitrous oxide emissions from the pasture were highest during the summer months and were highly episodic, related more to the magnitude and distribution of rain events than to soil moisture alone. Mean N2O emissions from the lychee plantation increased from an average of 4.0 g N2O-N ha-1 day-1 to 19.8 g N2O-N ha-1 day-1 following a split application of N fertilizer (560 kg N ha-1, equivalent to 1 kg N tree-1). The timing of the split application was found to be critical to N2O emissions, with over twice as much lost following an application in spring (emission factor (EF): 1.79%) compared to autumn (EF: 0.91%). This was attributed to the hot and moist climatic conditions and a reduction in plant N uptake during the spring, creating conditions conducive to N2O loss. These findings demonstrate that land use change in subtropical Australia can be a significant source of GHGs. Moreover, the study shows that modifying the timing of fertilizer application can be an efficient way of reducing GHG emissions from subtropical horticulture.
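The GWP accounting described above can be sketched as follows. The GWP factors are the IPCC AR4 100-year values commonly used in such budgets (the study does not state which values it used, so this is an assumption); the flux numbers in the usage comments are illustrative, not the study's measurements.

```python
# 100-year GWP factors (IPCC AR4 values; an assumption, since the
# abstract does not specify which set the study used).
GWP_CH4 = 25    # kg CO2-eq per kg CH4
GWP_N2O = 298   # kg CO2-eq per kg N2O

def co2_equivalent(co2_kg, ch4_kg, n2o_kg):
    """Net GWP of a land use (kg CO2-eq), summing the three gases.
    A negative CH4 flux (soil uptake) reduces the total."""
    return co2_kg + GWP_CH4 * ch4_kg + GWP_N2O * n2o_kg

def n2o_emission_factor(n2o_n_emitted_kg, n_applied_kg):
    """Fertiliser-induced emission factor: the fraction of applied N
    lost as N2O-N, expressed as a percentage."""
    return 100.0 * n2o_n_emitted_kg / n_applied_kg

# Illustrative usage: 100 kg CO2, 1 kg CH4 soil uptake (negative flux)
# and 1 kg N2O emitted give 100 - 25 + 298 = 373 kg CO2-eq.
```

The emission-factor calculation makes the spring-versus-autumn comparison in the abstract concrete: for the same 560 kg N ha-1 application, a higher fraction of applied N escaping as N2O-N directly yields the higher spring EF.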
Resumo:
AC motors are widely used in a broad range of modern systems, from household appliances to automated industrial applications such as ventilation systems, fans, pumps, conveyors and machine tool drives. Inverters are widely used in industrial and commercial applications due to the growing need for speed control in adjustable speed drive (ASD) systems. Fast switching transients and the common mode voltage, in interaction with parasitic capacitive couplings, may cause many unwanted problems in ASD applications, including shaft voltage and leakage currents. One of the inherent characteristics of Pulse Width Modulation (PWM) techniques is the generation of the common mode voltage, which is defined as the voltage between the electrical neutral of the inverter output and the ground. Shaft voltage can cause bearing currents when it exceeds the breakdown voltage of the thin lubricant film between the inner and outer rings of the bearing. This phenomenon is the main reason for early bearing failures. Rapid developments in power switch technology have led to a drastic reduction in switching rise and fall times. Because there is considerable capacitance between the stator windings and the frame, there can be a significant capacitive current (ground current escaping to earth through stray capacitors inside a motor) if the common mode voltage has high-frequency components. This current leads to noise and electromagnetic interference (EMI) issues in motor drive systems. These problems have been addressed using a variety of methods reported in the literature. However, cost and maintenance issues have prevented these methods from being widely accepted, as extra cost or higher ratings of the inverter switches are usually the price to pay for such approaches. Thus, the determination of cost-effective techniques for shaft and common mode voltage reduction in ASD systems, with a focus on the first step of the design process, is the targeted scope of this thesis. 
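For an ideal two-level three-phase inverter, the common mode voltage defined above reduces to the average of the three pole voltages referenced to the DC-link midpoint, so each of the eight PWM switching states produces one of four discrete levels. A small sketch, assuming ideal switches and an illustrative 400 V DC link (not the thesis's model):

```python
from itertools import product

# Common mode voltage of an ideal two-level, three-phase inverter: the average
# of the three pole voltages referenced to the DC-link midpoint.
# Ideal-switch sketch only; VDC is an illustrative value.
VDC = 400.0  # DC-link voltage (V)

def common_mode_voltage(sa, sb, sc, vdc=VDC):
    """Switch states: 1 = upper switch on (pole at +Vdc/2), 0 = lower on (-Vdc/2)."""
    poles = [(s - 0.5) * vdc for s in (sa, sb, sc)]
    return sum(poles) / 3.0

# Enumerate all eight switching states and their common mode voltage levels.
for state in product((0, 1), repeat=3):
    print(state, common_mode_voltage(*state))
```

Enumerating the states shows that the two zero vectors (000 and 111) produce the largest magnitude, ±Vdc/2, while the six active vectors produce ±Vdc/6, which is why PWM strategies that manage zero-vector usage can reduce the common mode voltage.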
An introduction to this research – including a description of the research problem, the literature review and an account of the research progress linking the research papers – is presented in Chapter 1. Electrical power generation from renewable energy sources, such as wind energy systems, has become a crucial issue because of environmental problems and a predicted future shortage of traditional energy sources. Thus, Chapter 2 focuses on the shaft voltage analysis of stator-fed induction generators (IGs) and doubly fed induction generators (DFIGs) in wind turbine applications. This shaft voltage analysis includes topologies, high-frequency modelling, calculation and mitigation techniques. A back-to-back AC-DC-AC converter is investigated in terms of shaft voltage generation in a DFIG, and different topologies of LC filter placement are analysed in an effort to eliminate the shaft voltage. Different capacitive couplings exist in the motor/generator structure, and any change in design parameters affects them; an appropriate design for AC motors should therefore lead to the smallest possible shaft voltage. Calculation of the shaft voltage based on the different capacitive couplings, and an investigation of the effects of different design parameters, are discussed in Chapter 3. This is achieved through 2-D and 3-D finite element simulation and experimental analysis. End-winding parameters of the motor are also effective factors in the calculation of the shaft voltage and have not been taken into account in previously reported studies. Calculation of the end-winding capacitances is rather complex because of the diversity of end-winding shapes and the complexity of their geometry. A comprehensive analysis of these capacitances has been carried out with 3-D finite element simulations and experimental studies to determine their effective design parameters. These are documented in Chapter 4. 
Results of this analysis show that, by choosing appropriate design parameters, it is possible to decrease the shaft voltage and the resultant bearing current in the primary stage of generator/motor design without using any additional active or passive filter-based techniques. The common mode voltage is determined by the switching pattern, so by using an appropriate pattern the common mode voltage level can be controlled. Therefore, any PWM pattern that eliminates or minimizes the common mode voltage will be an effective shaft voltage reduction technique. Thus, common mode voltage reduction of a three-phase AC motor supplied by a single-phase diode rectifier is the focus of Chapter 5. The proposed strategy is mainly based on proper utilization of the zero vectors. Multilevel inverters, which have more voltage levels and switching states, are also used in ASD systems and can provide more possibilities to reduce common mode voltage; the common mode voltage of multilevel inverters is investigated in Chapter 6. Chapter 7 investigates techniques for eliminating the shaft voltage in a DFIG based on the methods presented in the literature, using simulation results. It is shown that every solution for reducing the shaft voltage in DFIG systems has its own characteristics, and these have to be taken into account in determining the most effective strategy. Calculation of the capacitive coupling and electric fields between the outer and inner races and the balls at different motor speeds, in symmetrical and asymmetrical shaft and ball positions, is discussed in Chapter 8. The analysis is carried out using finite element simulations to determine the conditions that increase the probability of high rates of bearing failure due to current discharges through the balls and races.
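The link between the capacitive couplings and the shaft voltage is commonly expressed through the standard lumped capacitive-divider model, in which the bearing voltage ratio (BVR) relates shaft voltage to common mode voltage. A sketch with hypothetical capacitance values (the thesis derives the actual couplings from finite element analysis, not from this simplified model):

```python
# Lumped capacitive-divider estimate of shaft voltage (the standard
# high-frequency model; capacitance values below are hypothetical).
def bearing_voltage_ratio(c_wr, c_rf, c_b):
    """Bearing voltage ratio: Vshaft / Vcm = Cwr / (Cwr + Crf + Cb), where
    c_wr is the stator-winding-to-rotor, c_rf the rotor-to-frame and c_b the
    combined bearing capacitance (all in farads)."""
    return c_wr / (c_wr + c_rf + c_b)

vcm = 200.0  # common mode voltage step (V), illustrative
bvr = bearing_voltage_ratio(c_wr=100e-12, c_rf=1000e-12, c_b=400e-12)
print(f"BVR = {bvr:.4f}, shaft voltage ~ {vcm * bvr:.1f} V")
```

The model makes the design lever explicit: reducing the winding-to-rotor coupling (the numerator) while the rotor-to-frame and bearing capacitances dominate the denominator lowers the shaft voltage without any filter hardware.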
Resumo:
The human knee acts as a sophisticated shock absorber during landing movements. The ability of the knee to perform this function in the real world is remarkable given that the context of the landing movement may vary widely between performances. For this reason, humans must be capable of rapidly adjusting the mechanical properties of the knee under impact load in order to satisfy many competing demands. However, the processes involved in regulating these properties in response to changing constraints remain poorly understood. In particular, the effects of muscle fatigue on knee function during step landing are yet to be fully explored. Fatigue of the knee muscles is significant for 2 reasons. First, it is thought to have detrimental effects on the ability of the knee to act as a shock absorber and is considered a risk factor for knee injury. Second, fatigue of the knee muscles provides a unique opportunity to examine the mechanisms by which healthy individuals alter knee function. A review of the literature revealed that the effect of fatigue on knee function during landing has been assessed by comparing pre- and post-fatigue measurements, with fatigue induced by a voluntary exercise protocol. This information is limited by inconsistent results: key measures such as knee stiffness have shown increased stiffness, decreased stiffness or no detectable change following fatigue, depending on the experiment. Further consideration of the literature questions the validity of the models used to induce and measure fatigue, as well as the pre-post study design, which may explain the lack of consensus in the results. These limitations cast doubt on the usefulness of the available information and identify a need to investigate alternative approaches. 
Based on the results of this review, the aims of this thesis were to:
• evaluate the methodological procedures used in validation of a fatigue model
• investigate the adaptation and regulation of post-impact knee mechanics during repeated step landings
• use this new information to test the effects of fatigue on knee function during a step-landing task.
To address these aims, 3 related experiments were conducted, collecting kinetic, kinematic and electromyographic data from 3 separate samples of healthy male participants. The methodologies involved optoelectronic motion capture (VICON), isokinetic dynamometry (System3 Pro, BIODEX) and wireless surface electromyography (Zerowire, Aurion, Italy). Fatigue indicators and knee function measures used in each experiment were derived from these data. Study 1 compared the validity and reliability of repetitive stepping and isokinetic contractions with respect to fatigue of the quadriceps and hamstrings. Fifteen participants performed 50 repetitions of each exercise twice, in randomised order, over 4 sessions. Sessions were separated by a minimum of 1 week’s rest to ensure full recovery. Validity and reliability depended on a complex interaction between the exercise protocol, the fatigue indicator, the individual and the muscle of interest. Nevertheless, differences between exercise protocols indicated that stepping was less effective than isokinetic exercise in eliciting valid and reliable changes in peak power and spectral compression. A key finding was that fatigue progressed in a biphasic pattern during both exercises. The point separating the 2 phases, known as the transition point, demonstrated superior between-test reliability during the isokinetic protocol compared with stepping. However, a correction factor should be used to accurately apply this technique to the study of fatigue during landing. 
Study 2 examined alterations in knee function during repeated landings, with a different sample (N = 12) performing 60 consecutive step landing trials, each separated by a 1-minute rest period. The results provided new information on the pre-post study design in the context of detecting adjustments in knee function during landing. First, participants significantly increased or decreased pre-impact muscle activity or post-impact mechanics despite environmental and task constraints remaining unchanged. This is the first study to demonstrate this effect in healthy individuals without external feedback on performance. Second, single-subject analysis was more effective than group-level analysis in detecting alterations in knee function. Finally, repeated landing trials did not reduce inter-trial variability of knee function in some participants, contrary to assumptions underpinning previous studies. The results of Studies 1 and 2 were used to modify the design of Study 3 relative to previous research. These alterations included a modified isokinetic fatigue protocol, multiple pre-fatigue measurements and single-subject analysis to detect fatigue-related changes in knee function. The study design incorporated new analytical approaches to investigate fatigue-related alterations in knee function during landing. Participants (N = 16) were measured during multiple pre-fatigue baseline trial blocks prior to the fatigue model. A final block of landing trials was recorded once the participant met the operational fatigue definition identified in Study 1. The analysis revealed that the effects of fatigue in this context are heavily dependent on the compensatory response of the individual. A continuum of responses was observed within the sample for each knee function measure. Overall, pre-impact preparation and post-impact mechanics of the knee were altered in highly individualised patterns. 
Moreover, participants used a range of active or passive pre-impact strategies to adapt post-impact mechanics in response to quadriceps fatigue. The unique patterns identified in the data represented an optimisation of knee function based on the priorities of the individual. The findings of these studies explain the lack of consensus within the literature regarding the effects of fatigue on knee function during landing. First, functional fatigue protocols lack validity in inducing fatigue-related changes in mechanical output and spectral compression of surface electromyography (sEMG) signals, compared with isokinetic exercise. Second, fatigue-related changes in knee function during landing are confounded by inter-individual variation, which limits the sensitivity of group-level analysis. By addressing these limitations, the 3rd study demonstrated the efficacy of new experimental and analytical approaches for observing fatigue-related alterations in knee function during landing. Consequently, this thesis provides new perspectives on the effects of fatigue on knee function during landing. In conclusion:
• The effects of fatigue on knee function during landing depend on the response of the individual, with considerable variation present between study participants despite similar physical characteristics.
• In healthy males, adaptation of pre-impact muscle activity and post-impact knee mechanics is unique to the individual and reflects their own optimisation of demands such as energy expenditure, joint stability, sensory information and loading of knee structures.
• The results of these studies should guide future exploration of adaptations in knee function to fatigue. However, research in this area should continue with reduced emphasis on the directional response of the population and a greater focus on individual adaptations of knee function.
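Spectral compression of the sEMG signal, one of the fatigue indicators referred to above, is typically quantified by the downward shift of the median frequency (MDF) of the signal's power spectrum. A minimal sketch using a plain FFT periodogram; the windowing, filtering and epoching used in the thesis's actual processing are not specified here and are omitted:

```python
import numpy as np

# Median frequency (MDF) of an sEMG power spectrum: the frequency below which
# half of the spectral power lies. MDF shifts downward as a muscle fatigues
# ("spectral compression"). Plain periodogram sketch only.
def median_frequency(emg, fs):
    emg = np.asarray(emg, dtype=float)
    emg = emg - emg.mean()                       # remove DC offset
    psd = np.abs(np.fft.rfft(emg)) ** 2          # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(emg), d=1.0 / fs)
    cum = np.cumsum(psd)
    return freqs[np.searchsorted(cum, cum[-1] / 2.0)]

# Synthetic check: a pure 80 Hz tone sampled at 1 kHz has an MDF near 80 Hz;
# a "fatigued" signal concentrated at lower frequencies yields a lower MDF.
fs = 1000.0
t = np.arange(1000) / fs
print(median_frequency(np.sin(2 * np.pi * 80.0 * t), fs))
```

Tracking MDF over successive contractions is what yields the biphasic fatigue pattern and transition point discussed in Study 1.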
Resumo:
Ecologically sustainable development (ESD), defined as development that meets the needs of the present without compromising the ability of future generations to meet their own needs, has much to offer in enhancing people's quality of life and maintaining the environment for future generations: it reduces the pollution of water, air and land, minimizes the destruction of irreplaceable ecosystems and cuts the amount of toxic materials released. However, much remains to be done to achieve full implementation world-wide. This paper reports on three factors – design, attitudes and financial constraints – that are likely barriers to the implementation of ESD within the built environment in Australian industry. A postal questionnaire survey aimed at soliciting views on detailed aspects of these factors is described. The survey shows that ESD in the Australian built environment has not been successfully implemented. The main reason is found to be the perceived costs involved, with the cost of using environmental materials a predominant factor. The more sophisticated design that ESD requires is also perceived as involving stakeholders in more expense. There also appears to be a lack of knowledge and a shortage of specialised and interdisciplinary design teams in the Australian context.
Resumo:
This article considers the concept of media citizenship in relation to the digital strategies of the Special Broadcasting Service (SBS), Australia’s multicultural public broadcaster. It offers a critical appraisal of SBS’s strategies to harness user-created content (UCC) and social media to promote greater audience participation through its news and current affairs Web sites. The article examines the opportunities and challenges that user-created content presents for public service media organizations as they consolidate multiplatform service delivery, and analyses the implications of radio and television broadcasters’ moves to develop online services. It is proposed that case study methodologies enable an understanding of media citizenship that maintains a focus on the interaction between delivery technologies, organizational structures and cultures, and program content – a focus essential for understanding the changing role of 21st-century public service media.