547 results for Low-cycle fatigue
Abstract:
A novel technique was used to measure emission factors for commonly used commercial aircraft, including a range of Boeing and Airbus airframes, under real-world conditions. Engine exhaust emission factors for particles, in terms of particle number and mass (PM2.5), along with those for CO2 and NOx, were measured for over 280 individual aircraft during the various modes of the landing/takeoff (LTO) cycle. Results from this study show that particle number and NOx emission factors depend on aircraft engine thrust level. Minimum and maximum emission factors for particle number, PM2.5, and NOx emissions were found to be in the range of 4.16×10¹⁵–5.42×10¹⁶ kg⁻¹, 0.03–0.72 g kg⁻¹, and 3.25–37.94 g kg⁻¹, respectively, across all measured airframes and LTO cycle modes. Number size distributions of emitted particles for the naturally diluted aircraft plumes in each mode of the LTO cycle showed that particles were predominantly in the range of 4 to 100 nm in diameter in all cases. In general, size distributions exhibited similar modality during all phases of the LTO cycle. A very distinct nucleation mode was observed in all particle size distributions, except for taxiing and landing of A320 aircraft. Accumulation modes were also observed in all particle size distributions. Analysis of aircraft engine emissions during the LTO cycle showed that aircraft thrust level is considerably higher during taxiing than during idling, suggesting that International Civil Aviation Organization (ICAO) standards need to be modified, as the thrust levels for taxi and idle are currently assumed to be the same (7% of total thrust) [1].
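The abstract does not spell out how the fuel-normalised emission factors were obtained. A common approach for in-plume measurements of this kind is the carbon-balance method, in which each pollutant's background-corrected concentration is ratioed to that of CO2. The sketch below illustrates the calculation with hypothetical plume values; it should not be read as the study's actual procedure.

```python
# Carbon-balance (fuel-based) emission factor: a minimal sketch.
# Assumes the standard figure of ~3.16 kg of CO2 produced per kg of
# jet fuel burned; concentrations are background-corrected plume
# measurements (hypothetical numbers, not from the study).

EI_CO2 = 3160.0  # g of CO2 emitted per kg of fuel burned

def emission_factor(delta_x, delta_co2, ei_co2=EI_CO2):
    """Emission factor of species X per kg of fuel burned.

    delta_x   -- background-corrected concentration of X in the plume
    delta_co2 -- background-corrected CO2 concentration (same basis)
    """
    return (delta_x / delta_co2) * ei_co2

# Example: a hypothetical plume ratio of 0.005 g NOx per g CO2
print(emission_factor(0.005, 1.0))  # ~15.8 g NOx per kg fuel
```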
Abstract:
Ordinary desktop computers continue to obtain ever more resources – increased processing power, memory, network speed and bandwidth – yet these resources spend much of their time underutilised. Cycle stealing frameworks harness these resources so they can be used for high-performance computing. Traditionally, cycle stealing systems have used client-server architectures, which place significant limits on their ability to scale and the range of applications they can support. By applying a fully decentralised network model to cycle stealing, the limits of centralised models can be overcome. Using decentralised networks in this manner presents some difficulties which have not been encountered in their previous uses. Generally, decentralised applications do not require any significant fault tolerance guarantees. High-performance computing, on the other hand, requires very stringent guarantees to ensure correct results are obtained. Unfortunately, mechanisms developed for traditional high-performance computing cannot simply be translated because of their reliance on a reliable storage mechanism. In the highly dynamic world of P2P computing this reliable storage is not available. As part of this research a fault tolerance system has been created which provides considerable reliability without the need for persistent storage. As well as increased scalability, fully decentralised networks offer the ability for volunteers to communicate directly. This ability raises the possibility of supporting applications whose tasks require direct, message-passing style communication. Previous cycle stealing systems have only supported embarrassingly parallel applications and applications with limited forms of communication, so a new programming model has been developed which can support this style of communication within a cycle stealing context. In this thesis I present a fully decentralised cycle stealing framework. The framework addresses the problems of providing a reliable fault tolerance system and supporting direct communication between parallel tasks. The thesis includes a programming model for developing cycle stealing applications with direct inter-process communication and methods for optimising object locality on decentralised networks.
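The abstract leaves the fault tolerance mechanism unspecified. One widely used technique in volunteer computing that requires no persistent storage is redundant task execution with majority voting; the sketch below illustrates that general idea with a toy failure model and hypothetical names, not the thesis's actual protocol.

```python
import random
from collections import Counter

# Fault tolerance without persistent storage: replicate each task
# across several volunteers and accept the majority result. The
# failure model here is a toy (volunteers silently drop out).

def run_on_volunteer(task, fail_rate=0.2):
    """Simulate a volunteer that occasionally leaves mid-task."""
    if random.random() < fail_rate:
        return None          # volunteer left the network
    return task()            # honest volunteers compute the result

def replicated(task, replicas=3):
    """Dispatch several copies and take the majority answer."""
    results = [run_on_volunteer(task) for _ in range(replicas)]
    votes = Counter(r for r in results if r is not None)
    if not votes:
        return replicated(task, replicas)  # all replicas lost: retry
    return votes.most_common(1)[0][0]

print(replicated(lambda: 2 + 2))  # 4, despite unreliable volunteers
```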
Abstract:
This paper traces the history of store (retailer-controlled) and national (manufacturer-controlled) brands; identifies the key historical characteristics of the past 200 years of marketing history; describes the four main time periods of U.S. retail marketing (1800–2000); and comments on the most likely developments within the current phases of brand marketing. Will the future focus on technology and new forms of communications? The Internet exemplifies an unconventional retailing environment, with e-tailer numbers growing rapidly. The central proposition of this paper is that a "cycle of control" – a pattern of marketing developments within the history of retailing and national marketing communications – can indicate the success of marketing strategies in the future.
Abstract:
For robots to operate in human environments, they must be able to make their own maps: it is unrealistic to expect a user to enter a map into the robot's memory; existing floorplans are often incorrect; and human environments tend to change. Traditionally, robots have used sonar, infra-red or laser range finders to perform the mapping task. Digital cameras have become very cheap in recent years and have opened up new possibilities as a sensor for robot perception. Any robot that must interact with humans can reasonably be expected to have a camera for tasks such as face recognition, so it makes sense to also use the camera for navigation. Cameras have advantages over other sensors, such as colour information (not available with any other sensor), better immunity to noise (compared to sonar), and not being restricted to operating in a plane (unlike laser range finders). However, there are disadvantages too, the principal one being the effect of perspective. This research investigated ways to use a single colour camera as a range sensor to guide an autonomous robot and allow it to build a map of its environment, a process referred to as Simultaneous Localization and Mapping (SLAM). An experimental system was built using a robot controlled via a wireless network connection. Using the on-board camera as the only sensor, the robot successfully explored and mapped indoor office environments. The quality of the resulting maps is comparable to those that have been reported in the literature for sonar or infra-red sensors. Although the maps are not as accurate as ones created with a laser range finder, the solution using a camera is significantly cheaper and is more appropriate for toys and early domestic robots.
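The abstract does not describe the map representation. A common choice for range-based mapping of this kind is an occupancy grid updated in log-odds form; the sketch below shows that technique with illustrative geometry and parameters, which are not taken from the thesis.

```python
import numpy as np

# Minimal occupancy-grid sketch: one common way to turn range
# estimates (here, from a camera-based range sensor) into a map.
# Grid size, resolution and log-odds increments are illustrative.

GRID = np.zeros((100, 100))   # log-odds occupancy, 0 = unknown
CELL = 0.05                   # metres per cell
L_OCC, L_FREE = 0.9, -0.4     # log-odds increments

def integrate_ray(pose, bearing, dist):
    """Mark cells along a ray as free, and the endpoint as occupied."""
    x, y, theta = pose
    for d in np.arange(0.0, dist, CELL):
        cx = int((x + d * np.cos(theta + bearing)) / CELL)
        cy = int((y + d * np.sin(theta + bearing)) / CELL)
        if 0 <= cx < 100 and 0 <= cy < 100:
            GRID[cy, cx] += L_FREE    # ray passed through: likely free
    ex = int((x + dist * np.cos(theta + bearing)) / CELL)
    ey = int((y + dist * np.sin(theta + bearing)) / CELL)
    if 0 <= ex < 100 and 0 <= ey < 100:
        GRID[ey, ex] += L_OCC         # ray ended here: likely occupied

integrate_ray((2.5, 2.5, 0.0), bearing=0.1, dist=1.5)
```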
Abstract:
PURPOSE: To explore the effects of glaucoma and aging on low-spatial-frequency contrast sensitivity by using tests designed to assess performance of either the magnocellular (M) or parvocellular (P) visual pathways. METHODS: Contrast sensitivity was measured for spatial frequencies of 0.25 to 2 cyc/deg by using a published steady- and pulsed-pedestal approach. Sixteen patients with glaucoma and 16 approximately age-matched control subjects participated. Patients with glaucoma were tested foveally and at two midperipheral locations: (1) an area of early visual field loss, and (2) an area of normal visual field. Control subjects were assessed in matched locations. An additional group of 12 younger control subjects (aged 20-35 years) was also tested. RESULTS: Older control subjects demonstrated reduced sensitivity relative to the younger group for the steady (presumed M)- and pulsed (presumed P)-pedestal conditions. Sensitivity was reduced foveally and in the midperiphery across the spatial frequency range. In the area of early visual field loss, the glaucoma group demonstrated further sensitivity reduction relative to older control subjects across the spatial frequency range for both the steady- and pulsed-pedestal tasks. Sensitivity was also reduced in the midperipheral location of "normal" visual field for the pulsed condition. CONCLUSIONS: Normal aging results in a reduction of contrast sensitivity for the low-spatial-frequency-sensitive components of both the M and P pathways. Glaucoma results in a further reduction of sensitivity that is not selective for M or P function. The low-spatial-frequency-sensitive channels of both pathways, which are presumably mediated by cells with larger receptive fields, are approximately equivalently impaired in early glaucoma.
Abstract:
Carrier frequency offset (CFO) and I/Q mismatch can cause significant performance degradation in OFDM systems. Their estimation and compensation are generally difficult because they are entangled in the received signal. In this paper, we propose low-complexity estimation and compensation schemes for the receiver which are robust to a wide range of CFO and I/Q mismatch values, although performance degrades slightly for very small CFO. These schemes consist of three steps: forming a cosine estimator free of I/Q mismatch interference, estimating the I/Q mismatch using the estimated cosine value, and forming a sine estimator using samples taken after I/Q mismatch compensation. The estimators are based on the observation that an estimate of the cosine serves much better as the basis for I/Q mismatch estimation than an estimate of the CFO derived from the cosine function. Simulation results show that the proposed schemes improve system performance significantly and are robust to CFO and I/Q mismatch.
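The paper's cosine and sine estimators are not reproduced in the abstract. For context, the sketch below shows a standard correlation-based CFO estimate from a repeated training symbol (a Moose-style baseline); unlike the proposed scheme, it does nothing about I/Q mismatch.

```python
import numpy as np

# Baseline CFO estimation from a repeated training symbol. The phase
# advance between the two identical halves over N samples is
# 2*pi*eps, so eps is recovered from the angle of their correlation.

N = 64                         # symbol length
eps_true = 0.08                # normalised CFO (fraction of
                               # subcarrier spacing)
rng = np.random.default_rng(0)

sym = rng.choice([1, -1], N) + 0j          # one training symbol
tx = np.concatenate([sym, sym])            # transmit it twice
n = np.arange(2 * N)
rx = tx * np.exp(2j * np.pi * eps_true * n / N)     # apply CFO
rx += 0.05 * (rng.standard_normal(2 * N)
              + 1j * rng.standard_normal(2 * N))    # add noise

corr = np.vdot(rx[:N], rx[N:])             # sum of conj(first)*second
eps_hat = np.angle(corr) / (2 * np.pi)
print(eps_hat)                             # close to 0.08
```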
Abstract:
The construction industry has adopted information technology in its processes in terms of computer-aided design and drafting, construction documentation and maintenance. The data generated within the construction industry has become increasingly overwhelming. Data mining is a sophisticated data search capability that uses classification algorithms to discover patterns and correlations within a large volume of data. This paper presents the selection and application of data mining techniques to the maintenance data of buildings. The results of applying such techniques, and the potential benefits of using them to identify useful patterns of knowledge and correlations to support decisions that improve the management of the building life cycle, are presented and discussed.
Abstract:
The building life cycle process is complex and prone to fragmentation as it moves through its various stages. The number of participants, and the diversity, specialisation and isolation both in space and time of their activities, have dramatically increased over time. The data generated within the construction industry has become increasingly overwhelming. Most currently available computer tools for the building industry offer productivity improvements in the transmission of graphical drawings and textual specifications, without addressing more fundamental changes in building life cycle management. Facility managers and building owners are primarily concerned with highlighting areas of existing or potential maintenance problems in order to improve building performance, satisfy occupants and minimise turnover, especially the operational cost of maintenance. In doing so, they collect large amounts of data that are stored in the building's maintenance database. The work described in this paper is targeted at adding value to the design and maintenance of buildings by turning maintenance data into information and knowledge. Data mining technology presents an opportunity to significantly increase the rate at which the volumes of data generated through the maintenance process can be turned into useful information. This can be done using classification algorithms to discover patterns and correlations within a large volume of data. This paper presents how and which data mining techniques can be applied to the maintenance data of buildings to identify the impediments to better performance of building assets, and demonstrates what sorts of knowledge can be found in maintenance records. The benefits to the construction industry lie in turning passive data in databases into knowledge that can improve the efficiency of the maintenance process and of future designs that incorporate that maintenance knowledge.
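Neither abstract names a specific classification algorithm. As one concrete possibility, the sketch below trains a decision tree on a handful of hypothetical maintenance records; the field names and values are invented for illustration and do not reflect the papers' actual schema.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Classification over building maintenance records: which attributes
# of a work order predict a repeat failure? Schema is hypothetical.

records = pd.DataFrame({
    "asset_type":   ["HVAC", "lift", "HVAC", "roof", "lift", "HVAC"],
    "building_age": [12, 30, 5, 22, 28, 15],    # years
    "last_cost":    [800, 4500, 300, 2600, 5100, 950],  # dollars
    "recurrent":    [0, 1, 0, 1, 1, 0],         # target: repeat failure?
})

X = pd.get_dummies(records[["asset_type", "building_age", "last_cost"]])
clf = DecisionTreeClassifier(max_depth=3).fit(X, records["recurrent"])

# Which fields drive repeat failures? (illustrative output only)
for name, imp in zip(X.columns, clf.feature_importances_):
    if imp > 0:
        print(f"{name}: {imp:.2f}")
```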
Abstract:
From an initial sample of 747 primary school students, the top 16 percent (n = 116) with high self-esteem (HSE) and the bottom 15 percent (n = 111) with low self-esteem (LSE) were selected. These two groups were then compared on personal and classroom variables. Significant differences were found for all personal (self-talk, self-concepts) and classroom (teacher feedback, praise, teacher-student relationship, and classroom environment) variables. Students with HSE scored more highly on all variables. Discriminant Function Analysis (DFA) was then used to determine which variables discriminated between these two groups of students. Learner self-concept, positive and negative self-talk, classroom environment, and effort feedback were the best discriminators of students with high and low self-esteem. Implications for educational psychologists and teachers are discussed.
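Discriminant Function Analysis as used here corresponds to linear discriminant analysis. The sketch below runs LDA on simulated scores for three of the reported variables; the data are invented and only illustrate the mechanics of the method.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# LDA on simulated questionnaire scores, not the study's data.
rng = np.random.default_rng(1)
n = 100
# columns: learner self-concept, positive self-talk, negative self-talk
hse = rng.normal([4.0, 4.2, 1.8], 0.5, size=(n, 3))  # high self-esteem
lse = rng.normal([2.8, 3.0, 3.1], 0.5, size=(n, 3))  # low self-esteem

X = np.vstack([hse, lse])
y = np.array([1] * n + [0] * n)

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.coef_)        # variable weights on the discriminant function
print(lda.score(X, y))  # proportion of students correctly classified
```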
Abstract:
The report presents a methodology for whole-of-life-cycle cost analysis of alternative treatment options for bridge structures that require rehabilitation. The methodology was developed after a review of current methods, which established that a life cycle analysis based on a probabilistic risk approach has many advantages, including the essential ability to consider the variability of input parameters. The input parameters for the analysis are identified as initial cost; maintenance, monitoring and repair cost; user cost; and failure cost. The methodology uses Monte Carlo simulation to combine a number of probability distributions and so establish the distribution of whole-of-life-cycle cost. In performing the simulation, the need for a powerful software package that works with a spreadsheet program was identified. After exploring several products on the market, the @RISK software was selected for the simulation. In conclusion, the report presents a typical decision-making scenario considering two alternative treatment options.
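As a concrete illustration of the approach, the sketch below combines assumed probability distributions for the cost components by Monte Carlo simulation, in the spirit of (but independent of) the report's @RISK spreadsheet model; all distributions and figures are hypothetical.

```python
import numpy as np

# Monte Carlo whole-of-life-cycle cost: sample each cost component
# from an assumed distribution and sum. No discounting, for brevity.
rng = np.random.default_rng(42)
N = 100_000

initial  = rng.triangular(0.8e6, 1.0e6, 1.4e6, N)  # rehabilitation cost
maintain = rng.lognormal(np.log(50e3), 0.3, N)     # annual maintenance
user     = rng.uniform(10e3, 60e3, N)              # annual user cost
p_fail   = 0.02                                    # chance of a failure event
failure  = rng.binomial(1, p_fail, N) * 5e6        # failure cost if it occurs

total = initial + 20 * (maintain + user) + failure  # 20-year horizon

print(f"mean LCC:        ${total.mean():,.0f}")
print(f"95th percentile: ${np.percentile(total, 95):,.0f}")
```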
Abstract:
Background: Although low back pain (LBP) is an important issue for the health professions, few studies have examined LBP among occupational therapy students. Purpose: To investigate the prevalence and distribution of LBP and its adverse sequelae, and to identify potential risk factors. Methods: In 2005, a self-reported questionnaire was administered to occupational therapy students in Northern Queensland. Findings: The 12-month period prevalence of LBP was 64.6%. Nearly half (46.9%) had experienced pain for over 2 days, 38.8% suffered LBP that affected their daily lives, and 24.5% had sought medical treatment. The prevalence of LBP ranged from 45.5% to 77.1% (p = 0.004), while the prevalence of LBP symptoms persisting longer than two days ranged from 34.1% to 62.5% (p = 0.020). Logistic regression analysis indicated that year of study and weekly computer usage were statistically significant LBP risk factors. Implications: The occupational therapy profession will need to further investigate the high prevalence of student LBP identified in this study.
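The sketch below illustrates the reported risk-factor analysis: a logistic regression of LBP status on year of study and weekly computer use. The data are simulated to mimic the reported direction of effects and are not the study's dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Logistic regression of 12-month LBP (yes/no) on two predictors.
rng = np.random.default_rng(3)
n = 300
year  = rng.integers(1, 5, n)       # year of study, 1-4
hours = rng.uniform(0, 40, n)       # weekly computer use, hours

# Simulate outcomes with positive effects for both predictors.
logit = -2.0 + 0.4 * year + 0.05 * hours
p = 1 / (1 + np.exp(-logit))
lbp = rng.binomial(1, p)            # 0/1 outcome

model = LogisticRegression().fit(np.column_stack([year, hours]), lbp)
print(np.exp(model.coef_))          # odds ratios per unit increase
```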
Abstract:
In the design of bridge structures, it is common to adopt a 100-year design life. However, analysis of a number of case study bridges in Australia has indicated that the actual design life can be significantly reduced by premature deterioration resulting from exposure to aggressive environments. A closer analysis of the cost of rehabilitating these structures has raised some interesting questions. What is the real service life of a bridge exposed to certain aggressive environments? What strategy should be adopted for bridge rehabilitation? And what are the life cycle costs associated with rehabilitation? A research project funded by the CRC for Construction Innovation in Australia is aimed at addressing these issues. This paper presents a concept map for assisting decision makers to choose the most appropriate treatment for bridges affected by premature deterioration through exposure to aggressive environments in Australia. The decision analysis is based on a whole-of-life-cycle cost analysis that considers the appropriate elements of bridge rehabilitation costs. In addition, the results of bridge inspections in Queensland are presented.
Abstract:
Biological tissues are subjected to complex loading states in vivo, and in order to define constitutive equations that effectively simulate their mechanical behaviour under these loads, it is necessary to obtain data on the tissue's response to multiaxial loading. Single-axis and shear testing of biological tissues is often carried out, but biaxial testing is less common. We sought to design and commission a biaxial compression testing device capable of obtaining repeatable data for biological samples. The apparatus comprised a sealed stainless steel pressure vessel specifically designed such that a state of hydrostatic compression could be created on the test specimen while simultaneously unloading the sample along one axis with an equilibrating tensile pressure. Thus a state of equibiaxial compression was created perpendicular to the long axis of a rectangular sample. For the purpose of calibrating and commissioning the vessel, rectangular samples of closed-cell ethylene vinyl acetate (EVA) foam were tested. Each sample was subjected to repeated loading, and nine separate biaxial experiments were carried out to a maximum pressure of 204 kPa (30 psi), with a relaxation time of two hours between them. Calibration testing demonstrated that the force applied to the samples had a maximum error of 0.026 N (0.423% of the maximum applied force). Under repeated loading, the foam sample demonstrated lower stiffness during the first load cycle; following this cycle, a stiffer, repeatable response was observed with successive loading. While the experimental protocol was developed for EVA foam, preliminary results on this material suggest that the device may be capable of providing test data for biological tissue samples. The load response of the foam was characteristic of closed-cell foams, with consolidation during the early loading cycles, then a repeatable load-displacement response upon repeated loading. The repeatability of the test results demonstrated the ability of the device to provide reproducible data, and the low experimental error in the force demonstrated the reliability of the test data.
Abstract:
Queensland Department of Main Roads, Australia, spends approximately A$1 billion annually on road infrastructure asset management. To manage road infrastructure effectively, road agencies first need to optimise the expenditure for data collection without jeopardising the reliability of using the optimised data to predict maintenance and rehabilitation costs. Secondly, road agencies need to predict the deterioration rates of infrastructure accurately, reflecting local conditions, so that budgets can be estimated reliably. Finally, the predicted budgets for maintenance and rehabilitation must carry a known degree of reliability. This paper presents the results of case studies using a probability-based method for an integrated approach: assessing the optimal cost of pavement strength data collection; calibrating deterioration prediction models to suit local conditions; and assessing risk-adjusted budget estimates for road maintenance and rehabilitation over the life cycle. The probabilistic approach opens the path to life-cycle maintenance and rehabilitation budget estimates with a known probability of success (e.g. a project life-cycle cost estimate with a 5% probability of being exceeded). The paper also presents a conceptual decision-making framework, in the form of risk mapping, in which the life-cycle budget/cost investment can be considered in conjunction with social, environmental and political issues.
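Two steps of the integrated approach lend themselves to a short illustration: calibrating a deterioration model to local condition data, and reading a risk-adjusted budget off a simulated cost distribution at a chosen exceedance probability. The model form, data and costs below are all hypothetical; agencies typically calibrate standard models (e.g. HDM-4) rather than this toy fit.

```python
import numpy as np

# Calibrate a power-law roughness progression to observed condition
# data by least squares on the log-log form: IRI = a * age**b.
age = np.array([2, 5, 8, 12, 15, 20])               # pavement age, years
iri = np.array([2.1, 2.4, 2.9, 3.8, 4.6, 6.2])      # observed roughness

b, log_a = np.polyfit(np.log(age), np.log(iri), 1)  # slope, intercept
a = np.exp(log_a)
print(f"IRI ~ {a:.2f} * age^{b:.2f}")

# Risk-adjusted budget: simulate costs under the calibrated model and
# take the 95th percentile (a 5% probability of being exceeded).
rng = np.random.default_rng(7)
cost = 1e5 * a * (20 ** b) * rng.lognormal(0, 0.25, 50_000)
print(f"budget at 5% exceedance: ${np.percentile(cost, 95):,.0f}")
```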