Abstract:
Patent systems around the world are being pressed to recognise and protect challenging new subject matter in order to keep pace with the rapid technological advancement of our age and the shift into the era of the 'knowledge economy'. This rapid development, and the pressure to expand the bounds of what has traditionally been recognised as patentable subject matter, has created uncertainty about what it is that the patent system is actually supposed to protect. Among other things, the patent system has had to contend with uncertainty surrounding claims to horticultural and agricultural methods, artificial living micro-organisms, methods of treating the human body, computer software and business methods. The contentious issue of the moment is one at whose heart lies the important distinction between a mere abstract idea and an invention properly deserving of the monopoly protection afforded by a patent. That question is whether purely intangible inventions, being methods that involve no physical aspect or effect and cause no physical transformation of matter, constitute patentable subject matter. This paper goes some way towards addressing these uncertainties by considering how the Australian approach to the question can be informed by developments arising in the United States of America, and by canvassing some of the lessons we in Australia might learn from the approaches taken thus far in the United States.
Abstract:
Transport regulators consider that, with respect to pavement damage, heavy vehicles (HVs) are the riskiest vehicles on the road network. That HV suspension design contributes to road and bridge damage has been recognised for some decades. This thesis deals with some aspects of HV suspension characteristics, particularly (but not exclusively) air suspensions, in the areas of developing low-cost in-service HV suspension testing, the effects of larger-than-industry-standard longitudinal air lines, and the characteristics of on-board mass (OBM) systems for HVs. All these areas, whilst seemingly disparate, seek to inform the management of HVs, reduce their impact on the network asset and/or provide a measurement mechanism for worn HV suspensions. A number of project management groups at the State and National level in Australia have been, and will be, presented with the results of the project behind this thesis, which should inform their activities applicable to this research. A number of HVs were tested for various characteristics, and these tests were used to form a number of conclusions about HV suspension behaviours. Wheel forces from road test data were analysed. A "novel roughness" measure was developed and applied to the road test data to determine dynamic load sharing, amongst other research outcomes. Further, it was proposed that this approach could inform future development of pavement models incorporating roughness and peak wheel forces. Left/right variations in wheel forces and wheel force variations for different speeds were also presented. This led to conclusions regarding suspension and wheel force frequencies, their transmission to the pavement, and repetitive wheel loads in the spatial domain. An improved method of determining dynamic load sharing was developed and presented: it uses the correlation coefficient between two elements of a HV to determine dynamic load sharing, and was validated against a mature dynamic load-sharing metric, the dynamic load sharing coefficient (de Pont, 1997). This was the first time that the technique of measuring correlation between elements on a HV had been used for a test case vs. a control case for two different sized air lines. Dynamic load sharing at the air springs was shown to improve for the test case of the large longitudinal air lines; the statistically significant improvement in dynamic load sharing at the air springs from larger longitudinal air lines varied from approximately 30 percent to 80 percent. Dynamic load sharing at the wheels was improved only for low air line flow events for the test case of larger longitudinal air lines. Statistically significant improvements to some suspension metrics across the range of test speeds and "novel roughness" values were evident from the use of larger longitudinal air lines, but these were not uniform. Of note were improvements to suspension metrics involving peak dynamic forces, ranging from below the error margin to approximately 24 percent. Abstract models of HV suspensions were developed from the results of some of the tests. Those models were used to propose further development of, and future directions of research into, further gains in HV dynamic load sharing, through alterations to currently available damping characteristics combined with implementation of large longitudinal air lines.
In-service testing of HV suspensions was found to be possible within a documented range from below the error margin to an error of approximately 16 percent. These results were in comparison with either the manufacturer’s certified data or test results replicating the Australian standard for “road-friendly” HV suspensions, Vehicle Standards Bulletin 11. OBM accuracy testing and development of tamper evidence from OBM data were detailed for over 2000 individual data points across twelve test and control OBM systems from eight suppliers installed on eleven HVs. The results indicated that 95 percent of contemporary OBM systems available in Australia are accurate to +/- 500 kg. The total variation in OBM linearity, after three outliers in the data were removed, was 0.5 percent. A tamper indicator and other OBM metrics that could be used by jurisdictions to determine tamper events were developed and documented. That OBM systems could be used as one vector for in-service testing of HV suspensions was one of a number of synergies between the seemingly disparate streams of this project.
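Although the thesis's implementation is not reproduced in this abstract, the correlation-based load-sharing idea can be sketched compactly. The following Python fragment is a hypothetical illustration: it computes the correlation coefficient between two wheel-force signals as a load-sharing indicator, alongside a simplified dynamic load sharing coefficient in the spirit of de Pont (1997). The synthetic signals and all parameter values are assumptions, not test data.

```python
import numpy as np

def correlation_load_sharing(force_a: np.ndarray, force_b: np.ndarray) -> float:
    """Correlation coefficient between two wheel-force signals.

    Values near 1.0 indicate the two elements rise and fall together,
    i.e. good dynamic load sharing.
    """
    return float(np.corrcoef(force_a, force_b)[0, 1])

def dynamic_load_sharing_coefficient(forces: np.ndarray) -> np.ndarray:
    """Simplified per-sample load-sharing coefficient (after de Pont, 1997).

    forces: array of shape (n_axles, n_samples) of instantaneous wheel forces.
    Returns each axle's share of the instantaneous group load, scaled by the
    number of axles, so perfect sharing gives 1.0 for every axle.
    """
    n_axles = forces.shape[0]
    group_load = forces.sum(axis=0)
    return n_axles * forces / group_load

# Illustrative use with synthetic signals (not measured data):
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 1000)
base = 45e3 + 5e3 * np.sin(2 * np.pi * 1.5 * t)   # ~1.5 Hz body-bounce component
axle_1 = base + rng.normal(0.0, 1e3, t.size)
axle_2 = base + rng.normal(0.0, 1e3, t.size)

print(f"correlation-based load sharing: {correlation_load_sharing(axle_1, axle_2):.3f}")
dlsc = dynamic_load_sharing_coefficient(np.vstack([axle_1, axle_2]))
print(f"mean coefficient per axle: {dlsc.mean(axis=1)}")
```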
Abstract:
An Asset Management (AM) life-cycle constitutes a set of processes that align with the development, operation and maintenance of assets, in order to meet the desired requirements and objectives of the stakeholders of the business. The scope of AM is often broad within an organisation due to the interactions between its internal elements such as human resources, finance, technology, engineering operation, information technology and management, as well as external elements such as governance and environment. Due to the complexity of AM processes, it has been proposed that process modelling initiatives should be adopted in order to optimise asset management activities. Although organisations adopt AM principles and carry out AM initiatives, most do not document or model their AM processes, let alone enact them (semi-)automatically using a computer-supported system. There is currently a lack of knowledge describing how to model AM processes in a methodical and suitable manner so that the processes are streamlined, optimised and ready for deployment in a computerised way. This research aims to overcome this deficiency by developing an approach that will aid organisations in constructing AM process models quickly and systematically whilst using the most appropriate techniques, such as workflow technology. Currently, there is a wealth of information within the individual domains of AM and workflow. Both fields are gaining significant popularity in many industries, fuelling the need for research exploring the possible benefits of their cross-disciplinary application. This research therefore investigates these two domains to exploit the application of workflow to the modelling and execution of AM processes. Specifically, it investigates appropriate methodologies for applying workflow techniques to AM frameworks. One of the benefits of applying workflow models to AM processes is the ability to accommodate both ad-hoc and evolutionary changes over time. In addition, workflow can automate an AM process as well as support the coordination and collaboration of the people involved in carrying it out. A workflow management system (WFMS) can be used to support the design and enactment (i.e. execution) of processes and cope with changes that occur to a process during enactment. So far, little literature documents a systematic approach to modelling the characteristics of AM processes. In order to obtain a workflow model for AM processes, commonalities and differences between different AM processes need to be identified; this is the fundamental step in developing a sound workflow model for AM processes. Therefore, the first stage of this research focuses on identifying the characteristics of AM processes, especially AM decision making processes. The second stage is to review a number of contemporary workflow techniques and choose a suitable technique for application to AM decision making processes. The third stage is to develop an intermediate, ameliorated AM decision process definition that improves the current process description and is ready for modelling using the workflow language selected in the previous stage. All these lead to the fourth stage, where a workflow model for an AM decision making process is developed. The process model is then deployed (semi-)automatically in a state-of-the-art WFMS, demonstrating the benefits of applying workflow technology to the domain of AM.
Given that the information in the AM decision making process is captured at an abstract level within the scope of this work, the deployed process model can be used as an executable guideline for carrying out an AM decision process in practice. Moreover, it can be used as a vanilla system that, once enriched with information from a specific AM decision making process (e.g. in the case of a building construction or a power plant maintenance), is able to support the automation of such a process in a more elaborate way.
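The WFMS and workflow language used are not named in this abstract. As a language-neutral sketch of what enacting an abstract AM decision process definition might involve, the hypothetical Python fragment below executes a toy process as a sequence of guarded tasks; all task names and the guard threshold are invented for illustration, and a real WFMS would of course offer far richer routing, roles, and change support.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Task:
    name: str
    action: Callable[[dict], None]                      # mutates the shared case data
    ready: Callable[[dict], bool] = lambda data: True   # guard condition

@dataclass
class Workflow:
    tasks: List[Task] = field(default_factory=list)

    def enact(self, case_data: dict) -> dict:
        """Execute tasks in order, skipping any whose guard is not satisfied."""
        for task in self.tasks:
            if task.ready(case_data):
                task.action(case_data)
        return case_data

# Hypothetical AM decision process: assess asset condition, then choose
# between maintenance and replacement based on the assessment.
wf = Workflow(tasks=[
    Task("assess_condition", lambda d: d.update(condition=0.4)),
    Task("plan_maintenance", lambda d: d.update(decision="maintain"),
         ready=lambda d: d["condition"] >= 0.3),
    Task("plan_replacement", lambda d: d.update(decision="replace"),
         ready=lambda d: d["condition"] < 0.3),
])
print(wf.enact({}))   # {'condition': 0.4, 'decision': 'maintain'}
```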
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean-square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
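The thesis's least-squares estimator for the generalized Gaussian shape parameter is not reproduced in this abstract. A closely related and widely used alternative, matching the ratio of first and second absolute moments of the subband coefficients, gives the flavour of the parameter-estimation step; the sketch below is an illustration under that assumption, not the thesis's algorithm.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def ggd_moment_ratio(beta: float) -> float:
    """Theoretical ratio E|x| / sqrt(E[x^2]) for a zero-mean GGD with shape beta."""
    return gamma(2.0 / beta) / np.sqrt(gamma(1.0 / beta) * gamma(3.0 / beta))

def estimate_ggd_shape(coeffs: np.ndarray) -> float:
    """Estimate the GGD shape parameter of wavelet subband coefficients
    by inverting the moment ratio numerically.

    beta = 2 recovers the Gaussian; smaller beta gives heavier tails.
    """
    r = np.mean(np.abs(coeffs)) / np.sqrt(np.mean(coeffs ** 2))
    return brentq(lambda b: ggd_moment_ratio(b) - r, 0.1, 10.0)

# Sanity check on synthetic Laplacian data (a GGD with beta = 1):
rng = np.random.default_rng(1)
sample = rng.laplace(0.0, 1.0, 100_000)
print(f"estimated shape: {estimate_ggd_shape(sample):.2f}")  # ~1.0
```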
Abstract:
The human-technology nexus is a strong focus of Information Systems (IS) research; however, very few studies have explored this phenomenon in anaesthesia. Anaesthesia has a long history of adoption of technological artifacts, ranging from early apparatus to present-day information systems such as electronic monitoring and pulse oximetry. This prevalence of technology in modern anaesthesia and the rich human-technology relationship provides a fertile empirical setting for IS research. This study employed a grounded theory approach that began with a broad initial guiding question and, through simultaneous data collection and analysis, uncovered a core category of technology appropriation. This emergent basic social process captures a central activity of anaesthetists and is supported by three major concepts: knowledge-directed medicine, complementary artifacts and culture of anaesthesia. The outcomes of this study are: (1) a substantive theory that integrates the aforementioned concepts and pertains to the research setting of anaesthesia and (2) a formal theory, which further develops the core category of appropriation from anaesthesia-specific to a broader, more general perspective. These outcomes fulfill the objective of a grounded theory study, being the formation of theory that describes and explains observed patterns in the empirical field. In generalizing the notion of appropriation, the formal theory is developed using the theories of Karl Marx. This Marxian model of technology appropriation is a three-tiered theoretical lens that examines appropriation behaviours at a highly abstract level, connecting the stages of natural, species and social being to the transition of a technology-as-artifact to a technology-in-use via the processes of perception, orientation and realization. The contributions of this research are two-fold: (1) the substantive model contributes to practice by describing and explaining the human-technology nexus in anaesthesia, and thereby offers potential predictive capabilities for designers and administrators to optimize future appropriations of new anaesthetic technological artifacts; and (2) the formal model contributes to research by drawing attention to the philosophical foundations of appropriation in the work of Marx, and subsequently expanding the current understanding of contemporary IS theories of adoption and appropriation.
Abstract:
Background The purpose of this study was to provide a detailed evaluation of adherence to nutrition supplements by patients with a lower limb fracture. Methods These descriptive data are from 49 nutritionally "at-risk" patients aged 70+ years admitted to the hospital after a fall-related lower limb fracture and allocated to receive supplementation as part of a randomized, controlled trial. Supplementation commenced on day 7 and continued for 42 days. Prescribed volumes aimed to provide 45% of individually estimated theoretical energy requirements, covering the shortfall between literature estimates of energy intake and requirements. The supplement was administered by nursing staff on medication rounds in the acute or residential care settings and supervised through thrice-weekly home visits postdischarge. Results Median daily percent of the prescribed volume of nutrition supplement consumed averaged over the 42 days was 67% (interquartile range [IQR], 31-89, n = 49). There was no difference in adherence for gender, accommodation, cognition, or whether the supplement was self-administered or supervised. Twenty-three participants took some supplement every day, and a further 12 missed <5 days. For these 35 "nonrefusers," adherence was 82% (IQR, 65-93), and they lost on average 0.7% (SD, 4.0%) of baseline weight over the 6 weeks of supplementation compared with a loss of 5.5% (SD, 5.4%) in the "refusers" (n = 14, 29%), p = .003. Conclusions We achieved better volume and energy consumption than previous studies of hip fracture patients but still failed to meet target supplement volumes prescribed to meet 45% of theoretical energy requirements. Clinicians should consider alternative methods of feeding, such as a nasogastric tube, particularly in those patients where adherence to oral nutrition supplements is poor and dietary intake alone is insufficient to meet estimated energy requirements.
Abstract:
In this conversation, Kevin K. Kumashiro shares his reflections on challenges to publishing anti-oppressive research in educational journals. He then invites eight current and former editors of leading educational research journals--William F. Pinar, Elizabeth Graue, Carl A. Grant, Maenette K. P. Benham, Ronald H. Heck, James Joseph Scheurich, Allan Luke, and Carmen Luke--to critique and expand on his analysis. Kumashiro begins the conversation by describing his own experiences submitting manuscripts to educational research journals and receiving comments from anonymous reviewers and journal editors. He suggests three ways to rethink the collaborative potential of the peer-review process: as constructive, as multilensed, and as situated. The eight current and former editors then critique and expand on Kumashiro's analysis. Kumashiro concludes the conversation with additional reflections on barriers and contradictions involved in advancing anti-oppressive educational research in educational journals.
Abstract:
An assessment of the potential of Family Day Care as a nutrition promotion setting in South Australia (Original Research). Daniels, Lynne A.; Franco, Bunny; McWhinnie, Julie-Anne. Nutrition & Dietetics: The Journal of the Dietitians Association of Australia, March 2003. Objective: To assess the potential role of Family Day Care in nutrition promotion for preschool children. Design and setting: A questionnaire to examine nutrition-related issues and practices was mailed to care providers registered in the southern region of Adelaide, South Australia. Care providers also supplied a descriptive, qualitative recall of the food provided by parents or themselves to each child less than five years of age in their care on the day closest to completion of the questionnaire. Subjects: 255 care providers. The response rate was 63% and covered 643 preschool children, mean 4.6 (SD 2.8) children per carer. Results: There was clear agreement that nutrition promotion was a relevant issue for Family Day Care providers. Nutrition and food hygiene knowledge was good, but only 54% of respondents felt confident to address food quality issues with parents. Sixty-five percent of respondents reported non-neutral approaches to food refusal and dawdling (reward, punishment, cajoling) that overrode the child's control of the amount eaten. The food recalls indicated that most children (>75%) were offered fruit at least once. Depending on the hours in care (0 to 4, 5 to 8, greater than 8 hours), 20%, 32% and 55% of children, respectively, were offered milk, and 65%, 82% and 87%, respectively, were offered high fat and sugar foods. Conclusions: Questionnaire responses suggest that many care providers are committed to and proactive in a range of nutrition promotion activities. There is scope for strengthening skills in the management of common problems, such as food refusal and dawdling, consistent with the current evidence for approaches to early feeding management that promote the development of healthy food preferences and eating patterns. Legitimising and empowering care providers in their nutrition promotion role requires clear policies, guidelines, adequate pre- and in-service training, suitable parent materials, and monitoring.
Abstract:
Component software has many benefits, most notably increased software re-use; however, the component software process places heavy burdens on programming language technology, which modern object-oriented programming languages do not address. In particular, software components require specifications that are both sufficiently expressive and sufficiently abstract, and, where possible, these specifications should be checked formally by the programming language. This dissertation presents a programming language called Mentok that provides two novel programming language features enabling improved specification of stateful component roles. Negotiable interfaces are interface types extended with protocols, and allow specification of changing method availability, including some patterns of out-calls and re-entrance. Type layers are extensions to module signatures that allow specification of abstract control flow constraints through the interfaces of a component-based application. Development of Mentok's unique language features included creation of MentokC, the Mentok compiler, and formalization of key properties of Mentok in mini-languages called MentokP and MentokL.
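Mentok's concrete syntax is not shown in this abstract, so no Mentok code can be reproduced here. As a rough analogy only, the hypothetical Python sketch below enforces at run time the kind of changing method availability that negotiable interfaces are said to specify statically through protocols; all class, method, and state names are invented for illustration.

```python
from functools import wraps

class ProtocolError(Exception):
    pass

def available_in(*states):
    """Mark a method as callable only in the given protocol states."""
    def decorate(method):
        @wraps(method)
        def guarded(self, *args, **kwargs):
            if self._state not in states:
                raise ProtocolError(
                    f"{method.__name__} not available in state {self._state!r}")
            return method(self, *args, **kwargs)
        return guarded
    return decorate

class FileLike:
    """Toy component role: read() is only available between open() and close()."""
    def __init__(self):
        self._state = "closed"

    @available_in("closed")
    def open(self):
        self._state = "open"

    @available_in("open")
    def read(self):
        return "data"

    @available_in("open")
    def close(self):
        self._state = "closed"

f = FileLike()
f.open()
print(f.read())   # ok
f.close()
# f.read()        # would raise ProtocolError: read not available in state 'closed'
```

The essential difference, as the abstract describes it, is that Mentok checks such protocols in the programming language itself, at compile time, rather than by run-time guards as above.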
Abstract:
At Flinders University and the Queensland University of Technology, biofuels research interests cover a broad range of activities. Both institutions are seeking to overcome the twin evils of "peak oil" (Hubbert 1949 & 1956) and "global warming" (IPCC 2007, Stern 2006, Alison 2010) through development of Generation 1, 2 and 3 (Gen-1, 2 & 3) biofuels (Clarke 2008, Clarke 2010). This includes development of parallel Chemical Biorefinery, value-added, co-product chemical technologies, which can underpin the commercial viability of the biofuel industry. Whilst there is a focused effort to develop Gen-2 & 3 biofuels, thus avoiding the socially unacceptable use of food-based Gen-1 biofuels, it must also be recognized that, as yet, no country in the world has produced sustainable Gen-2 & 3 biofuel on a commercial basis. For example, in 2008 the United States used 38 billion litres (3.5% of total fuel use) of Gen-1 biofuel; in 2009/2010 this will be 47.5 billion litres (4.5% of fuel use), and by 2018 this has been estimated to rise to 96 billion litres (9% of total US fuel use). Brazil in 2008 produced 24.5 billion litres of ethanol, representing 37.3% of the world's ethanol use for fuel, and Europe in 2008 produced 11.7 billion litres of biofuel (primarily as biodiesel). Compare this to Australia's miserly biofuel production in 2008/2009 of 180 million litres of ethanol and 75 million litres of biodiesel, which is 0.4% of our fuel consumption! (Clarke, Graiver and Habibie 2010) To assist in the development of better biofuel technologies in the Asian developing regions, the Australian Government recently awarded the Materials & BioEnergy Group from Flinders University, in partnership with the Queensland University of Technology, an Australian Leadership Award (ALA) Biofuel Fellowship program to train scientists from Indonesia and India in all facets of advanced biofuel technology.
Abstract:
Biodiesel is a renewable fuel that has been shown to reduce many exhaust emissions, except oxides of nitrogen (NOx), in diesel engine cars. This is of special concern in inner urban areas that are subject to strict environmental regulations, such as the EURO norms. Also, the use of pure biodiesel (B100) is inhibited by its higher NOx emissions compared to petroleum diesel fuel. The aim of the present work is to investigate the effect of the iodine value and cetane number of various biodiesel fuels obtained from different feedstocks on the combustion and NOx emission characteristics of a direct injection (DI) diesel engine. The biodiesel fuels were obtained from various feedstocks such as coconut, palm kernel, mahua (Madhuca indica), pongamia pinnata, jatropha curcas, rice bran, and sesame seed oils. The experimental results show an approximately linear relationship between iodine value and NOx emissions. The biodiesels obtained from coconut and palm kernel showed lower NOx levels than diesel, but the other biodiesels showed an increase in NOx. It was observed that the nature of the fatty acids of the biodiesel fuels had a significant influence on the NOx emissions. The cetane numbers of the biodiesel fuels also affected both the premixed combustion phase and the combustion rate, which in turn affected the amount of NOx formed. It was concluded that NOx emissions are influenced by many parameters of biodiesel fuels, particularly the iodine value and cetane number.
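The reported "approximately linear relationship" is the kind of trend a least-squares fit makes concrete. The sketch below shows the fitting step only; the (iodine value, NOx) pairs are placeholder numbers invented for illustration, not the paper's measurements.

```python
import numpy as np

# Placeholder (iodine value, NOx in g/kWh) pairs -- illustrative only,
# NOT the measurements reported in the paper.
iodine_value = np.array([10.0, 20.0, 55.0, 85.0, 100.0, 110.0, 130.0])
nox = np.array([6.1, 6.3, 6.9, 7.4, 7.7, 7.9, 8.3])

# Least-squares line: NOx = a * iodine_value + b
a, b = np.polyfit(iodine_value, nox, deg=1)
print(f"slope: {a:.4f} g/kWh per iodine unit, intercept: {b:.2f} g/kWh")

# Correlation coefficient as a check on the 'approximately linear' claim
r = np.corrcoef(iodine_value, nox)[0, 1]
print(f"r = {r:.3f}")
```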
Abstract:
The Denial of Service Testing Framework (dosTF) being developed as part of the joint India-Australia research project for ‘Protecting Critical Infrastructure from Denial of Service Attacks’ allows for the construction, monitoring and management of emulated Distributed Denial of Service attacks using modest hardware resources. The purpose of the testbed is to study the effectiveness of different DDoS mitigation strategies and to allow for the testing of defense appliances. Experiments are saved and edited in XML as abstract descriptions of an attack/defense strategy that is only mapped to real resources at run-time. It also provides a web-application portal interface that can start, stop and monitor an attack remotely. Rather than monitoring a service under attack indirectly, by observing traffic and general system parameters, monitoring of the target application is performed directly in real time via a customised SNMP agent.
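The dosTF XML schema is not given in this abstract, so the fragment below is a purely hypothetical sketch of how an abstract attack description might be mapped to real resources at run time; every element, attribute, and host name is invented for illustration.

```python
import xml.etree.ElementTree as ET

# Hypothetical experiment description -- element names are invented,
# not the actual dosTF schema.
EXPERIMENT_XML = """
<experiment name="syn-flood-baseline">
  <target service="http" port="80" host="target-1"/>
  <attack type="syn-flood" rate="5000pps" duration="60s">
    <agent pool="emulated" count="20"/>
  </attack>
  <monitor protocol="snmp" interval="1s"/>
</experiment>
"""

def map_to_resources(xml_text: str, host_map: dict) -> dict:
    """Resolve the abstract description against concrete testbed hosts at run time."""
    root = ET.fromstring(xml_text)
    target = root.find("target")
    attack = root.find("attack")
    return {
        "experiment": root.get("name"),
        "target_host": host_map[target.get("host")],   # abstract name -> real IP
        "attack_type": attack.get("type"),
        "agents": int(attack.find("agent").get("count")),
    }

print(map_to_resources(EXPERIMENT_XML, {"target-1": "10.0.0.17"}))
```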
Abstract:
In this paper, we present the design and construction of a prototype target tracking system. The experimental setup consists of three main modules: one for moving the object, one for detecting its motion, and one for tracking it. The mechanism for moving the object comprises the object itself and two stepper motors with their driving and control circuitry. Detection of the object's motion is realized by a photo switch array. The tracking mechanism consists of a laser beam and two DC servomotors with their associated circuitry. The control algorithm is a standard fuzzy logic controller. The system is designed to operate in two modes such that the roles of target and tracker can be interchanged. Experimental results indicate that the fuzzy controller is capable of controlling the system in both modes.
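The paper's rule base is not given in this abstract, so the following is a generic single-input fuzzy controller sketch: triangular membership functions over the angular tracking error and weighted-average defuzzification. The membership breakpoints and rule outputs are assumptions for illustration, not the paper's values.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with feet at a, c and peak at b."""
    if a < x < b:
        return (x - a) / (b - a)
    if b <= x < c:
        return (c - x) / (c - b)
    return 1.0 if x == b else 0.0

def fuzzy_track_control(error_deg: float) -> float:
    """Map angular tracking error to a normalised motor command via three rules:
    negative error -> turn left, zero -> hold, positive -> turn right."""
    memberships = {
        "neg": tri(error_deg, -30.0, -15.0, 0.0),
        "zero": tri(error_deg, -5.0, 0.0, 5.0),
        "pos": tri(error_deg, 0.0, 15.0, 30.0),
    }
    outputs = {"neg": -1.0, "zero": 0.0, "pos": 1.0}
    total = sum(memberships.values())
    if total == 0.0:
        return 0.0  # error outside the universe of discourse
    # Weighted-average (centroid-of-singletons) defuzzification
    return sum(memberships[k] * outputs[k] for k in memberships) / total

for e in (-20.0, -2.5, 0.0, 10.0):
    print(f"error {e:+6.1f} deg -> command {fuzzy_track_control(e):+.2f}")
```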
Abstract:
As a relatively new approach to signalling, the moving-block scheme significantly increases line capacity, especially on congested railways. This paper describes a simulation system for multi-train operation under the moving-block signalling scheme. The simulator can be used to calculate minimum headways and safety characteristics under pre-set timetables or headways and different geographic and traction conditions. Advanced software techniques are adopted to support flexibility within the simulator, so that it is a general-purpose computer-aided design tool for evaluating the performance of moving-block signalling.
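The simulator itself is not reproduced here. As a back-of-envelope illustration of the minimum-headway calculation such a tool automates, the sketch below derives a moving-block headway from the follower's service braking distance plus a safety margin and the leader's length; the braking rate, train length, and margin are assumed values, not the paper's.

```python
def moving_block_min_headway(speed_mps: float,
                             brake_rate_mps2: float = 0.8,
                             train_length_m: float = 200.0,
                             safety_margin_m: float = 50.0) -> float:
    """Minimum time headway (seconds) under pure moving-block signalling.

    The following train must stay one full service-braking distance
    (plus a margin and the leader's length) behind the leader, so the
    time headway is that separation divided by the line speed.
    Parameter values are illustrative assumptions.
    """
    braking_distance = speed_mps ** 2 / (2.0 * brake_rate_mps2)
    separation = braking_distance + safety_margin_m + train_length_m
    return separation / speed_mps

for v_kmh in (40, 80, 120):
    v = v_kmh / 3.6
    print(f"{v_kmh:3d} km/h -> min headway {moving_block_min_headway(v):5.1f} s")
```

A full simulator of the kind the paper describes layers timetables, traction curves, and track geometry on top of this core separation constraint.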