19 results for Converts

in Queensland University of Technology - ePrints Archive


Relevance: 10.00%

Abstract:

This document provides a review of international and national practices in investment decision support tools for road asset management. Effort was concentrated on identifying the analytic frameworks, evaluation methodologies and criteria adopted by current tools. Emphasis was also given to how current approaches support Triple Bottom Line decision-making. Benefit Cost Analysis and Multiple Criteria Analysis are the principal methodologies supporting decision-making in Road Asset Management. The complexity of their application varies significantly across international practices, and there is continuing discussion amongst practitioners and researchers regarding which is more appropriate for supporting decision-making. It is suggested that the two approaches should be regarded as complementary rather than competing means. Multiple Criteria Analysis may be particularly helpful in early stages of project development, such as strategic planning. Benefit Cost Analysis is used most widely for project prioritisation and for selecting the final project from amongst a set of alternatives. The Benefit Cost Analysis approach is a useful tool for investment decision-making from an economic perspective. An extension of the approach, which includes social and environmental externalities, is currently used to support Triple Bottom Line decision-making in the road sector. However, several issues in its application require attention. First, there is a need to reach a degree of commonality in the treatment of social and environmental externalities, which may be achieved by aggregating best practices. The level of detail with which externalities are considered should differ across decision-making levels. It is intended to develop a generic framework to coordinate the range of existing practices; a standard framework would also help reduce the double counting that appears in some current practices. Caution should also be exercised in choosing the methods used to value social and environmental externalities. A number of methods, such as market price, resource costs and Willingness to Pay, were found in the review. The use of unreasonable monetisation methods in some cases has discredited Benefit Cost Analysis in the eyes of decision-makers and the public. Some social externalities, such as employment and regional economic impacts, are generally omitted in current practices owing to the lack of information and credible models; it may be appropriate to consider these externalities in qualitative form within a Multiple Criteria Analysis. Consensus has been reached internationally on considering noise and air pollution; however, Australian practices have generally omitted these externalities. Equity is an important consideration in Road Asset Management, whether between regions or between social groups defined by income, age, gender, disability and so on. In current practice there is no well-developed quantitative measure for equity issues, and more research is needed on this topic. Although Multiple Criteria Analysis has been used for decades, there is no generally accepted framework for choosing the modelling methods and the externalities to include; as a result, different analysts are unlikely to reach consistent conclusions about a policy measure. In current practice, some favour methods that are able to prioritise alternatives, such as Goal Programming, the Goal Achievement Matrix and the Analytic Hierarchy Process. Others simply present the various impacts to decision-makers to characterise the projects. Weighting and scoring systems are critical in most Multiple Criteria Analyses; however, the processes of assessing weights and scores have been criticised as highly arbitrary and subjective, so it is essential that the process be as transparent as possible. Obtaining weights and scores by consulting local communities is a common practice, but is likely to result in bias towards local interests. Interactive approaches have the advantage of helping decision-makers elaborate their preferences; however, the computational burden may cause decision-makers to lose interest during the solution process for a large-scale problem, such as a large state road network. Current practices tend to use cardinal or ordinal scales to measure non-monetised externalities. Distorted valuations can occur where variables measured in physical units are converted to scales. For example, if decibels of noise are converted to a scale of -4 to +4 with a linear transformation, the difference between 3 and 4 represents a far greater increase in discomfort to people than the increase from 0 to 1. It is therefore suggested that different weights be assigned to individual scores. Due to overlapping goals, the problem of double counting also appears in some Multiple Criteria Analyses; the situation can be improved by carefully selecting and defining investment goals and criteria. Other issues, such as the treatment of time effects and the incorporation of risk and uncertainty, have been given scant attention in current practices. This report suggests establishing a common analytic framework to deal with these issues.
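As a rough illustration of the scale-conversion and weighting issues noted above, the following Python sketch maps a physical noise level onto a -4 to +4 scale with a linear transformation and combines it with other criterion scores in a simple additive weighting. The decibel range, scores and weights are hypothetical and are not taken from the report.

    # Illustrative sketch (not from the report): converting a physical measure
    # (noise in decibels) to a bounded MCA score and combining criteria with weights.

    def linear_scale(value, lo, hi, scale_min=-4.0, scale_max=4.0):
        """Map a physical value in [lo, hi] linearly onto [scale_min, scale_max]."""
        ratio = (value - lo) / (hi - lo)
        return scale_min + ratio * (scale_max - scale_min)

    def weighted_mca_score(scores, weights):
        """Simple additive Multiple Criteria Analysis: sum of weight * score."""
        return sum(w * s for w, s in zip(weights, scores))

    # Hypothetical project: noise of 72 dB mapped from a 40-90 dB range,
    # plus already-scored criteria for travel time and air quality.
    noise_score = linear_scale(72.0, lo=40.0, hi=90.0)
    scores = [noise_score, 2.5, -1.0]          # noise, travel time, air quality
    weights = [0.4, 0.4, 0.2]                  # assumed to sum to 1 in this sketch

    print(round(weighted_mca_score(scores, weights), 3))

The linear mapping treats every decibel step as equally important, which is exactly the distortion the review warns about; assigning different weights to individual score bands is one way to compensate.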

Relevance: 10.00%

Abstract:

In this research, the reliability and availability of a fiberboard pressing plant is assessed and a cost-based optimization of the system using the Monte Carlo simulation method is performed. The woodchip and pulp or engineered wood industry, in Australia and around the world, is a lucrative one; hardboard is one such product. The pressing system is the main system, as it converts the wet pulp to fiberboard. The assessment identified that the pressing system has the highest downtime in the plant and represents the bottleneck in the process. A survey in the late nineties revealed that there are over one thousand such plants around the world, with the pressing system being common among them. No work has been done to assess or estimate the reliability of such a pressing system; therefore this assessment can be used for assessing any plant of this type.
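A minimal Python sketch of the kind of Monte Carlo availability estimate described above is given below; it simulates random failure and repair times for a single repairable system. The failure and repair parameters and the time horizon are hypothetical and are not taken from the study.

    # Minimal Monte Carlo sketch of availability estimation for a single
    # repairable system (e.g. a pressing line). Parameter values are hypothetical.
    import random

    def simulate_availability(mtbf_h, mttr_h, horizon_h, runs=2000, seed=1):
        random.seed(seed)
        uptime_fraction = 0.0
        for _ in range(runs):
            t, up = 0.0, 0.0
            while t < horizon_h:
                ttf = random.expovariate(1.0 / mtbf_h)   # time to failure
                ttr = random.expovariate(1.0 / mttr_h)   # time to repair
                up += min(ttf, horizon_h - t)
                t += ttf + ttr
            uptime_fraction += up / horizon_h
        return uptime_fraction / runs

    availability = simulate_availability(mtbf_h=120.0, mttr_h=8.0, horizon_h=8760.0)
    print(f"Estimated availability: {availability:.3f}")

Downtime cost can then be attached to the unavailable fraction of the horizon, which is the quantity a cost-based optimization would trade off against maintenance spending.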

Relevance: 10.00%

Abstract:

The work was derived from terrestrial laser scan data of a bio-diverse landscape on the SE coast of Western Australia. The scanning was conducted both before and after a significant bushfire event. The digital three-dimensional scan data has been merged and then abstracted into a two-dimensional vertical section or slice which reveals the vegetal surface of the heath vegetation and the surface of the landform. This abstraction converts the complex data into spatial information so that it is meaningful in the context of the architectural and landscape architectural design process. The primary intention behind the production of the work was to expand understanding of the means of representing, and then designing for, sites in ‘kwongan’ landscapes, which are constituted by highly biodiverse, bushfire-prone heath vegetation.
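The slicing step described above can be illustrated with a short Python sketch that extracts a thin vertical band from a 3D point cloud and projects it to a 2D section. This is only an assumed, simplified stand-in for the authors' workflow, with synthetic points in place of the laser scan data; the slice position and thickness are hypothetical.

    # Hedged sketch: extracting a thin vertical "slice" from a 3D point cloud
    # so it can be drawn as a 2D section (x along the section, z as height).
    import numpy as np

    def vertical_slice(points_xyz, y_centre, thickness=0.1):
        """Return the (x, z) coordinates of points within a thin band in y."""
        y = points_xyz[:, 1]
        mask = np.abs(y - y_centre) <= thickness / 2.0
        return points_xyz[mask][:, [0, 2]]

    # Synthetic points standing in for terrestrial laser scan data (metres).
    cloud = np.random.rand(10000, 3) * [50.0, 50.0, 3.0]
    section = vertical_slice(cloud, y_centre=25.0, thickness=0.2)
    print(section.shape)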

Relevance: 10.00%

Abstract:

The work was derived from terrestrial laser scan data of a bio-diverse landscape on the SE coast of Western Australia. The three-dimensional scan data has been abstracted into two-dimensional side sections or slices, in a manner which converts the complex data into spatial information that is meaningful in the context of the act of architectural and landscape architectural design. The primary intention behind the production of the work was to expand understanding of the means of representation of ‘kwongan’ landscapes, which are constituted by highly biodiverse, and thus difficult to measure, heath vegetation.

Relevance: 10.00%

Abstract:

The work was derived from terrestrial laser scan data of a bio-diverse landscape on the SE coast of Western Australia. The digital three-dimensional scan data has been abstracted into two-dimensional horizontal sections or slices. This abstraction converts the complex data into spatial information that is meaningful in the context of the act of architectural and landscape architectural design. The primary intention behind the production of the work was to expand understanding of the means of representing, and then designing for, sites in 'kwongan' landscapes, which are constituted by highly biodiverse, and thus difficult to measure, heath vegetation. From Heathprint the author generated contour intervals of the landform, upon which an associate (Daniela Simon, architect) designed a work of architecture that was subsequently awarded a commendation for residential architecture in the WA Chapter of the Australian Institute of Architects awards, 2007.

Relevance: 10.00%

Abstract:

Increasing global competitiveness has forced manufacturing organizations to produce high-quality products more quickly and at a competitive cost. In order to reach these goals, they need good-quality components from suppliers at optimum price and lead time. This has forced companies to adopt improvement practices such as lean manufacturing, Just in Time (JIT) and effective supply chain management. Applying new improvement techniques and tools causes higher establishment costs and more Information Delay (ID). On the other hand, these new techniques may reduce the risk of stock-outs and improve supply chain flexibility, giving better overall performance. However, industry practitioners are unable to measure the overall effects of these improvement techniques with a standard evaluation model, so an effective overall supply chain performance evaluation model is essential for suppliers as well as manufacturers to assess their companies under different supply chain strategies. Literature on lean supply chain performance evaluation is comparatively limited, and most existing models assume random values for performance variables. The purpose of this paper is to propose an effective supply chain performance evaluation model using triangular linguistic fuzzy numbers and to recommend optimum ranges for performance variables for lean implementation. The model initially considers all the supply chain performance criteria (input, output and flexibility), converts the values to triangular linguistic fuzzy numbers and evaluates overall supply chain performance under different situations. Results show that, with the proposed performance measurement model, the improvement area for each variable can be accurately identified.
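The fuzzy evaluation step can be illustrated with the Python sketch below, which maps linguistic ratings onto triangular fuzzy numbers, aggregates them with weights and defuzzifies the result by its centroid. The linguistic scale, ratings and weights are assumptions for illustration, not the paper's actual values.

    # Illustrative sketch: linguistic ratings mapped to triangular fuzzy numbers
    # (l, m, u), aggregated with weights, then defuzzified by the centroid.

    LINGUISTIC = {
        "very low":  (0.0, 0.0, 0.25),
        "low":       (0.0, 0.25, 0.5),
        "medium":    (0.25, 0.5, 0.75),
        "high":      (0.5, 0.75, 1.0),
        "very high": (0.75, 1.0, 1.0),
    }

    def weighted_fuzzy_sum(ratings, weights):
        """Weighted sum of triangular fuzzy numbers, component-wise."""
        l = sum(w * LINGUISTIC[r][0] for r, w in zip(ratings, weights))
        m = sum(w * LINGUISTIC[r][1] for r, w in zip(ratings, weights))
        u = sum(w * LINGUISTIC[r][2] for r, w in zip(ratings, weights))
        return (l, m, u)

    def defuzzify(tfn):
        """Centroid of a triangular fuzzy number."""
        return sum(tfn) / 3.0

    # Hypothetical input, output and flexibility ratings for one supplier.
    ratings = ["high", "medium", "very high"]
    weights = [0.5, 0.3, 0.2]
    print(round(defuzzify(weighted_fuzzy_sum(ratings, weights)), 3))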

Relevance: 10.00%

Abstract:

The Monte Carlo DICOM Tool-Kit (MCDTK) is a software suite designed for treatment plan dose verification, using the BEAMnrc and DOSXYZnrc Monte Carlo codes. MCDTK converts DICOM-format treatment plan information into Monte Carlo input files and compares the results of Monte Carlo treatment simulations with conventional treatment planning dose calculations. In this study, a treatment is planned using a commercial treatment planning system, delivered to a pelvis phantom containing ten thermoluminescent dosimeters and simulated using BEAMnrc and DOSXYZnrc with inputs derived from MCDTK. The dosimetric accuracy of the Monte Carlo data is then evaluated via comparisons with the dose distribution obtained from the treatment planning system as well as the in-phantom point dose measurements. The simulated beam arrangement produced by MCDTK is found to be in geometric agreement with the planned treatment. An isodose display generated from the Monte Carlo data by MCDTK shows general agreement with the isodose display obtained from the treatment planning system, except for small regions around density heterogeneities in the phantom, where the pencil-beam dose calculation performed by the treatment planning system is likely to be less accurate. All point dose measurements agree with the Monte Carlo data obtained using MCDTK, within confidence limits, and all except one of these point dose measurements show closer agreement with the Monte Carlo data than with the doses calculated by the treatment planning system. This study provides a simple demonstration of the geometric and dosimetric accuracy of Monte Carlo simulations based on information from MCDTK.
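As a hedged illustration of the kind of conversion MCDTK performs, the following Python sketch reads a few beam geometry fields from a DICOM RT Plan file using pydicom. It is not MCDTK's code, the file name is hypothetical, and only a handful of geometry fields are shown.

    # Hedged sketch (not MCDTK): extracting beam geometry from a DICOM RT Plan
    # so it could be written into BEAMnrc/DOSXYZnrc input files.
    import pydicom

    plan = pydicom.dcmread("rtplan.dcm")   # hypothetical DICOM RT Plan file

    for beam in plan.BeamSequence:
        cp0 = beam.ControlPointSequence[0]          # first control point
        print(beam.BeamName,
              "gantry:", cp0.GantryAngle,
              "collimator:", cp0.BeamLimitingDeviceAngle,
              "couch:", cp0.PatientSupportAngle,
              "isocentre:", cp0.IsocenterPosition)
        # These values would then be translated into Monte Carlo source and
        # phantom geometry parameters.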

Relevance: 10.00%

Abstract:

Kaolinite:NaCl intercalates with basal layer dimensions of 0.95 and 1.25 nm have been prepared by direct reaction of saturated aqueous NaCl solution with well-crystallized source clay KGa-1. The intercalates and their thermal decomposition products have been studied by XRD, solid-state 23Na, 27Al, and 29Si MAS NMR, and FTIR. Intercalate yield is enhanced by dry grinding of kaolinite with NaCl prior to intercalation. The layered structure survives dehydroxylation of the kaolinite at 500°–600°C and persists to above 800°C with a resultant tetrahedral aluminosilicate framework. Excess NaCl can be readily removed by rinsing with water, producing an XRD ‘amorphous’ material. Upon heating at 900°C this material converts to a well-crystallized framework aluminosilicate closely related to low-carnegieite, NaAlSiO4, some 350°C below its stability field. Reaction mechanisms are discussed and structural models proposed for each of these novel materials.

Relevance: 10.00%

Abstract:

Amorphous derivatives of kaolin group minerals characterized by high specific surfaces and/or high cation exchange capacities and a ²⁷Al MAS NMR spectrum having a dominant peak at about 55 ppm relative to Al(H₂O)₆³⁺. Such derivatives are prepared by reacting a kaolin group mineral with a reagent, such as an alkali metal halide or an ammonium halide, which converts the majority of the octahedrally coordinated aluminum in the kaolin group mineral to tetrahedrally coordinated aluminum. Such derivatives show high selectivity in their cation exchange towards the metals Pb²⁺, Cu²⁺, Cd²⁺, Ni²⁺, Co²⁺, Cr³⁺, Sr²⁺, Zn²⁺, Nd³⁺ and UO₂⁺.

Relevance: 10.00%

Abstract:

The ability of piezoelectric transducers to convert energy is being exploited in a rapidly expanding range of applications. Industrial applications of high power ultrasound transducers include surface cleaning, water treatment, plastic welding and food sterilization. High power ultrasound transducers also play an important role in biomedical applications, both diagnostic and therapeutic. An ultrasound transducer is usually applied to convert electrical energy to mechanical energy and vice versa. In some high power ultrasound systems, ultrasound transducers are applied as a transmitter, as a receiver, or both. As a transmitter, the transducer converts electrical energy to mechanical energy, while as a receiver it converts mechanical energy to electrical energy, acting as a sensor for the control system. Once a piezoelectric transducer is excited by an electrical signal, the piezoelectric material starts to vibrate and generates ultrasound waves. A portion of the ultrasound waves which passes through the medium is sensed by the receiver and converted to electrical energy.

To drive an ultrasound transducer, the excitation signal should be properly designed; otherwise a low-quality signal can deteriorate the energy conversion performance of the transducer and increase power consumption in the system. For instance, some portion of the generated power may be delivered at unwanted frequencies, which is not acceptable for some applications, especially biomedical ones. To achieve better transducer performance, the characteristics of the high power ultrasound transducer should be taken into consideration along with the quality of the excitation signal. In this regard, several simulations and experimental tests are carried out in this research to model high power ultrasound transducers and systems. During these experiments, high power ultrasound transducers are excited by several excitation signals with different amplitudes and frequencies, using a network analyser, a signal generator, a high power amplifier and a multilevel converter. To analyse the behaviour of the ultrasound system, the voltage ratio of the system is also measured in different tests: the voltage across the transmitter is measured as the input voltage and divided by the output voltage measured across the receiver. The results of the transducer characterisation and the ultrasound system behaviour are discussed in chapters 4 and 5 of this thesis.

Each piezoelectric transducer has several resonance frequencies at which its impedance has a lower magnitude than at non-resonance frequencies. Among these resonance frequencies, at just one of them the magnitude of the impedance is a minimum; this is known as the main resonance frequency of the transducer. To attain higher efficiency and deliver more power to the ultrasound system, the transducer is usually excited at the main resonance frequency. It is therefore important to identify this frequency and the other resonance frequencies, and a frequency detection method is proposed in this research and discussed in chapter 2. An extended electrical model of the ultrasound transducer with multiple resonance frequencies consists of several RLC legs in parallel with a capacitor, with each RLC leg representing one of the resonance frequencies of the transducer. At a resonance frequency, the inductor reactance and capacitor reactance of the corresponding leg cancel each other out, and the resistor of that leg represents the power conversion of the system at that frequency. This concept is shown in the simulation and test results presented in chapter 4.

To excite a high power ultrasound transducer, a high power signal is required. Multilevel converters are usually applied to generate such a signal, but its drawback is low quality in comparison with a sinusoidal signal. In some applications, such as ultrasound, it is extremely important to generate a high-quality signal. Several control and modulation techniques have been introduced in the literature to control the output voltage of multilevel converters; one of these is the harmonic elimination technique, in which the switching angles are chosen so as to reduce the harmonic content of the output. Increasing the number of switching angles results in greater harmonic reduction, but more switching angles require more output voltage levels, which increases the number of components and the cost of the converter. To improve the quality of the output voltage signal with no additional components, a new harmonic elimination technique is proposed in this research. In this technique, more variables (the DC voltage levels as well as the switching angles) are chosen so as to eliminate more low-order harmonics than conventional harmonic elimination techniques. In the conventional harmonic elimination method, the DC voltage levels are equal and only the switching angles are calculated to eliminate harmonics, so the number of eliminated harmonics is limited by the number of switching angles. In the proposed modulation technique, the switching angles and the DC voltage levels are calculated off-line to eliminate more harmonics; the DC voltage levels are therefore not equal and must be regulated, which is achieved by a DC/DC converter that adjusts the DC link voltages across several capacitors. The effect of the new harmonic elimination technique on the output quality of several single-phase multilevel converters is explained in chapters 3 and 6 of this thesis.

According to the electrical model of the high power ultrasound transducer, the device can be modelled as parallel combinations of RLC legs with a main capacitor. The impedance diagram of the transducer in the frequency domain shows that it has capacitive characteristics at almost all frequencies. Therefore, using a voltage source converter to drive a high power ultrasound transducer can create significant leakage current through the transducer, owing to the significant voltage stress (dv/dt) across it. To remedy this problem, LC filters are applied in some applications, but for applications such as ultrasound an LC filter can deteriorate the performance of the transducer by changing its characteristics and displacing its resonance frequency. In such a case, a current source converter is a suitable choice to overcome this problem. In this regard, a current source converter is implemented and applied to excite the high power ultrasound transducer; to control the output current and voltage, hysteresis control and unipolar modulation are used, respectively. The results of this test are explained in chapter 7.
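As a rough illustration of this electrical model, the Python sketch below computes the impedance of several series RLC legs in parallel with a shunt capacitor and takes the frequency of minimum impedance magnitude as the main resonance. All component values and the frequency sweep are hypothetical; this is not the thesis's detection method.

    # Sketch of the extended electrical model: several series RLC legs in
    # parallel with a shunt capacitor C0. The main resonance is taken as the
    # frequency of minimum |Z|.
    import numpy as np

    def transducer_impedance(f, c0, legs):
        """legs: list of (R, L, C) tuples, each one series RLC branch."""
        w = 2 * np.pi * f
        y = 1j * w * c0                              # admittance of shunt capacitor
        for r, l, c in legs:
            z_leg = r + 1j * w * l + 1.0 / (1j * w * c)
            y = y + 1.0 / z_leg
        return 1.0 / y

    freqs = np.linspace(20e3, 60e3, 4000)            # 20-60 kHz sweep
    legs = [(15.0, 50e-3, 0.4e-9), (40.0, 20e-3, 0.6e-9)]   # two resonance branches
    z = transducer_impedance(freqs, c0=4e-9, legs=legs)

    main_resonance = freqs[np.argmin(np.abs(z))]
    print(f"Main resonance near {main_resonance / 1e3:.1f} kHz")

At each branch's resonance the inductive and capacitive reactances cancel, leaving the branch resistance to dominate; the branch with the lowest resistance sets the global minimum of the impedance magnitude, i.e. the main resonance.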

Relevance: 10.00%

Abstract:

The continuous growth of XML data poses a great concern in the area of XML data management. The need to process large amounts of XML data complicates many applications, such as information retrieval and data integration. One way of simplifying this problem is to break the massive amount of data into smaller groups by applying clustering techniques. However, XML clustering is an intricate task that may involve processing both the structure and the content of XML data in order to identify similar XML data. This research presents four clustering methods: two utilizing only the structure of XML documents and two utilizing both the structure and the content. The two structural clustering methods have different data models, one based on a path model and the other on a tree model. These methods employ rigid similarity measures which aim to identify corresponding elements between documents with different or similar underlying structure. The two clustering methods that utilize both structural and content information vary in how the structure and content similarities are combined. One calculates document similarity using a linear weighting combination of structure and content similarities, with the content similarity based on a semantic kernel. The other calculates the distance between documents by a non-linear combination of the structure and content of XML documents using a semantic kernel. Empirical analysis shows that the structure-only clustering method based on the tree model is more scalable than the structure-only clustering method based on the path model, as the tree similarity measure does not need to visit the parents of an element many times. Experimental results also show that the clustering methods perform better with the inclusion of content information on most test document collections. To further the research, the structural clustering method based on the tree model is extended and employed in XML transformation. The results from the experiments show that the proposed transformation process is faster than the traditional transformation system, which translates and converts the source XML documents sequentially. In addition, the schema matching process of the XML transformation produces a better matching result in a shorter time.
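The linear weighting combination mentioned above can be illustrated with a short Python sketch. The Jaccard similarities used here are simple stand-ins for the thesis's path/tree and semantic-kernel measures, and the weighting factor is an assumption.

    # Illustrative sketch of a linear weighting combination:
    # similarity = alpha * structure_sim + (1 - alpha) * content_sim.

    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    def combined_similarity(doc1, doc2, alpha=0.6):
        structure_sim = jaccard(doc1["paths"], doc2["paths"])   # element paths
        content_sim = jaccard(doc1["terms"], doc2["terms"])     # content terms
        return alpha * structure_sim + (1 - alpha) * content_sim

    d1 = {"paths": {"/book/title", "/book/author"}, "terms": {"xml", "clustering"}}
    d2 = {"paths": {"/book/title", "/book/year"},   "terms": {"xml", "retrieval"}}
    print(round(combined_similarity(d1, d2), 3))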

Relevance: 10.00%

Abstract:

Enterprise social networks are organizationally bounded online platforms for users to interact with one another and maintain interpersonal relationships. The allure of these technologies is often seen in intra-organizational communication, collaboration and innovation, yet how they actually support organizational innovation efforts remains unclear. A specific challenge is whether digital content on these platforms converts to actual innovation development efforts. In this study we set out to examine innovation-centric content flows on enterprise social networking platforms, and advance a conceptual model that seeks to explain which innovations conveyed in the digital content will traverse from the digital platform into regular processes. We describe the important constructs of our model and offer strategies for their operationalization. We conclude with an outlook on our ongoing empirical study, which will explore and validate the key propositions of the model, and we sketch some potential implications for industry and academia.

Relevance: 10.00%

Abstract:

Robust facial expression recognition (FER) under occluded face conditions is challenging. It requires robust feature extraction algorithms and investigation of the effects of different types of occlusion on recognition performance to gain insight. Previous FER studies in this area have been limited, spanning recovery strategies for the loss of local texture information and testing restricted to only a few types of occlusion, predominantly with a matched train-test strategy. This paper proposes a robust approach that employs a Monte Carlo algorithm to extract a set of Gabor-based part-face templates from gallery images and converts these templates into template match distance features. The resulting feature vectors are robust to occlusion because occluded parts are covered by some, but not all, of the random templates. The method is evaluated using facial images with occluded regions around the eyes and the mouth, randomly placed occlusion patches of different sizes, and near-realistic occlusion of the eyes with clear and solid glasses. Both matched and mis-matched train and test strategies are adopted to analyze the effects of such occlusion. Overall recognition performance and the performance for each facial expression are investigated. Experimental results on the Cohn-Kanade and JAFFE databases demonstrate the high robustness and fast processing speed of our approach, and provide useful insight into the effects of occlusion on FER. The parameter sensitivity results demonstrate a certain level of robustness to changes in the orientation and scale of the Gabor filters, the size of the templates, and the occlusion ratios. Performance comparisons with previous approaches show that the proposed method is more robust to occlusion, with lower reductions in accuracy from occlusion of the eyes or mouth.
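A hedged Python sketch of the template-match-distance idea is given below: random part-face patches are cropped from a gallery image and, for a probe image, the minimum matching distance of each patch becomes one feature. The Gabor filtering and the Monte Carlo template selection used in the paper are omitted, and all sizes and counts are assumptions.

    # Hedged sketch of template match distance features (no Gabor filtering).
    import numpy as np

    rng = np.random.default_rng(0)

    def random_templates(gallery_img, n_templates=20, size=12):
        """Crop n_templates random square patches from a gallery image."""
        h, w = gallery_img.shape
        templates = []
        for _ in range(n_templates):
            y = rng.integers(0, h - size)
            x = rng.integers(0, w - size)
            templates.append(gallery_img[y:y + size, x:x + size].copy())
        return templates

    def match_distance_features(probe_img, templates, stride=4):
        """For each template, the best (minimum) match distance over the probe."""
        h, w = probe_img.shape
        feats = []
        for t in templates:
            s = t.shape[0]
            best = np.inf
            for y in range(0, h - s + 1, stride):
                for x in range(0, w - s + 1, stride):
                    d = np.mean((probe_img[y:y + s, x:x + s] - t) ** 2)
                    best = min(best, d)
            feats.append(best)   # occluded regions simply yield larger distances
        return np.array(feats)

    gallery = rng.random((64, 64))   # synthetic stand-ins for face images
    probe = rng.random((64, 64))
    features = match_distance_features(probe, random_templates(gallery))
    print(features.shape)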

Relevance: 10.00%

Abstract:

Bit-stream-based control, which uses one-bit-wide signals to control power electronics applications, is a new approach to controller design in power electronic systems. This study presents a novel family of three-phase space vector modulators which are based on the bit-stream technique and suitable for three-phase inverter systems. Each of the proposed modulators simultaneously converts a two-phase reference to the three-phase domain and reduces switching frequencies to reasonable levels. The modulators do not require carrier oscillators, trigonometric functions or, in some cases, sector detectors. A complete three-phase modulator can be implemented in as few as 102 logic elements. The performance of the proposed modulators is compared through simulation and experimental testing of a 6 kW, three-phase DC-to-AC inverter. Subject to limits on the modulation index, the proposed modulators deliver spread-spectrum output currents with total harmonic distortion comparable to that of a standard carrier-based space vector pulse width modulator.
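The two-phase-to-three-phase conversion mentioned above can be illustrated with the standard inverse Clarke transform, sketched in Python below. The bit-stream modulators in the paper operate on one-bit signals in logic, so this floating-point version only shows the underlying mapping; the reference magnitude and angle are hypothetical.

    # Sketch of the two-phase (alpha-beta) to three-phase (a, b, c) mapping
    # via the standard inverse Clarke transform.
    import math

    def inverse_clarke(v_alpha, v_beta):
        """Convert an alpha-beta reference into three phase references a, b, c."""
        v_a = v_alpha
        v_b = -0.5 * v_alpha + (math.sqrt(3) / 2.0) * v_beta
        v_c = -0.5 * v_alpha - (math.sqrt(3) / 2.0) * v_beta
        return v_a, v_b, v_c

    # One point of a rotating reference vector (hypothetical magnitude and angle).
    theta = math.radians(30.0)
    print(inverse_clarke(math.cos(theta), math.sin(theta)))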

Relevance: 10.00%

Abstract:

Standard signature schemes are usually designed only to achieve weak unforgeability, i.e. preventing forgery of signatures on new messages not previously signed. However, most signature schemes are randomised and allow many possible signatures for a single message. In this case, it may be possible to produce a new signature on a previously signed message. Some applications require that this type of forgery also be prevented; this requirement is called strong unforgeability. At PKC 2006, Boneh, Shen and Waters presented an efficient transform based on any randomised trapdoor hash function which converts a weakly unforgeable signature into a strongly unforgeable signature, and applied it to construct a strongly unforgeable signature based on the CDH problem. However, the transform of Boneh et al. applies only to a class of so-called partitioned signatures. Although many schemes fall into this class, some, such as the DSA signature, do not. Hence it is natural to ask whether one can obtain a truly generic and efficient transform, based on any randomised trapdoor hash function, which converts any weakly unforgeable signature into a strongly unforgeable one. We answer this question in the affirmative by presenting a simple modification of the Boneh-Shen-Waters transform. Our modified transform uses two randomised trapdoor hash functions.
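The gap between weak and strong unforgeability can be illustrated with a deliberately flawed toy scheme, sketched in Python below: the signature carries randomness that verification ignores, so anyone can produce a new valid signature on an already-signed message. This toy is only for illustration; it is not the Boneh-Shen-Waters transform or the proposed modification, which instead bind the signature randomness through (chameleon) trapdoor hash functions so that mauling a signature would require finding a collision.

    # Toy illustration of a randomised "signature" that is not strongly
    # unforgeable: the randomness r is not bound to the tag, so it can be
    # replaced freely to mint a new signature on the same message.
    import hmac, hashlib, os

    KEY = os.urandom(32)

    def sign(message: bytes) -> tuple:
        r = os.urandom(16)                              # randomness, ignored below
        tag = hmac.new(KEY, message, hashlib.sha256).digest()
        return (r, tag)

    def verify(message: bytes, sig: tuple) -> bool:
        _r, tag = sig                                   # r is ignored: the flaw
        return hmac.compare_digest(
            tag, hmac.new(KEY, message, hashlib.sha256).digest())

    msg = b"transfer 10"
    r, tag = sign(msg)
    forged = (os.urandom(16), tag)                      # new signature, same message
    print(verify(msg, (r, tag)), verify(msg, forged))   # True True: strong unforgeability fails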