964 results for Application method
Abstract:
Entity-oriented retrieval aims to return a list of relevant entities, rather than documents, to provide exact answers to user queries. The nature of entity-oriented retrieval requires identifying the semantic intent of user queries, i.e., understanding the semantic role of query terms and determining the semantic categories that indicate the class of target entities. Existing methods cannot exploit this semantic intent because they do not capture the semantic relationship between terms in a query and in a document containing entity-related information. To improve the understanding of the semantic intent of user queries, we propose a concept-based retrieval method that not only automatically identifies the semantic intent of user queries, i.e., the Intent Type and Intent Modifier, but also introduces concepts represented by Wikipedia articles to user queries. We evaluate the proposed method on entity profile documents annotated with concepts from the Wikipedia category and list structures. Empirical analysis reveals that the proposed method outperforms several state-of-the-art approaches.
Abstract:
This research proposes a method for identifying user expertise in contemporary Information Systems (IS). It also proposes and develops a model for evaluating that expertise. The aim of the study was to offer a common instrument that addresses the requirements of a contemporary Information System in a holistic way. The study demonstrates the application of the expertise construct in Information System evaluations and shows that users of different expertise levels evaluate systems differently.
Abstract:
The measurement of losses in high-efficiency, high-power converters is difficult. Measuring the losses directly from the difference between the input and output power results in large errors. Calorimetric methods are usually used to bypass this issue but introduce problems of their own, such as long measurement times, a limited power-loss measurement range and/or a large set-up cost. In this paper the total losses of a converter are measured directly and the switching losses are extracted. The measurements can be taken with only three multimeters, a current probe and a standard bench power supply. After acquiring two or three power-loss versus output-current sweeps, a series of curve-fitting processes is applied and the switching losses are extracted.
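A minimal curve-fitting sketch of this kind of extraction, assuming total-loss sweeps taken at two switching frequencies and a simple fixed + conduction + switching loss model; the frequencies, loss model and data below are illustrative placeholders, not the paper's measurements.

```python
# Illustrative sketch (not the authors' exact procedure): separate switching
# losses from conduction losses by curve-fitting total-loss sweeps taken at
# two assumed switching frequencies f1 and f2.
import numpy as np
from scipy.optimize import curve_fit

def total_loss(i_out, p_fixed, r_eq, e_sw, f_sw):
    # Simple loss model: fixed losses + I^2 conduction losses + per-cycle switching energy
    return p_fixed + r_eq * i_out**2 + f_sw * e_sw * i_out

f1, f2 = 20e3, 40e3                      # assumed switching frequencies (Hz)
i_out = np.linspace(1.0, 10.0, 10)       # output-current sweep (A)
# In practice p_meas_f1 / p_meas_f2 come from the multimeter measurements
p_meas_f1 = total_loss(i_out, 2.0, 0.05, 40e-6, f1)  # placeholder data
p_meas_f2 = total_loss(i_out, 2.0, 0.05, 40e-6, f2)

# Fit each sweep with its switching frequency held fixed
popt1, _ = curve_fit(lambda i, p0, r, e: total_loss(i, p0, r, e, f1), i_out, p_meas_f1)
popt2, _ = curve_fit(lambda i, p0, r, e: total_loss(i, p0, r, e, f2), i_out, p_meas_f2)

# The frequency-dependent (switching) part is the term that scales with f_sw
p_sw_f1 = f1 * popt1[2] * i_out
print("Estimated switching loss across the sweep at f1:", np.round(p_sw_f1, 2))
```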
Abstract:
The key to reducing the cost of electric vehicles is integration. All too often, systems such as the motor, motor controller, batteries and vehicle chassis/body are treated as separate problems. In reality, many trade-offs can be made between these systems, yielding an overall improvement in many areas, including total cost. Motor controller and battery cost have a relatively simple relationship: the less energy lost in the motor controller, the less energy has to be carried in the batteries and hence the lower the battery cost. A motor controller’s cost is primarily influenced by the cost of the switches. This paper will therefore present a method of assessing the optimal switch selection on the premise that the optimal switch is the one that produces the lowest system cost, where system cost is the cost of batteries + switches.
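A toy sketch of the stated premise (system cost = battery cost + switch cost); the candidate switches, prices and efficiencies below are made-up placeholders, not data from the paper.

```python
# Pick the switch that minimises battery cost + switch cost (illustrative numbers only).
BATTERY_COST_PER_KWH = 150.0     # assumed battery price ($/kWh)
DRIVE_ENERGY_KWH = 20.0          # assumed energy delivered to the motor per charge (kWh)

# Hypothetical candidates: (name, switch cost per controller $, controller efficiency)
candidates = [
    ("cheap IGBT",   80.0, 0.955),
    ("fast MOSFET", 160.0, 0.975),
    ("SiC MOSFET",  320.0, 0.985),
]

def system_cost(switch_cost, efficiency):
    # The battery must supply the delivered energy plus what the controller wastes
    battery_kwh = DRIVE_ENERGY_KWH / efficiency
    return switch_cost + battery_kwh * BATTERY_COST_PER_KWH

for name, cost, eff in candidates:
    print(f"{name:12s} -> system cost ${system_cost(cost, eff):.0f}")

best = min(candidates, key=lambda c: system_cost(c[1], c[2]))
print("Lowest system cost:", best[0])
```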
Abstract:
This study aimed to identify new peptide antigens from Chlamydia (C.) trachomatis in a proof-of-concept approach that could be used to develop an epitope-based serological diagnostic for C. trachomatis-related infertility in women. A bioinformatics analysis was conducted on several immunodominant proteins from C. trachomatis to identify predicted immunoglobulin epitopes unique to C. trachomatis. A peptide array of these epitopes was screened against participant sera. The participants (all female) were categorized into the following cohorts based on their infection and gynecological history: acute (a single treated infection with C. trachomatis), multiple (more than one C. trachomatis infection, all treated), sequelae (PID or tubal infertility with a history of C. trachomatis infection), and infertile (no history of C. trachomatis infection and no detected tubal damage). The bioinformatics strategy identified several promising epitopes. Participants who reacted positively in the peptide 11 ELISA had an increased likelihood of being in the sequelae cohort compared with the infertile cohort, with an odds ratio of 16.3 (95% CI 1.65–160), 95% specificity and 46% sensitivity (0.19–0.74). The peptide 11 ELISA has the potential to be further developed as a screening tool for use during the early IVF work-up, and provides proof of concept that further peptide antigens could be identified using bioinformatics and screening approaches.
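For illustration, statistics of this form follow from a 2x2 table of ELISA result versus cohort; the counts below are chosen so the arithmetic reproduces the quoted figures and are not the study's actual data.

```python
# Odds ratio, sensitivity and specificity from an illustrative 2x2 table
# (counts are placeholders chosen to match the reported summary values).
import math

# rows: peptide 11 ELISA positive / negative; columns: sequelae / infertile cohort
a, b = 6, 1     # ELISA positive: sequelae, infertile
c, d = 7, 19    # ELISA negative: sequelae, infertile

odds_ratio = (a * d) / (b * c)
sensitivity = a / (a + c)          # positives among sequelae participants
specificity = d / (b + d)          # negatives among infertile participants

# Approximate 95% CI for the odds ratio on the log scale
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.1f} (95% CI {ci_low:.2f}-{ci_high:.0f})")
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```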
Abstract:
Railway bridges deteriorate over time due to different critical factors, including flood, wind, earthquake, collision, and environmental factors such as corrosion, wear and termite attack. In current practice, the contributions of these critical factors to the deterioration of railway bridges, which indicate their criticality, are not appropriately taken into account. In this paper, a new method for quantifying the criticality of these factors will be introduced. The available knowledge, as well as the risk analyses conducted in different Australian standards developed for bridge design, will be adopted. The analytic hierarchy process (AHP) is utilized for prioritising the factors. The method is used in the synthetic rating of railway bridges developed by the authors of this paper. Enhancing the reliability of predicting the vulnerability of railway bridges to these critical factors will be the significant achievement of this research.
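A brief sketch of how AHP derives priority weights from a pairwise comparison matrix via the principal eigenvector; the factor list follows the abstract, but the comparison values are illustrative, not the authors' elicited judgements.

```python
# AHP priority weights from a Saaty-style reciprocal comparison matrix
# (matrix entries are placeholders for illustration only).
import numpy as np

factors = ["flood", "wind", "earthquake", "collision", "corrosion"]

A = np.array([
    [1,   3,   5,   4,   2  ],
    [1/3, 1,   3,   2,   1/2],
    [1/5, 1/3, 1,   1/2, 1/4],
    [1/4, 1/2, 2,   1,   1/3],
    [1/2, 2,   4,   3,   1  ],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()                      # normalised priority vector

# Consistency check: CI = (lambda_max - n) / (n - 1), RI = 1.12 for n = 5
ci = (eigvals.real[principal] - len(A)) / (len(A) - 1)
print(dict(zip(factors, weights.round(3))), "CR =", round(ci / 1.12, 3))
```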
Abstract:
A rapid electrochemical method based on using a clean hydrogen-bubble template to form a bimetallic porous honeycomb Cu/Pd structure has been investigated. The addition of a palladium salt to a copper-plating bath under conditions of vigorous hydrogen evolution was found to influence the pore size and the bulk concentrations of copper and palladium in the honeycomb bimetallic structure. The surface was characterised by X-ray photoelectron spectroscopy, which revealed that the surface of the honeycomb Cu/Pd is rich in a Cu/Pd alloy. The inclusion of palladium in the bimetallic structure not only influenced the pore size, but also modified the dendritic nature of the internal wall structure of the parent copper material into small nanometre-sized crystallites. The chemical composition of the bimetallic structure and the substantial morphology changes were found to significantly influence the surface-enhanced Raman spectroscopic response for immobilised rhodamine B and the hydrogen-evolution reaction. The ability to create free-standing films of this honeycomb material may also offer many advantages in gas- and liquid-phase heterogeneous catalysis.
Abstract:
Stress corrosion cracking (SCC) is a well-known form of environmental attack in low-carat gold jewellery. It is desirable to have a quick, easy and cost-effective way to detect SCC in alloys and prevent them from being used and later failing in their application. A facile chemical method to investigate SCC of 9 carat gold alloys is demonstrated. It involves the simple application of tensile stress to a wire sample in a corrosive environment, such as 1–10% FeCl3, which induces failure in less than 5 minutes. In this study, three quaternary (Au, Ag, Cu and Zn) 9 carat gold alloy compositions were investigated for their resistance to SCC, and the relationship between time to failure and processing conditions was studied. It is envisaged that the use of such a rapid and facile screening procedure at the production stage may readily identify alloy treatments that produce jewellery susceptible to SCC in its lifetime.
Abstract:
The continuous growth of XML data poses a great concern in the area of XML data management. The need to process large amounts of XML data brings complications to many applications, such as information retrieval, data integration and many others. One way of simplifying this problem is to break the massive amount of data into smaller groups by applying clustering techniques. However, XML clustering is an intricate task that may involve processing both the structure and the content of XML data in order to identify similar XML data. This research presents four clustering methods: two utilizing the structure of XML documents, and two utilizing both the structure and the content. The two structural clustering methods use different data models: one is based on a path model and the other on a tree model. These methods employ rigid similarity measures that aim to identify corresponding elements between documents with different or similar underlying structure. The two clustering methods that utilize both structural and content information differ in how the structure and content similarities are combined. One clustering method calculates the document similarity using a linear weighted combination of structure and content similarities, with the content similarity based on a semantic kernel. The other method calculates the distance between documents by a non-linear combination of the structure and content of XML documents using a semantic kernel. Empirical analysis shows that the structure-only clustering method based on the tree model is more scalable than the structure-only clustering method based on the path model, as the tree similarity measure does not need to visit the parents of an element many times. Experimental results also show that the clustering methods perform better with the inclusion of content information on most test document collections. To further the research, the structural clustering method based on the tree model is extended and employed in XML transformation. The results from the experiments show that the proposed transformation process is faster than a traditional transformation system that translates and converts the source XML documents sequentially. Also, the schema matching process of XML transformation produces a better matching result in a shorter time.
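A minimal sketch of the linear weighting combination strategy mentioned above; the simple path-overlap structure score and token-cosine content score used here are stand-ins for the thesis's actual similarity measures and semantic kernel.

```python
# Combined XML document similarity: alpha * structure + (1 - alpha) * content.
# The structure score is a Jaccard overlap of tag paths and the content score
# a token-count cosine; both are illustrative substitutes, not the thesis measures.
from collections import Counter
import math

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(a, b):
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def combined_similarity(doc1, doc2, alpha=0.5):
    struct = jaccard(doc1["paths"], doc2["paths"])
    content = cosine(doc1["tokens"], doc2["tokens"])
    return alpha * struct + (1 - alpha) * content

d1 = {"paths": ["book/title", "book/author"], "tokens": ["xml", "clustering", "survey"]}
d2 = {"paths": ["book/title", "book/year"],   "tokens": ["xml", "clustering", "methods"]}
print(round(combined_similarity(d1, d2, alpha=0.6), 3))
```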
Abstract:
A method for producing particles having at least regions of at least one metal oxide having nano-sized grains comprises providing particles of material having an initial, non-equiaxed particle shape, making a mixture of the particles of material and one or more precursors of the metal oxide, and treating the mixture such that the one or more precursors of the metal oxide react with the particles of material to thereby form at least regions of metal oxide on or within the particles, wherein atoms from the particles of material form part of a matrix of the at least one metal oxide and the at least one metal oxide has nano-sized grains and wherein at least some of the regions of metal oxide on or within the particles have a non-equiaxed grain shape.
Abstract:
A method for forming a material comprising a metal oxide supported on a support particle, comprising the steps of: (a) providing a precursor mixture comprising a solution containing one or more metal cations and (i) a surfactant, or (ii) a hydrophilic polymer, said precursor mixture further including support particles; and (b) treating the precursor mixture from (a) above by heating to remove the surfactant or hydrophilic polymer and form metal oxide having nano-sized grains, wherein at least some of the metal oxide formed in step (b) is deposited on or supported by the support particles and the metal oxide has an oxide matrix that includes metal atoms derived solely from sources other than the support particles. The disclosure and examples pertain to emission control catalysts.
Abstract:
Purpose: Videokeratoscopy images can be used for the non-invasive assessment of the tear film. In this work, the applicability of an image processing technique, textural analysis, for the assessment of the tear film in Placido disc images has been investigated. Methods: In the presence of tear film thinning/break-up, the pattern reflected from the videokeratoscope is disturbed in the region of tear film disruption. Thus, the Placido pattern carries information about the stability of the underlying tear film, and by characterizing the pattern regularity, the tear film quality can be inferred. In this paper, a textural-features approach is used to process the Placido images. This method provides a set of texture features from which an estimate of the tear film quality can be obtained. The method is tested for the detection of dry eye in a retrospective dataset from 34 subjects (22 normal and 12 dry eye), with measurements taken under suppressed blinking conditions. Results: To assess the capability of each texture feature to discriminate dry eye from normal subjects, the receiver operating characteristic (ROC) curve was calculated and the area under the curve (AUC), specificity and sensitivity were extracted. For the different features examined, the AUC value ranged from 0.77 to 0.82, while the sensitivity typically showed values above 0.9 and the specificity showed values around 0.6. Overall, the estimated ROCs indicate that the proposed technique provides good discrimination performance. Conclusions: Texture analysis of videokeratoscopy images is applicable to the study of tear film anomalies in dry eye subjects. The proposed technique appears to have demonstrated its clinical relevance and utility.
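A hedged sketch of the general pipeline only (one texture statistic per image, then ROC analysis); the texture measure, images and labels below are synthetic placeholders, not the paper's feature set or clinical data.

```python
# Texture statistic per Placido image followed by ROC analysis (illustrative only).
import numpy as np
from sklearn.metrics import roc_auc_score

def contrast_energy(img):
    # Mean squared difference between horizontal neighbours: a regular ring
    # pattern gives one characteristic value, a disrupted pattern another.
    diff = np.diff(img.astype(float), axis=1)
    return float(np.mean(diff ** 2))

rng = np.random.default_rng(0)
ring = np.tile([0, 255], (64, 32))                                   # idealised Placido stripes
normal = [ring + rng.normal(0, 5, ring.shape) for _ in range(22)]    # lightly perturbed
dry    = [ring + rng.normal(0, 60, ring.shape) for _ in range(12)]   # heavily disrupted

scores = [contrast_energy(img) for img in normal + dry]
labels = [0] * len(normal) + [1] * len(dry)                          # 1 = dry eye

print("AUC:", round(roc_auc_score(labels, scores), 2))
```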
Abstract:
This paper proposes a new iterative method for achieving an optimally fitting plate for preoperative planning purposes. The proposed method involves the integration of four commercially available software tools, Matlab, Rapidform2006, SolidWorks and ANSYS, each performing specific tasks to obtain a plate shape that fits an individual tibia optimally and is mechanically safe. A typical challenge when crossing multiple platforms is ensuring correct data transfer. We present an example implementation of the proposed method to demonstrate successful data transfer between the four platforms and the feasibility of the method.
Abstract:
Trees are capable of representing the semi-structured data that is common in the web domain. Finding similarities between trees is essential for several applications that deal with semi-structured data. Existing similarity methods examine a pair of trees by comparing their nodes and paths to find the similarity between them. However, these methods provide unfavorable results for unordered tree data and lead to NP-hard or MAX-SNP hard complexity. In this paper, we present a novel method that first encodes a tree using an optimal traversal approach and then models the tree with an equivalent matrix representation to find the similarity between unordered trees efficiently. Empirical analysis shows that the proposed method is able to achieve high accuracy even on large data sets.
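The paper's optimal-traversal encoding is not reproduced here; as a hedged illustration of a matrix-style, order-independent tree representation, the sketch below counts parent-child label pairs and compares two trees with a cosine score, so sibling order does not affect the result.

```python
# Illustrative unordered-tree similarity via a parent-child label co-occurrence
# representation (a stand-in for the paper's matrix encoding, not the same method).
from collections import Counter
import math

def label_pair_counts(tree, parent_label=None):
    """tree = (label, [children]); counts (parent label, child label) pairs."""
    label, children = tree
    counts = Counter()
    if parent_label is not None:
        counts[(parent_label, label)] += 1
    for child in children:
        counts.update(label_pair_counts(child, label))
    return counts

def similarity(t1, t2):
    c1, c2 = label_pair_counts(t1), label_pair_counts(t2)
    dot = sum(c1[k] * c2[k] for k in c1)
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

# Same tree with children in a different order -> similarity 1.0
t_a = ("book", [("title", []), ("author", [("name", [])])])
t_b = ("book", [("author", [("name", [])]), ("title", [])])
print(similarity(t_a, t_b))
```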
Abstract:
The condition of bridges deteriorates with age due to different critical factors, including changes in loading, fatigue, environmental effects and natural events. In order to rate a network of bridges based on their structural condition, the condition of the components of each bridge and their effects on the behaviour of the bridge should be reliably estimated. In this paper, a new method for quantifying the criticality and vulnerability of the components of railway bridges in a network will be introduced. The types of structural analysis required to identify the criticality of the components for carrying train loads will be determined. In addition, the analytical methods for identifying the vulnerability of the components to natural events with a significant probability of occurrence, such as flood, wind, earthquake and collision, will be determined. To keep the method practical for application to a network of thousands of railway bridges, the simplicity of the structural analysis has been taken into account. Demand-by-capacity ratios of the components at both the safety and serviceability condition states, as well as weighting factors used in current bridge management systems (BMS), are taken into consideration. It will be explained what types of information related to the structural condition of a bridge need to be obtained, recorded and analysed. The authors of this paper will use this method in a new rating system introduced previously. Enhancing the accuracy and reliability of evaluating and predicting the vulnerability of railway bridges to environmental effects and natural events will be the significant achievement of this research.
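A small illustrative sketch of the rating idea described above: a component score built from demand-by-capacity ratios at the safety and serviceability condition states, combined with BMS-style weighting factors. All weights, ratios and component names are placeholders, not values from any standard or BMS.

```python
# Rank components by a weighted combination of demand/capacity ratios
# at two condition states (all numbers are illustrative placeholders).
def component_score(dcr_safety, dcr_serviceability, w_safety=0.7, w_service=0.3):
    # Higher demand/capacity ratio -> more critical / more vulnerable component
    return w_safety * dcr_safety + w_service * dcr_serviceability

components = {
    "main girder": component_score(0.85, 0.60),
    "bearing":     component_score(0.55, 0.40),
    "pier":        component_score(0.95, 0.70),
}

for name, score in sorted(components.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} criticality score {score:.2f}")
```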