252 results for Extract


Relevance: 10.00%

Abstract:

A security system based on recognition of the iris of the human eye using the wavelet transform is presented. The zero-crossings of the wavelet transform are used to extract the unique features obtained from the grey-level profiles of the iris. The recognition process is performed in two stages. The first stage builds a one-dimensional representation of the grey-level profiles of the iris and then obtains the wavelet-transform zero-crossings of the resulting representation. The second stage is the matching procedure for iris recognition. The proposed approach uses only a few selected intermediate resolution levels for matching, making it computationally efficient as well as less sensitive to noise and quantisation errors. A normalisation process is implemented to compensate for size variations due to possible changes in the camera-to-face distance. The technique has been tested on real images in both noise-free and noisy conditions, and is being investigated for real-time implementation, as a stand-alone system, for access control to high-security areas.
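The zero-crossing representation lends itself to a compact sketch. The following is a minimal illustration of the general idea, not the authors' system: the 1-D iris profile is smoothed at a few dyadic scales, its second derivative is taken (a Laplacian-of-Gaussian style wavelet), and the positions of sign changes form the signature used for matching. The scales and the dissimilarity measure here are assumptions.

    # Minimal sketch of wavelet zero-crossing signatures (illustrative only;
    # scales and the matching metric are assumed, not taken from the paper).
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def zero_crossing_signature(profile, scales=(2, 4, 8)):
        """Per scale, indices where the smoothed 2nd derivative changes sign."""
        signature = {}
        for s in scales:
            w = gaussian_filter1d(profile.astype(float), sigma=s, order=2)
            signature[s] = np.where(np.diff(np.sign(w)) != 0)[0]
        return signature

    def dissimilarity(sig_a, sig_b, length):
        """Compare two signatures as binary crossing maps (smaller = closer)."""
        total = 0.0
        for s in sig_a:
            a = np.zeros(length); a[sig_a[s]] = 1
            b = np.zeros(length); b[sig_b[s]] = 1
            total += np.abs(a - b).sum() / length
        return total / len(sig_a)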

Relevance: 10.00%

Abstract:

In recent years, Web 2.0 has provided considerable facilities for people to create, share and exchange information and ideas. As a result, user-generated content, such as reviews, has exploded. Such data provide a rich source to exploit in order to identify the information associated with specific reviewed items. Opinion mining has been widely used to identify the significant features of items (e.g., cameras) based upon user reviews. Feature extraction is the most critical step in identifying useful information from texts. Most existing approaches only find individual features of a product without revealing the structural relationships that usually exist between the features. In this paper, we propose an approach to extract features and feature relationships, represented as a tree structure called a feature taxonomy, based on frequent patterns and associations between patterns derived from user reviews. The generated feature taxonomy profiles the product at multiple levels and provides more detailed information about the product. Our experimental results, based on some popularly used review datasets, show that the proposed approach is able to capture product features and relations effectively.
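As a rough illustration of the frequent-pattern step, the sketch below is a simplification with assumed parameters rather than the paper's algorithm: it counts how often candidate feature terms and term pairs co-occur across review sentences and keeps those above a minimum support.

    # Simplified frequent-pattern mining over review sentences (illustrative;
    # min_support and the use of plain co-occurring pairs are assumptions).
    from collections import Counter
    from itertools import combinations

    def frequent_feature_patterns(sentences, min_support=3):
        """sentences: list of token lists, one per review sentence."""
        singles, pairs = Counter(), Counter()
        for tokens in sentences:
            terms = set(tokens)
            singles.update(terms)
            pairs.update(frozenset(p) for p in combinations(sorted(terms), 2))
        frequent_singles = {t for t, c in singles.items() if c >= min_support}
        frequent_pairs = {p for p, c in pairs.items() if c >= min_support}
        return frequent_singles, frequent_pairs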

Relevance: 10.00%

Abstract:

As of today, opinion mining has been widely used to identify the strengths and weaknesses of products (e.g., cameras) or services (e.g., services in medical clinics or hospitals) based upon people's feedback, such as user reviews. Feature extraction is a crucial step in opinion mining and has been used to collect useful information from user reviews. Most existing approaches only find individual features of a product without the structural relationships that usually exist between the features. In this paper, we propose an approach to extract features and feature relationships, represented as a tree structure called a feature hierarchy, based on frequent patterns and associations between patterns derived from user reviews. The generated feature hierarchy profiles the product at multiple levels and provides more detailed information about the product. Our experimental results, based on some popularly used review datasets, show that the proposed feature extraction approach can identify more correct features than the baseline model. Even though the datasets used in the experiment are about cameras, our work can be applied to generate features for a service, such as the services in hospitals or clinics.
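Complementing the pattern-mining sketch above, one plausible way to turn frequent patterns into a hierarchy, an assumed construction rather than the paper's, is to place each pattern beneath the largest frequent pattern it strictly contains:

    # Assemble a crude feature hierarchy from frequent patterns (assumed rule:
    # a pattern hangs under the largest of its strict subsets already placed;
    # patterns with no such subset hang under the empty root pattern).
    def build_hierarchy(patterns):
        """patterns: iterable of frozensets of feature terms."""
        patterns = sorted({p for p in patterns if p}, key=len)  # parents first
        tree = {frozenset(): []}                                # root = empty set
        for p in patterns:
            candidates = [q for q in tree if q < p]   # strict subsets placed so far
            parent = max(candidates, key=len)         # closest ancestor
            tree.setdefault(p, [])
            tree[parent].append(p)
        return tree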

Relevance: 10.00%

Abstract:

Guaranteeing the quality of extracted features that describe relevant knowledge to users or topics is a challenge because of the large number of extracted features. Most popular existing term-based feature selection methods suffer from noisy feature extraction, retaining features that are irrelevant to user needs. One popular remedy is to extract phrases or n-grams to describe the relevant knowledge; however, extracted n-grams and phrases usually contain a lot of noise. This paper proposes a method for reducing the noise in n-grams. The method first extracts more specific features (terms) to remove noisy features. It then uses an extended random set to accurately weight n-grams based on their distribution in the documents and the distribution of their constituent terms within the n-grams. The proposed approach not only reduces the number of extracted n-grams but also improves performance. Experimental results on the Reuters Corpus Volume 1 (RCV1) collection and TREC topics show that the proposed method significantly outperforms state-of-the-art methods underpinned by Okapi BM25, tf*idf and Rocchio.
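To make the weighting intuition concrete, here is a deliberately simplified form, not the paper's extended random set model: each bigram is weighted by how widely it is distributed over the relevant documents, scaled by the average document support of its constituent terms. The combination rule is an assumption for illustration.

    # Simplified n-gram weighting (assumed form, not the extended random set).
    from collections import Counter
    from itertools import tee

    def bigrams(tokens):
        a, b = tee(tokens); next(b, None)
        return list(zip(a, b))

    def weight_bigrams(relevant_docs):
        """relevant_docs: list of token lists from relevance-judged documents."""
        term_df, gram_df = Counter(), Counter()
        for doc in relevant_docs:
            term_df.update(set(doc))
            gram_df.update(set(bigrams(doc)))
        n = len(relevant_docs)
        weights = {}
        for g, df in gram_df.items():
            term_support = sum(term_df[t] / n for t in g) / len(g)
            weights[g] = (df / n) * term_support   # document spread x term spread
        return weights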

Relevance: 10.00%

Abstract:

It is widely acknowledged that effective asset management requires an interdisciplinary approach, in which synergies should exist between traditional disciplines such as accounting, engineering, finance, humanities, logistics, and information systems technologies. Asset management is also an important yet complex business practice. Business process modelling is proposed as an approach to manage the complexity of asset management through the modelling of asset management processes. A sound foundation for the systematic application and analysis of business process modelling in asset management is, however, yet to be developed. Fundamentally, a business process consists of activities (termed functions), events/states, and control-flow logic. As both events/states and control-flow logic are somewhat dependent on the functions themselves, it is a logical first step to identify the functions within a process. This research addresses the current gap in knowledge by developing a method to identify functions common to various industry types (termed core functions). This lays the foundation for extracting such functions, so as to identify both commonalities and variation points in asset management processes. The method combines manual text mining with a taxonomy-based approach. An example is presented.
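The core-function idea reduces to a simple set operation once function labels have been mined from each industry's process models. A minimal sketch, with hypothetical industry and function names used purely for illustration:

    # Core functions = functions present in every industry's mined label set;
    # everything else is a candidate variation point. (Example data is
    # hypothetical, not drawn from the study.)
    def core_functions(industry_functions):
        """industry_functions: dict mapping industry name -> set of function labels."""
        sets = list(industry_functions.values())
        core = set.intersection(*sets)
        variation = set.union(*sets) - core
        return core, variation

    example = {
        "utilities": {"inspect asset", "schedule maintenance", "record failure"},
        "rail":      {"inspect asset", "schedule maintenance", "renew track"},
    }
    print(core_functions(example))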

Relevance: 10.00%

Abstract:

Intrinsic or acquired resistance to chemotherapeutic agents is a common phenomenon and a major challenge in the treatment of cancer patients. Chemoresistance is defined by a complex network of factors including multi-drug resistance proteins, reduced cellular uptake of the drug, enhanced DNA repair, intracellular drug inactivation, and evasion of apoptosis. Pre-clinical models have demonstrated that many chemotherapy drugs, such as platinum-based agents, anthracyclines, and taxanes, promote activation of the NF-κB pathway. NF-κB is a key transcription factor that plays a role in the development and progression of cancer and of chemoresistance through the activation of a multitude of mediators, including anti-apoptotic genes. Consequently, NF-κB has emerged as a promising anti-cancer target. Here, we describe the role of NF-κB in cancer and in the development of resistance, particularly resistance to cisplatin. Additionally, the potential benefits and disadvantages of targeting NF-κB signaling by pharmacological intervention are addressed.

Relevance: 10.00%

Abstract:

Term-based approaches can extract many features from text documents, but most of them include noise. Many popular text-mining strategies have been adapted to reduce noisy information in the extracted features; however, they suffer from the low frequency of many useful terms. The key issue is how to discover relevance features in text documents that fulfil user information needs. To address this issue, we propose a new method to extract specific features from user relevance feedback. The proposed approach has two stages. The first stage extracts topics (or patterns) from text documents to focus on interesting topics. In the second stage, topics are deployed to lower-level terms to address the low-frequency problem and find specific terms. The specific terms are determined based on their appearances in relevance feedback and their distribution in topics or higher-level patterns. We test the proposed method with extensive experiments on the Reuters Corpus Volume 1 dataset and TREC topics. Results show that the proposed approach significantly outperforms state-of-the-art models.
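A hedged sketch of the two-stage idea follows, with assumed details throughout: stage 1 treats frequent term pairs in the relevant documents as flat "topics", and stage 2 deploys each topic to its member terms, weighting a term by its spread over the topics and over the feedback documents.

    # Two-stage toy: topics from frequent patterns, then topic-to-term
    # deployment (pattern form, support threshold and weighting are assumed).
    from collections import Counter
    from itertools import combinations

    def mine_topics(relevant_docs, min_support=2):
        counts = Counter()
        for doc in relevant_docs:
            counts.update(frozenset(p) for p in combinations(sorted(set(doc)), 2))
        return [set(t) for t, c in counts.items() if c >= min_support]

    def deploy_to_terms(topics, relevant_docs):
        df = Counter()
        for doc in relevant_docs:
            df.update(set(doc))
        term_weight = Counter()
        for topic in topics:
            for term in topic:
                # weight grows with the term's spread over topics and feedback docs
                term_weight[term] += df[term] / len(relevant_docs)
        return term_weight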

Relevance: 10.00%

Abstract:

This work is part of a series of chemical investigations of the genus Grevillea. Two new arbutin derivatives, seven new bisresorcinols (including a mixture of two isomers), three known flavonol glycosides, and four known resorcinols (including a mixture of two homologous compounds) were isolated from the ethyl acetate extract of the leaves and the methanol extract of the stems of Grevillea banksii. On the basis of spectroscopic data, the new compounds were identified as 6'-O-(3-(2-(hydroxymethyl)acryloyloxy)-2-methylpropanoyl)arbutin (1), 6'-O-(2-methylacryloyl)arbutin (2), 5,5'-(4(Z)-dodecen-1,12-diyl)bisresorcinol (6), 2'-methyl-5,5'-(4(Z)-tetradecen-1,14-diyl)bisresorcinol (8), 2,2'-di(4-hydroxyprenyl)-5,5'-(6(Z)-tetradecen-1,14-diyl)bisresorcinol (9), 2-(4-acetoxyprenyl)-2'-(4-hydroxyprenyl)-5,5'-(6(Z)-tetradecen-1,14-diyl)bisresorcinol (10), 2-(4-acetoxyprenyl)-2'-(4-hydroxyprenyl)-5,5'-(8(Z)-tetradecen-1,14-diyl)bisresorcinol (11), 5,5'-(10(Z)-tetradecen-1-on-diyl)bisresorcinol (12) and 5,5'-(4(Z)-tetradecen-1-on-diyl)bisresorcinol (13).

Relevance: 10.00%

Abstract:

Objective: To develop and evaluate machine learning techniques that identify limb fractures and other abnormalities (e.g. dislocations) from radiology reports.

Materials and Methods: 99 free-text reports of limb radiology examinations were acquired from an Australian public hospital. Two clinicians were employed to identify fractures and abnormalities from the reports; a third, senior clinician resolved disagreements. These assessors found that, of the 99 reports, 48 referred to fractures or abnormalities of limb structures. Automated methods were then used to extract features from these reports that could be useful for their automatic classification. The Naive Bayes classification algorithm and two implementations of the support vector machine algorithm were formally evaluated using cross-validation over the 99 reports.

Results: The Naive Bayes classifier accurately identifies fractures and other abnormalities from the radiology reports. These results were achieved when extracting stemmed token bigram and negation features, and when using these features in combination with SNOMED CT concepts related to abnormalities and disorders. The latter feature has not been used in previous work that attempted to classify free-text radiology reports.

Discussion: Automated classification methods have proven effective at identifying fractures and other abnormalities from radiology reports (F-measure up to 92.31%). Key to the success of these techniques are features such as stemmed token bigrams, negations, and SNOMED CT concepts associated with morphologic abnormalities and disorders.

Conclusion: This investigation shows early promising results; future work will further validate and strengthen the proposed approaches.
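A minimal sketch in the spirit of the evaluation, with assumed pipeline details and with the paper's negation and SNOMED CT concept features omitted: stemmed token bigrams feed a Naive Bayes classifier, scored by cross-validation.

    # Stemmed-bigram Naive Bayes baseline (illustrative; parameters assumed).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score
    from nltk.stem import PorterStemmer

    stemmer = PorterStemmer()

    def stem_tokens(text):
        return [stemmer.stem(t) for t in text.lower().split()]

    pipeline = make_pipeline(
        CountVectorizer(tokenizer=stem_tokens, token_pattern=None,
                        ngram_range=(2, 2)),      # stemmed token bigrams
        MultinomialNB(),
    )

    # reports: list of free-text reports; labels: 1 = fracture/abnormality
    # scores = cross_val_score(pipeline, reports, labels, cv=10, scoring="f1")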

Relevance: 10.00%

Abstract:

For clinical use, in electrocardiogram (ECG) signal analysis it is important to detect not only the centres of the P wave, the QRS complex and the T wave, but also time intervals such as the ST segment. Much research has focused entirely on QRS-complex detection, via methods such as wavelet transforms, spline fitting and neural networks. However, drawbacks include the false classification of a severe noise spike as a QRS complex, possibly requiring manual editing, or the omission of information contained in other regions of the ECG signal. While some attempts have been made to develop algorithms that detect additional signal characteristics, such as P and T waves, the reported success rates vary from person to person and from beat to beat. To address this variability we propose the use of Markov chain Monte Carlo (MCMC) statistical modelling to extract the key features of an ECG signal, and we report on a feasibility study investigating the utility of the approach. The modelling approach is examined with reference to a realistic computer-generated ECG signal, where details such as wave morphology and noise levels are variable.
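In the same feasibility spirit, here is a toy MCMC fit, an assumed model rather than the authors' full scheme: a single Gaussian "wave" (amplitude, centre, width) is fitted to a noisy synthetic segment with random-walk Metropolis sampling, so the posterior over the centre locates the wave rather than a single point estimate.

    # Random-walk Metropolis fit of one Gaussian wave to synthetic noisy data.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 200)
    true = 0.8 * np.exp(-((t - 0.3) ** 2) / (2 * 0.02 ** 2))
    y = true + rng.normal(0, 0.05, t.size)          # synthetic noisy "P wave"

    def log_post(theta, sigma=0.05):
        a, mu, w = theta
        if not (0 < a < 2 and 0 < mu < 1 and 0.005 < w < 0.2):
            return -np.inf                           # flat prior within bounds
        model = a * np.exp(-((t - mu) ** 2) / (2 * w ** 2))
        return -0.5 * np.sum((y - model) ** 2) / sigma ** 2

    theta = np.array([0.5, 0.5, 0.05])
    lp = log_post(theta)
    samples = []
    for _ in range(20000):
        prop = theta + rng.normal(0, [0.02, 0.01, 0.002])
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept step
            theta, lp = prop, lp_prop
        samples.append(theta)
    samples = np.array(samples[5000:])               # discard burn-in
    print("wave centre ~", samples[:, 1].mean())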

Relevance: 10.00%

Abstract:

Sustainability has become one of the important research topics in the field of Human-Computer Interaction (HCI). However, the majority of work has focused on Western cultures. In this paper, we explore sustainable household practices in the developing world. Our research draws on the results of an ethnographic field study of household women belonging to the so-called middle class in India. We analyze our results in the context of Blevis' [4] principles of sustainable interaction design (established within Western culture) to extract the intercultural aspects that need to be considered when designing technologies. We present examples from the field that we term "domestic artefacts": creative and sustainable ways in which household women appropriate and adapt used objects to create more useful and enriching objects that support household members' everyday activities. Our results show that the rationale behind creating domestic artefacts is not limited to practicality and usefulness; religious beliefs, traditions, family intimacy, personal interests and health issues are also incorporated into them.

Relevance: 10.00%

Abstract:

Twitter is the focus of much research attention, both in traditional academic circles and in commercial market and media research, as analytics give increasing insight into the performance of the platform in areas as diverse as political communication, crisis management, television audiencing and other industries. While methods for tracking Twitter keywords and hashtags have developed apace and are well documented, the make-up of the Twitter user base and its evolution over time have been less well understood to date. Recent research efforts have taken advantage of functionality provided by Twitter's Application Programming Interface (API) to develop methodologies for extracting information that allows us to understand the growth of Twitter, its geographic spread and the processes by which particular Twitter users have attracted followers. From politicians to sporting teams, and from YouTube personalities to reality television stars, this technique enables us to gain an understanding of what prompts users to follow others on Twitter. This article outlines how we came upon this approach, describes the method we adopted to produce accession graphs and discusses their use in Twitter research. It also addresses the wider ethical implications of social network analytics, particularly in the context of a detailed study of the Twitter user base.
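A hedged sketch of one plausible accession-graph construction, an assumption based on the property that the follower-IDs endpoint returns the most recent followers first: reverse the list to obtain accession order, then plot that order against each follower's numeric user ID as a rough proxy for account age.

    # Plot an accession graph from a saved follower-ID list (assumed ordering:
    # newest follower first, as returned by the API at the time of the study).
    import matplotlib.pyplot as plt

    def accession_graph(follower_ids):
        """follower_ids: list of numeric IDs, newest follower first."""
        in_order = list(reversed(follower_ids))      # oldest follower first
        x = range(1, len(in_order) + 1)              # accession index
        plt.scatter(x, in_order, s=2)
        plt.xlabel("follower accession order")
        plt.ylabel("follower user ID (account-age proxy)")
        plt.show()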

Relevance: 10.00%

Abstract:

The parabolic trough concentrator is the most mature, proven and widespread technology for large-scale exploitation of solar energy in middle-temperature applications. Assessment of the opportunities and possibilities of a collector system relies on its optical performance. A reliable Monte Carlo ray-tracing model of a parabolic trough collector is developed using the Zemax software. The optical performance of an ideal collector depends on the solar spectral distribution and the sunshape, and on the spectral selectivity of the associated components. Therefore, each step of the model, including the spectral distribution of the solar energy, the trough reflectance, the glazing anti-reflection coating and the absorber's selective coating, is explained and verified. Two basic outputs of the optical simulation, the radiation flux distribution around the receiver and the optical efficiency, are calculated using the model and verified against a widely accepted analytical profile and measured values, respectively; reasonably good agreement is obtained. Further investigations analyse the characteristics of the radiation distribution around the receiver tube at different insolation levels, envelope conditions and receiver selective coatings, as well as the impact of light scattered from the receiver surface on the efficiency. The model is also capable of analysing the optical performance under variable sunshape, tracking error and collector imperfections, including absorber misalignment with the focal line and the de-focal effect of the absorber, for different rim angles and geometric concentrations. The current optical model can play a significant role in understanding the optical aspects of a trough collector and can be employed to extract useful information on optical performance. In the long run, this optical model will pave the way for the construction of a low-cost standalone photovoltaic and thermal hybrid collector in Australia for small-scale domestic hot water and electricity production.
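To show the geometry behind such a model, here is a toy 2-D Monte Carlo ray trace, illustrative only and far simpler than a Zemax model, with assumed dimensions: near-vertical sun rays with the solar-disc angular spread hit a parabola y = x²/(4f), and each reflected ray is tested against a circular receiver at the focus to give a crude intercept factor.

    # Toy 2-D Monte Carlo ray trace of a parabolic trough (assumed dimensions).
    import numpy as np

    rng = np.random.default_rng(1)
    f, aperture, r_recv = 1.0, 2.5, 0.02          # focal length, half-width, tube radius (m)
    n_rays = 100_000
    sun_half_angle = 4.65e-3                       # solar disc half-angle (rad)

    x0 = rng.uniform(-aperture, aperture, n_rays)  # where each ray meets the mirror
    y0 = x0 ** 2 / (4 * f)
    theta = rng.uniform(-sun_half_angle, sun_half_angle, n_rays)
    d = np.stack([np.sin(theta), -np.cos(theta)])  # incoming unit directions

    slope = x0 / (2 * f)                           # dy/dx of the parabola
    n = np.stack([-slope, np.ones_like(slope)])
    n /= np.linalg.norm(n, axis=0)                 # unit surface normals

    dot = (d * n).sum(axis=0)
    r = d - 2 * dot * n                            # specular reflection

    # perpendicular distance from the focus (0, f) to each reflected ray
    to_focus = np.stack([0 - x0, f - y0])
    miss = np.abs(r[0] * to_focus[1] - r[1] * to_focus[0])   # |r x v|, |r| = 1
    print("intercept factor ~", (miss <= r_recv).mean())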

Relevance: 10.00%

Abstract:

The position(s) of carbon-carbon double bonds within lipids can dramatically affect their structure and reactivity and thus has a direct bearing on biological function. Commonly employed mass spectrometric approaches to the characterization of complex lipids, however, fail to localize sites of unsaturation within the molecular structure and thus cannot distinguish naturally occurring regioisomers. In a recent communication [Thomas, M. C.; Mitchell, T. W.; Blanksby, S. J. J. Am. Chem. Soc. 2006, 128, 58-59], we presented a new technique for the elucidation of double bond position in glycerophospholipids using ozone-induced fragmentation within the source of a conventional electrospray ionization mass spectrometer. Here we report the on-line analysis, using ozone electrospray mass spectrometry (OzESI-MS), of a broad range of common unsaturated lipids, including acidic and neutral glycerophospholipids, sphingomyelins, and triacylglycerols. All lipids analyzed are found to form a pair of chemically induced fragment ions diagnostic of the position of each double bond, regardless of the polarity, the number of charges, or the adduction (e.g., [M - H]-, [M - 2H]2-, [M + H]+, [M + Na]+, [M + NH4]+). The ability of OzESI-MS to distinguish lipids that differ only in the position of the double bonds is demonstrated using the glycerophosphocholine standards GPCho(9Z-18:1/9Z-18:1) and GPCho(6Z-18:1/6Z-18:1). While these regioisomers cannot be differentiated by their conventional tandem mass spectra, the OzESI-MS spectra reveal abundant fragment ions of distinctive mass-to-charge ratio (m/z). The approach is found to be sufficiently robust to be used in conjunction with the m/z 184 precursor ion scans commonly employed for the identification of phosphocholine-containing lipids in shotgun lipidomic analyses. This tandem OzESI-MS approach was used, in conjunction with conventional tandem mass spectral analysis, for the structural characterization of an unknown sphingolipid in a crude lipid extract obtained from a human lens. The OzESI-MS data confirm the presence of two regioisomers, namely SM(d18:0/15Z-24:1) and SM(d18:0/17Z-24:1), and suggest the possible presence of a third isomer, SM(d18:0/19Z-24:1), in lower abundance. The data presented herein demonstrate that OzESI-MS is a broadly applicable, on-line approach for structure determination and, when used in conjunction with established tandem mass spectrometric methods, can provide near-complete structural characterization of a range of important lipid classes. As such, OzESI-MS may provide important new insight into the molecular diversity of naturally occurring lipids.

Relevance: 10.00%

Abstract:

Ions formed from lipids during electrospray ionization of crude lipid extracts have been mass-selected within a quadrupole linear ion trap mass spectrometer and allowed to react with ozone vapor. Gas-phase ion-molecule reactions between unsaturated lipid ions and ozone are found to yield two primary product ions for each carbon-carbon double bond within the molecule. The mass-to-charge ratios of these chemically induced fragments are diagnostic of the position of unsaturation within the precursor ion. This novel analytical technique, dubbed ozone-induced dissociation (OzID), can be applied both in series and in parallel with conventional collision-induced dissociation (CID) to provide near-complete structural assignment of unknown lipids within complex mixtures without prior fractionation or derivatization. In this study, OzID is applied to a suite of complex lipid extracts from sources including human lens, bovine kidney, and commercial olive oil, demonstrating the technique to be applicable to a broad range of lipid classes, including both neutral and acidic glycerophospholipids, sphingomyelins, and triacylglycerols. Gas-phase ozonolysis reactions are also observed with different types of precursor ions, including [M + H]+, [M + Li]+, [M + Na]+, and [M - H]-, in each case yielding fragmentation data that allow double bond position to be unambiguously assigned. Within the human lens lipid extract, three sphingomyelin regioisomers, namely SM(d18:0/15Z-24:1), SM(d18:0/17Z-24:1), and SM(d18:0/19Z-24:1), and a novel phosphatidylethanolamine alkyl ether, GPEtn(11Z-18:1e/9Z-18:1), are identified using a combination of CID and OzID. These discoveries demonstrate that lipid identification based on CID alone belies the natural structural diversity in lipid biochemistry and illustrate the potential of OzID as a complementary approach within automated, high-throughput lipid analysis protocols.