Abstract:
Cable structures find many applications, such as in power transmission, in anchors and especially in bridges. They serve as major load-bearing elements in suspension bridges, which are capable of spanning long distances. All bridges, including suspension bridges, are designed to have long service lives. However, during this long life, they become vulnerable to damage due to changes in loading, deterioration with age and random actions such as impacts. The main cables are more vulnerable to corrosion and fatigue than the other bridge components, and the resulting damage reduces the serviceability and ultimate capacity of the bridge. Detecting and locating such damage at the earliest possible stage is challenging for current structural health monitoring (SHM) systems of long-span suspension bridges. Damage or deterioration of a structure alters its stiffness, mass and damping properties, which in turn modify its vibration characteristics. This phenomenon can therefore be used to detect damage in a structure. The modal flexibility, which depends on the vibration characteristics of a structure, has been identified as a successful damage indicator in beam and plate elements, trusses and simple structures in reinforced concrete and steel. Its application to detecting and locating damage in suspension bridge main cables has, however, received limited attention in recent research. This paper therefore examines the potential of the modal flexibility based Damage Index (DI) for detecting and locating damage in the main cable of a suspension bridge under four different damage scenarios. Towards this end, a numerical model of a suspension bridge cable was developed to extract the modal parameters at both damaged and undamaged states. The damage scenarios considered in this study, with varied location and severity, were simulated by changing the stiffness at particular locations of the cable model. Results confirm that the DI has the potential to successfully detect and locate damage in suspension bridge main cables. This simple method can therefore enable bridge engineers and managers to detect and locate damage in suspension bridges at an early stage, minimize expensive retrofitting and prevent bridge collapse.
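As a minimal sketch of a modal-flexibility-based damage index (the modal data, the number of modes, and the diagonal-difference index below are illustrative assumptions, not the paper's exact formulation): the flexibility matrix is assembled from mass-normalised mode shapes and natural frequencies, and the index at each node is taken as the change in diagonal flexibility between the undamaged and damaged states.

```python
import numpy as np

def modal_flexibility(frequencies_hz, mode_shapes):
    """Assemble the modal flexibility matrix F = sum_i (phi_i phi_i^T / omega_i^2)
    from natural frequencies (Hz) and mass-normalised mode shapes (columns)."""
    omega = 2.0 * np.pi * np.asarray(frequencies_hz)   # rad/s
    phi = np.asarray(mode_shapes)                       # shape (n_dof, n_modes)
    return (phi / omega**2) @ phi.T

def damage_index(freq_u, phi_u, freq_d, phi_d):
    """Change in diagonal flexibility at each DOF; peaks suggest damage locations."""
    return np.abs(np.diag(modal_flexibility(freq_d, phi_d))
                  - np.diag(modal_flexibility(freq_u, phi_u)))

# Hypothetical modal data for a cable discretised into 5 nodes, 2 modes retained.
freq_u = [0.45, 0.91]
phi_u = np.array([[0.31, 0.59], [0.59, 0.95], [0.73, 0.00],
                  [0.59, -0.95], [0.31, -0.59]])
freq_d = [0.44, 0.90]                                    # slight drop after simulated damage
phi_d = phi_u * np.array([1.0, 1.0, 1.08, 1.0, 1.0])[:, None]  # local change at node 3
print(damage_index(freq_u, phi_u, freq_d, phi_d))        # peak at the damaged node
```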
Abstract:
Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a ‘noisy’ environment such as contemporary social media, is to collect the pertinent information, be that information for a specific study, tweets which can inform emergency services or other responders to an ongoing crisis, or information which gives an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the datasets collected and analyzed are preformed, that is, they are built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders. The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of those need to be immediately placed in front of responders for manual filtering and possible action. The paper suggests two solutions: content analysis and user profiling. In the former case, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known as an authoritative source. Through these techniques, the information contained in a large dataset can be filtered down to match the expected capacity of emergency responders, and knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection. The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and sportswomen create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form and emotions which has the potential to affect future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, like American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services are still niche operations, much of the value of information is lost by the time it reaches one of these services. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate. The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect the tweets and make them available for exploration and analysis.
A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in a way, keyword creation is part strategy and part art. In this paper we describe strategies for the creation of a social media archive, specifically tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement's presence on Twitter changes over time. We also discuss the opportunities and methods for extracting smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study. The common theme amongst these papers is that of constructing a dataset, filtering it for a specific purpose, and then using the resulting information to aid future data collection. The intention is that, through the papers presented and the subsequent discussion, the panel will inform the wider research community not only on the objectives and limitations of data collection, live analytics, and filtering, but also on current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customized depending on the project stakeholders.
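A minimal sketch of the kind of content scoring the first paper describes (the keyword weights, urgency cues, and threshold here are illustrative assumptions, not the panel's actual scoring scheme): each incoming tweet receives a relevance score from topic keywords and an urgency score from simple cues, and only tweets above a threshold are surfaced to responders.

```python
# Illustrative content-analysis filter for a stream of crisis tweets.
TOPIC_KEYWORDS = {"flood": 2.0, "evacuate": 3.0, "trapped": 4.0, "bridge": 1.0}  # assumed weights
URGENCY_CUES = {"help", "urgent", "now", "emergency"}                             # assumed cues

def score_tweet(text: str) -> float:
    """Combine topic relevance and urgency into a single score."""
    tokens = text.lower().split()
    relevance = sum(TOPIC_KEYWORDS.get(t, 0.0) for t in tokens)
    urgency = sum(1.0 for t in tokens if t in URGENCY_CUES)
    return relevance + 2.0 * urgency

def filter_for_responders(tweets, threshold=4.0):
    """Pass through only tweets scoring above the threshold, highest first."""
    scored = [(score_tweet(t), t) for t in tweets]
    return [t for s, t in sorted(scored, reverse=True) if s >= threshold]

stream = ["Trapped on roof near the bridge, need help now",
          "Nice weather today",
          "Flood warning issued, evacuate low areas"]
print(filter_for_responders(stream))
```

In practice the keyword weights themselves would be refreshed as the event's vocabulary shifts, which is the iterative refinement the abstract describes.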
Abstract:
A security system based on the recognition of the iris of human eyes using the wavelet transform is presented. The zero-crossings of the wavelet transform are used to extract the unique features obtained from the grey-level profiles of the iris. The recognition process is performed in two stages. The first stage consists of building a one-dimensional representation of the grey-level profiles of the iris, followed by obtaining the wavelet transform zero-crossings of the resulting representation. The second stage is the matching procedure for iris recognition. The proposed approach uses only a few selected intermediate resolution levels for matching, thus making it computationally efficient as well as less sensitive to noise and quantisation errors. A normalisation process is implemented to compensate for size variations due to the possible changes in the camera-to-face distance. The technique has been tested on real images in both noise-free and noisy conditions. The technique is being investigated for real-time implementation, as a stand-alone system, for access control to high-security areas.
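A rough sketch of extracting a zero-crossing signature from a 1-D grey-level profile (the synthetic profile and the Laplacian-of-Gaussian style detail signal below are assumptions standing in for the authors' dyadic wavelet decomposition): at each of a few intermediate scales, the profile is smoothed, a second-derivative-like detail signal is computed, and the positions where it changes sign form the signature used for matching.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def zero_crossing_signature(profile, scales=(2, 4, 8)):
    """For each dyadic-like scale, smooth the 1-D grey-level profile, take a
    second-derivative detail signal, and record where it changes sign."""
    signature = {}
    for s in scales:
        detail = np.gradient(np.gradient(gaussian_filter1d(profile, sigma=s)))
        crossings = np.where(np.diff(np.sign(detail)) != 0)[0]
        signature[s] = crossings
    return signature

# Hypothetical circular grey-level profile sampled around one iris "virtual circle".
theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
profile = 128 + 40 * np.sin(3 * theta) + 10 * np.cos(7 * theta)
sig = zero_crossing_signature(profile)
print({scale: len(positions) for scale, positions in sig.items()})
```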
Abstract:
In recent years, Web 2.0 has provided considerable facilities for people to create, share and exchange information and ideas. As a result, user-generated content, such as reviews, has exploded. Such data provide a rich source to exploit in order to identify the information associated with specific reviewed items. Opinion mining has been widely used to identify the significant features of items (e.g., cameras) based upon user reviews. Feature extraction is the most critical step in identifying useful information from texts. Most existing approaches only find individual features of a product without revealing the structural relationships between the features, which usually exist. In this paper, we propose an approach to extract features and feature relationships, represented as a tree structure called a feature taxonomy, based on frequent patterns and associations between patterns derived from user reviews. The generated feature taxonomy profiles the product at multiple levels and provides more detailed information about the product. Our experimental results based on some popularly used review datasets show that our proposed approach is able to capture the product features and relations effectively.
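A rough sketch of the general idea (the review sentences, the support threshold, the stoplist and the nesting rule below are assumptions for illustration, not the authors' algorithm): frequent feature terms and term pairs are mined from review sentences, and each frequent pair is attached beneath its more frequent constituent term, yielding a small tree-like feature taxonomy.

```python
from collections import Counter
from itertools import combinations

STOPWORDS = {"the", "is", "a", "of", "and", "could", "be"}   # assumed stoplist
MIN_SUPPORT = 2                                              # assumed threshold

reviews = [                                                  # hypothetical camera reviews
    "battery life is great", "the battery charger died quickly",
    "battery life could be better", "image quality is excellent",
    "low light image quality is poor", "the lens cap feels cheap",
]

# 1. Mine frequent terms and frequent term pairs (simple frequent patterns).
sentences = [set(r.lower().split()) - STOPWORDS for r in reviews]
term_counts = Counter(t for s in sentences for t in s)
pair_counts = Counter(p for s in sentences for p in combinations(sorted(s), 2))
frequent_terms = {t for t, c in term_counts.items() if c >= MIN_SUPPORT}
frequent_pairs = [p for p, c in pair_counts.items()
                  if c >= MIN_SUPPORT and set(p) <= frequent_terms]

# 2. Attach each frequent pair beneath its more frequent (more general) term, so
#    broad features sit above their more specific sub-features in the taxonomy.
taxonomy = {t: [] for t in frequent_terms}
for a, b in frequent_pairs:
    parent, child = (a, b) if term_counts[a] >= term_counts[b] else (b, a)
    taxonomy[parent].append(child)

for parent, children in sorted(taxonomy.items()):
    if children:
        print(f"{parent} -> {children}")
```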
Abstract:
As of today, opinion mining has been widely used to identify the strengths and weaknesses of products (e.g., cameras) or services (e.g., services in medical clinics or hospitals) based upon people's feedback such as user reviews. Feature extraction is a crucial step for opinion mining and has been used to collect useful information from user reviews. Most existing approaches only find individual features of a product without the structural relationships between the features which usually exist. In this paper, we propose an approach to extract features and feature relationships, represented as a tree structure called a feature hierarchy, based on frequent patterns and associations between patterns derived from user reviews. The generated feature hierarchy profiles the product at multiple levels and provides more detailed information about the product. Our experimental results based on some popularly used review datasets show that the proposed feature extraction approach can identify more correct features than the baseline model. Even though the datasets used in the experiment are about cameras, our work can be applied to generate features about a service, such as the services in hospitals or clinics.
Abstract:
Guaranteeing the quality of extracted features that describe relevant knowledge to users or topics is a challenge because of the large number of extracted features. Most popular term-based feature selection methods suffer from extracting noisy features that are irrelevant to user needs. One popular remedy is to extract phrases or n-grams to describe the relevant knowledge. However, extracted n-grams and phrases usually contain a lot of noise. This paper proposes a method for reducing the noise in n-grams. The method first extracts more specific features (terms) to remove noisy features. The method then uses an extended random set to accurately weight n-grams based on their distribution in the documents and the distribution of their terms within the n-grams. The proposed approach not only reduces the number of extracted n-grams but also improves performance. The experimental results on the Reuters Corpus Volume 1 (RCV1) data collection and TREC topics show that the proposed method significantly outperforms state-of-the-art methods underpinned by Okapi BM25, tf*idf and Rocchio.
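A simplified sketch of weighting n-grams by document and term distributions (the corpus, the bigram size, and the plain frequency-based weighting below are assumptions, not the paper's extended-random-set formulation): each bigram is weighted by how widely it is spread across the relevant documents, damped by how generic its constituent terms are.

```python
from collections import Counter

relevant_docs = [                       # hypothetical relevance-judged documents
    "stock market price fall triggers sell off",
    "market price volatility and stock options",
    "football match price of tickets",
]

def bigrams(text):
    tokens = text.lower().split()
    return list(zip(tokens, tokens[1:]))

# Distribution of bigrams across documents and of terms across bigrams.
doc_bigrams = [Counter(bigrams(d)) for d in relevant_docs]
bigram_df = Counter(bg for doc in doc_bigrams for bg in doc)           # document frequency
term_in_bigram = Counter(t for doc in doc_bigrams for bg in doc for t in bg)

def weight(bg):
    """Document spread of the bigram, damped by how 'generic' its terms are
    (terms that occur in many different bigrams contribute less)."""
    spread = bigram_df[bg] / len(relevant_docs)
    specificity = sum(1.0 / term_in_bigram[t] for t in bg) / 2.0
    return spread * specificity

ranked = sorted(bigram_df, key=weight, reverse=True)
print(ranked[:5])      # highest-weighted bigrams survive; noisy ones fall away
```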
Abstract:
It is widely acknowledged that effective asset management requires an interdisciplinary approach in which synergies exist between traditional disciplines such as accounting, engineering, finance, humanities, logistics, and information systems technologies. Asset management is also an important, yet complex, business practice. Business process modelling is proposed as an approach to manage the complexity of asset management through the modelling of asset management processes. A sound foundation for the systematic application and analysis of business process modelling in asset management is, however, yet to be developed. Fundamentally, a business process consists of activities (termed functions), events/states, and control flow logic. As both events/states and control flow logic are somewhat dependent on the functions themselves, it is a logical step to first identify the functions within a process. This research addresses the current gap in knowledge by developing a method to identify functions common to various industry types (termed core functions). This lays the foundation for extracting such functions, so as to identify both commonalities and variation points in asset management processes. The method combines manual text mining with a taxonomy-based approach. An example is presented.
Abstract:
Intrinsic or acquired resistance to chemotherapeutic agents is a common phenomenon and a major challenge in the treatment of cancer patients. Chemoresistance is defined by a complex network of factors including multi-drug resistance proteins, reduced cellular uptake of the drug, enhanced DNA repair, intracellular drug inactivation, and evasion of apoptosis. Pre-clinical models have demonstrated that many chemotherapy drugs, such as platinum-based agents, anthracyclines, and taxanes, promote the activation of the NF-κB pathway. NF-κB is a key transcription factor, playing a role in the development and progression of cancer and chemoresistance through the activation of a multitude of mediators including anti-apoptotic genes. Consequently, NF-κB has emerged as a promising anti-cancer target. Here, we describe the role of NF-κB in cancer and in the development of resistance, particularly resistance to cisplatin. Additionally, the potential benefits and disadvantages of targeting NF-κB signaling by pharmacological intervention will be addressed.
Abstract:
Term-based approaches can extract many features from text documents, but most of them include noise. Many popular text-mining strategies have been adapted to reduce noisy information in extracted features; however, these techniques also suffer from the low-frequency problem. The key issue is how to discover relevance features in text documents to fulfil user information needs. To address this issue, we propose a new method to extract specific features from user relevance feedback. The proposed approach includes two stages. The first stage extracts topics (or patterns) from text documents to focus on interesting topics. In the second stage, topics are deployed to lower-level terms to address the low-frequency problem and find specific terms. The specific terms are determined based on their appearance in relevance feedback and their distribution in topics or high-level patterns. We test the proposed method with extensive experiments on the Reuters Corpus Volume 1 dataset and TREC topics. Results show that our proposed approach significantly outperforms state-of-the-art models.
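A minimal sketch of deploying high-level patterns to term weights (the patterns, their supports, and the weighting rule are illustrative assumptions rather than the paper's exact model): each term inherits weight from every discovered pattern (topic) that contains it, so specific terms occurring in several strong patterns rise above noisy low-frequency terms.

```python
from collections import defaultdict

# Hypothetical patterns (frequent termsets) mined from relevance-feedback documents,
# each with its support (fraction of relevant documents containing the pattern).
patterns = {
    ("interest", "rate"): 0.60,
    ("interest", "rate", "cut"): 0.35,
    ("central", "bank"): 0.50,
    ("bank",): 0.70,
}

# Deploy patterns to terms: a term gathers weight from every pattern containing it,
# with each pattern's support shared evenly among its terms.
term_weight = defaultdict(float)
for pattern, support in patterns.items():
    for term in pattern:
        term_weight[term] += support / len(pattern)

for term, w in sorted(term_weight.items(), key=lambda kv: -kv[1]):
    print(f"{term:10s} {w:.3f}")
```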
Abstract:
This work is part of a series of chemical investigations of the genus Grevillea. Two new arbutin derivatives, seven new bisresorcinols, including a mixture of two isomers, three known flavonol glycosides, and four known resorcinols, including a mixture of two homologous compounds, were isolated from the ethyl acetate extract of the leaves and the methanol extract of the stems of Grevillea banksii. The new compounds were identified, on the basis of spectroscopic data, as 6'-O-(3-(2-(hydroxymethyl)acryloyloxy)-2-methylpropanoyl)arbutin (1), 6'-O-(2-methylacryloyl)arbutin (2), 5,5'-(4(Z)-dodecen-1,12-diyl)bisresorcinol (6), 2'-methyl-5,5'-(4(Z)-tetradecen-1,14-diyl)bisresorcinol (8), 2,2'-di(4-hydroxyprenyl)-5,5'-(6(Z)-tetradecen-1,14-diyl)bisresorcinol (9), 2-(4-acetoxyprenyl)-2'-(4-hydroxyprenyl)-5,5'-(6(Z)-tetradecen-1,14-diyl)bisresorcinol (10), 2-(4-acetoxyprenyl)-2'-(4-hydroxyprenyl)-5,5'-(8(Z)-tetradecen-1,14-diyl)bisresorcinol (11), 5,5'-(10(Z)-tetradecen-1-on-diyl)bisresorcinol (12) and 5,5'-(4(Z)-tetradecen-1-on-diyl)bisresorcinol (13).
Abstract:
Objective: To develop and evaluate machine learning techniques that identify limb fractures and other abnormalities (e.g. dislocations) from radiology reports. Materials and Methods: 99 free-text reports of limb radiology examinations were acquired from an Australian public hospital. Two clinicians were employed to identify fractures and abnormalities from the reports; a third senior clinician resolved disagreements. These assessors found that, of the 99 reports, 48 referred to fractures or abnormalities of limb structures. Automated methods were then used to extract features from these reports that could be useful for their automatic classification. The Naive Bayes classification algorithm and two implementations of the support vector machine algorithm were formally evaluated using cross-validation over the 99 reports. Results: Results show that the Naive Bayes classifier accurately identifies fractures and other abnormalities from the radiology reports. These results were achieved when extracting stemmed token bigram and negation features, and when using these features in combination with SNOMED CT concepts related to abnormalities and disorders. The latter feature has not been used in previous work that attempted to classify free-text radiology reports. Discussion: Automated classification methods have proven effective at identifying fractures and other abnormalities from radiology reports (F-measure up to 92.31%). Key to the success of these techniques are features such as stemmed token bigrams, negations, and SNOMED CT concepts associated with morphologic abnormalities and disorders. Conclusion: This investigation shows early promising results, and future work will further validate and strengthen the proposed approaches.
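An illustrative pipeline in the spirit of the described classifier (the example reports, labels, and crude negation rule are assumptions; stemming and the SNOMED CT concept features used in the paper are not reproduced here): token bigrams with simple negation marking are fed to a multinomial Naive Bayes classifier.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def mark_negation(text):
    """Prefix the few tokens following a negation cue with 'NEG_'
    (a crude stand-in for proper negation detection such as NegEx)."""
    tokens, out, neg = text.lower().split(), [], 0
    for t in tokens:
        if t in {"no", "not", "without"}:
            neg = 3
            out.append(t)
        elif neg > 0:
            out.append("NEG_" + t)
            neg -= 1
        else:
            out.append(t)
    return " ".join(out)

reports = ["undisplaced fracture of the distal radius",        # hypothetical reports
           "no fracture or dislocation seen",
           "comminuted fracture of the tibial shaft",
           "normal alignment, no bony abnormality"]
labels = [1, 0, 1, 0]                                          # 1 = fracture/abnormality

clf = make_pipeline(
    CountVectorizer(preprocessor=mark_negation, ngram_range=(1, 2)),
    MultinomialNB(),
)
clf.fit(reports, labels)
print(clf.predict(["no acute fracture identified"]))
```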
Abstract:
For clinical use, in electrocardiogram (ECG) signal analysis it is important to detect not only the centre of the P wave, the QRS complex and the T wave, but also the time intervals, such as the ST segment. Much research has focused entirely on QRS complex detection, via methods such as wavelet transforms, spline fitting and neural networks. However, drawbacks include the false classification of a severe noise spike as a QRS complex, possibly requiring manual editing, or the omission of information contained in other regions of the ECG signal. While some attempts have been made to develop algorithms that detect additional signal characteristics, such as P and T waves, the reported success rates vary from person to person and from beat to beat. To address this variability we propose the use of Markov chain Monte Carlo statistical modelling to extract the key features of an ECG signal, and we report on a feasibility study to investigate the utility of the approach. The modelling approach is examined with reference to a realistic computer-generated ECG signal, where details such as wave morphology and noise levels are variable.
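A toy Metropolis-Hastings sketch of the idea (the synthetic signal, the single-Gaussian wave model, the priors, and the proposal widths are assumptions for illustration; the paper's full morphology model is richer): the amplitude, centre, and width of one wave are sampled from the posterior given a noisy signal segment.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)

def wave(params, t):
    """One ECG wave modelled as a Gaussian bump: amplitude, centre, width."""
    a, mu, sigma = params
    return a * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

true_params = np.array([0.3, 0.35, 0.04])         # e.g. a P-wave-like bump
signal = wave(true_params, t) + rng.normal(0.0, 0.02, t.size)

def log_post(params):
    a, mu, sigma = params
    if not (0 < a < 2 and 0 < mu < 1 and 0.005 < sigma < 0.2):
        return -np.inf                             # flat priors on plausible ranges
    resid = signal - wave(params, t)
    return -0.5 * np.sum(resid ** 2) / 0.02 ** 2   # Gaussian noise likelihood

# Random-walk Metropolis-Hastings over the three wave parameters.
current = np.array([0.5, 0.5, 0.05])
lp = log_post(current)
samples = []
for _ in range(5000):
    proposal = current + rng.normal(0.0, [0.02, 0.01, 0.003])
    lp_new = log_post(proposal)
    if np.log(rng.random()) < lp_new - lp:
        current, lp = proposal, lp_new
    samples.append(current)

print(np.mean(samples[2000:], axis=0))             # posterior means near true_params
```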
Abstract:
Sustainability has become one of the important research topics in the field of Human Computer Interaction (HCI). However, the majority of this work has focused on Western cultures. In this paper, we explore sustainable household practices in the developing world. Our research draws on the results of an ethnographic field study of household women belonging to the so-called middle class in India. We analyze our results in the context of Blevis' [4] principles of sustainable interaction design (established within the Western culture) to extract the intercultural aspects that need to be considered when designing technologies. We present examples from the field that we term "domestic artefacts". Domestic artefacts represent creative and sustainable ways in which household women appropriate and adapt used objects to create more useful and enriching objects that support household members' everyday activities. Our results show that the rationale behind creating domestic artefacts is not limited to practicality and usefulness; it also reflects how religious beliefs, traditions, family intimacy, personal interests and health issues are incorporated into them.
Abstract:
Twitter is the focus of much research attention, both in traditional academic circles and in commercial market and media research, as analytics give increasing insight into the performance of the platform in areas as diverse as political communication, crisis management, television audiencing and other industries. While methods for tracking Twitter keywords and hashtags have developed apace and are well documented, the make-up of the Twitter user base and its evolution over time have been less well understood to date. Recent research efforts have taken advantage of functionality provided by Twitter's Application Programming Interface to develop methodologies for extracting information that allows us to understand the growth of Twitter, its geographic spread and the processes by which particular Twitter users have attracted followers. From politicians to sporting teams, and from YouTube personalities to reality television stars, this technique enables us to gain an understanding of what prompts users to follow others on Twitter. This article outlines how we arrived at this approach, describes the method we adopted to produce accession graphs, and discusses their use in Twitter research. It also addresses the wider ethical implications of social network analytics, particularly in the context of a detailed study of the Twitter user base.
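A minimal sketch of plotting a follower accession curve (the follower data are entirely synthetic, and the way real studies obtain per-follower timing, for example from repeated API snapshots, is not reproduced here): for each follower the day they were first observed following an account is recorded, and the cumulative follower count over those days forms the accession graph, with bursts revealing events that attracted followers.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic, illustrative data: for each follower of one account, the day (offset
# from the start of observation) on which that follower was first seen following.
rng = np.random.default_rng(1)
first_seen_day = np.sort(np.concatenate([
    rng.uniform(0, 200, 800),          # slow organic growth
    rng.uniform(200, 210, 3000),       # burst after a hypothetical high-profile event
    rng.uniform(210, 400, 1200),
]))

cumulative_followers = np.arange(1, first_seen_day.size + 1)

plt.plot(first_seen_day, cumulative_followers)
plt.xlabel("Days since start of observation")
plt.ylabel("Cumulative followers")
plt.title("Follower accession graph (synthetic data)")
plt.show()
```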
Abstract:
The parabolic trough concentrating collector is the most mature, proven and widespread technology for large-scale exploitation of solar energy in medium-temperature applications. Assessment of the opportunities and capabilities of the collector system relies on its optical performance. A reliable Monte Carlo ray-tracing model of a parabolic trough collector is developed using the Zemax software. The optical performance of an ideal collector depends on the solar spectral distribution, the sunshape, and the spectral selectivity of the associated components. Therefore, each step of the model, including the spectral distribution of the solar energy, the trough reflectance, the glazing anti-reflection coating and the absorber selective coating, is explained and verified. Two basic outputs of the optical simulation, the radiation flux distribution around the receiver and the optical efficiency, are calculated using the model and verified against a widely accepted analytical profile and measured values, respectively; reasonably good agreement is obtained. Further investigations analyse the characteristics of the radiation distribution around the receiver tube under different insolation levels, envelope conditions and receiver selective coatings, as well as the impact of light scattered from the receiver surface on the efficiency. The model is also capable of analysing the optical performance under variable sunshape, tracking error, collector imperfections (including absorber misalignment with the focal line and absorber de-focusing), different rim angles, and geometric concentration ratios. The current optical model can play a significant role in understanding the optical aspects of a trough collector, and can be employed to extract useful information on the optical performance. In the long run, this optical model will pave the way for the construction of low-cost standalone photovoltaic-thermal hybrid collectors in Australia for small-scale domestic hot water and electricity production.
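A toy two-dimensional Monte Carlo ray trace of a parabolic trough (the geometry, error magnitude, and receiver size are assumed values; the paper's Zemax model handles full spectral and three-dimensional effects): vertical rays strike the parabolic mirror, are reflected with a random angular error representing sunshape and slope errors, and the intercept factor is the fraction of reflected rays that hit the receiver tube at the focus.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed geometry: focal length, aperture half-width, receiver tube radius (m),
# and a combined optical error (sunshape plus twice the mirror slope error), in rad.
FOCAL_LENGTH, HALF_APERTURE, RECEIVER_RADIUS, OPTICAL_ERROR = 1.71, 2.88, 0.035, 0.008
N_RAYS = 100_000

# Sample ray hit points across the aperture; parabola y = x^2 / (4 f).
x = rng.uniform(-HALF_APERTURE, HALF_APERTURE, N_RAYS)
y = x**2 / (4.0 * FOCAL_LENGTH)

# Unit surface normals of the parabola at the hit points.
nx, ny = -x / (2.0 * FOCAL_LENGTH), np.ones_like(x)
norm = np.hypot(nx, ny)
nx, ny = nx / norm, ny / norm

# Reflect the incoming vertical rays d = (0, -1): r = d - 2 (d.n) n.
d_dot_n = -ny
rx, ry = -2.0 * d_dot_n * nx, -1.0 - 2.0 * d_dot_n * ny

# Perturb each reflected ray by a Gaussian angular error (rotation in the plane).
err = rng.normal(0.0, OPTICAL_ERROR, N_RAYS)
cos_e, sin_e = np.cos(err), np.sin(err)
rx, ry = rx * cos_e - ry * sin_e, rx * sin_e + ry * cos_e

# Perpendicular distance from the focus (0, f) to each reflected ray line.
fx, fy = -x, FOCAL_LENGTH - y
dist = np.abs(fx * ry - fy * rx) / np.hypot(rx, ry)

intercept_factor = np.mean(dist <= RECEIVER_RADIUS)
print(f"Estimated intercept factor: {intercept_factor:.3f}")
```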