984 results for natural classification
Abstract:
Isofraxidin is one of the main bioactive constituents in the root of Acanthopanax senticosus, which has antifatigue, antistress, and immuno-accommodating effects. In this study, an ultraperformance LC (UPLC)-ESI MS method was developed for analyzing isofraxidin and its metabolites in rat plasma. The analysis was performed on a UPLC coupled with ESI MS (quadrupole tandem TOF MS). The lower limit of detection (LLOD) for isofraxidin was 0.25 ng/mL, the intraday and interday precisions were both less than 10%, and the extraction recovery was more than 80%. Isofraxidin and two metabolites (M1 and M2) were detected in rat plasma after oral administration of isofraxidin; the molecular polarities of M1 and M2 were both increased compared with isofraxidin. The metabolites were identified as 5,6-dihydroxy-7-methoxycoumarin and 5-hydroxy-6,7-dimethoxycoumarin on the basis of parent ion spectra, product ion spectra, and exact mass and elemental composition analyses.
Abstract:
The natural convection thermal boundary layer adjacent to the heated inclined wall of a right-angled triangle, with an adiabatic fin attached to that surface, is investigated by numerical simulation. A finite-volume-based unsteady numerical model is adopted for the simulation. The numerical results reveal that the development of the boundary layer along the inclined surface is characterized by three distinct stages: a start-up stage, a transitional stage, and a steady stage. These three stages can be clearly identified from the numerical simulations. Moreover, in the presence of the adiabatic fin, the thermal boundary layer adjacent to the inclined wall initially breaks; however, it reattaches to the downstream boundary layer beyond the fin. Particular attention is given to the boundary layer development near the fin.
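As a toy illustration of the unsteady finite-volume time-marching such simulations rely on, the sketch below solves one-dimensional transient heat diffusion with a heated wall and an adiabatic face. It is a deliberately simplified stand-in for the paper's two-dimensional natural-convection solver, and every parameter value is made up.

```python
# Generic unsteady finite-volume update (1D heat diffusion), far simpler than
# the 2D natural-convection solver the abstract describes, but showing the
# explicit time-marching structure such models share. All values are invented.
import numpy as np

n, L, alpha = 50, 1.0, 1e-4         # cells, domain length [m], diffusivity [m^2/s]
dx = L / n
dt = 0.25 * dx**2 / alpha           # conservative explicit stability limit
T = np.zeros(n)                     # initial temperature field
T_wall = 1.0                        # heated wall (left boundary)

for step in range(2000):
    flux = np.zeros(n + 1)                          # heat flux at cell faces
    flux[1:-1] = -alpha * (T[1:] - T[:-1]) / dx     # interior faces
    flux[0] = -alpha * (T[0] - T_wall) / (dx / 2)   # heated wall face
    flux[-1] = 0.0                                  # adiabatic right face
    T -= dt * (flux[1:] - flux[:-1]) / dx           # per-cell energy balance

print(f"wall-adjacent temperature after {2000 * dt:.0f} s: {T[0]:.3f}")
```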
Abstract:
Internationally, transit oriented development (TOD) is characterised by moderate- to high-density development with diverse land use patterns and well-connected street networks centred around high-frequency transit stops (bus and rail). Although different TOD typologies have been developed in different contexts, they are based on subjective evaluation criteria derived from the context in which they are built and typically lack a validation measure. Arguably, certain sets of TOD characteristics perform better in certain contexts, and being able to optimise TOD effectiveness would facilitate planning and supporting policy development. This research utilises data from census collection districts (CCDs) in Brisbane with different sets of TOD attributes measured across six objectively quantified built environment indicators: net employment density, net residential density, land use diversity, intersection density, cul-de-sac density, and public transport accessibility. Using these measures, a Two Step Cluster Analysis was conducted to identify natural groupings of the CCDs with similar profiles, resulting in four unique TOD clusters: (a) residential TODs, (b) activity centre TODs, (c) potential TODs, and (d) TOD non-suitability. The typologies are validated by estimating a multinomial logistic regression model in order to understand the mode choice behaviour of 10,013 individuals living in these areas. Results indicate that, compared with people living in areas classified as residential TODs, people who reside in non-TOD clusters were significantly less likely to use public transport (PT) (1.4 times) and active transport (4 times) relative to the car. People living in areas classified as potential TODs were 1.3 times less likely to use PT, and 2.5 times less likely to use active transport, compared with using the car. Little difference in mode choice behaviour was evident between people living in areas classified as residential TODs and activity centre TODs. The results suggest that: (a) two types of TOD may be suitable for classification and affect mode choice in Brisbane; (b) TOD typologies should be developed based on their TOD profile and performance metrics; and (c) both bus stop and train station based TODs are suitable for development in Brisbane.
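A minimal sketch of the two-stage analysis described above: cluster areas on built-environment indicators, then model individual mode choice with a multinomial logit. All column names and data are hypothetical, and KMeans is used only as a stand-in for SPSS-style Two Step Cluster Analysis.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical built-environment indicators for census collection districts.
ccd = pd.DataFrame({
    "emp_density": rng.gamma(2.0, 30.0, 500),
    "res_density": rng.gamma(2.0, 20.0, 500),
    "land_use_diversity": rng.uniform(0, 1, 500),
    "intersection_density": rng.gamma(2.0, 15.0, 500),
    "culdesac_density": rng.gamma(1.5, 5.0, 500),
    "pt_accessibility": rng.uniform(0, 1, 500),
})

# Stage 1: group districts into four TOD clusters.
X = StandardScaler().fit_transform(ccd)
ccd["tod_cluster"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Stage 2: multinomial logit for individual mode choice (0 = car,
# 1 = public transport, 2 = active transport), on synthetic individuals.
people = pd.DataFrame({
    "tod_cluster": rng.integers(0, 4, 2000),
    "mode": rng.integers(0, 3, 2000),
})
exog = sm.add_constant(pd.get_dummies(people["tod_cluster"], prefix="tod",
                                      drop_first=True, dtype=float))
fit = sm.MNLogit(people["mode"], exog).fit(disp=False)
print(np.exp(fit.params))  # relative risk ratios vs. the car baseline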
Abstract:
We exploit a voting reform in France to estimate the causal effect of exit poll information on turnout and bandwagon voting. Before the change in legislation, individuals in some French overseas territories voted after the election result had already been made public via exit poll information from mainland France. We estimate that knowing the exit poll information decreases voter turnout by about 12 percentage points. Our study provides the first clean empirical design outside the laboratory to demonstrate the effect of such knowledge on voter turnout. Furthermore, we find that exit poll information significantly increases bandwagon voting; that is, voters who choose to turn out are more likely to vote for the expected winner.
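One simple way to formalise this kind of natural-experiment comparison is a difference-in-differences calculation: turnout in territories that voted after exit polls were public, before and after the reform, against mainland turnout as a control. The numbers below are invented and the paper's actual estimator may differ; only the arithmetic of the comparison is illustrated.

```python
import pandas as pd

df = pd.DataFrame({
    "group":   ["exposed", "exposed", "control", "control"],
    "period":  ["pre_reform", "post_reform", "pre_reform", "post_reform"],
    "turnout": [0.55, 0.67, 0.70, 0.70],   # hypothetical turnout rates
})

pivot = df.pivot(index="group", columns="period", values="turnout")
did = ((pivot.loc["exposed", "post_reform"] - pivot.loc["exposed", "pre_reform"])
       - (pivot.loc["control", "post_reform"] - pivot.loc["control", "pre_reform"]))
# A positive estimate here means removing exposure to exit polls raised
# turnout, i.e. exposure depressed it by that amount (+0.12 in this toy data).
print(f"difference-in-differences estimate: {did:+.2f}")
```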
Abstract:
This chapter presents the preliminary results of a phenomenographic study aimed at exploring people's experience of information literacy during the 2011 flood in Brisbane, Queensland. Phenomenography is a qualitative, interpretive and descriptive approach to research that explores the different ways in which people experience various phenomena and situations in the world around them. In this study, semi-structured interviews with seven adult residents of Brisbane suggested six categories that depicted different ways people experienced information literacy during this natural disaster. Access to timely, accurate and credible information during a natural disaster can save lives, safeguard property, and reduce fear and anxiety; however, very little is currently known about citizens' information literacy during times of natural disaster. Understanding how people use information to learn during times of crisis is a new terrain for community information literacy research, and one that warrants further attention by the information research community and the emergency management sector.
Abstract:
Over the past decade, social media have gone through a process of legitimation and official adoption, and they are now becoming embedded as part of the official communications apparatus of many commercial and public-sector organisations, in turn providing platforms like Twitter with their own sources of legitimacy. Arguably, the demonstrated utility of social media platforms and tools in times of crisis, from civil unrest and violent crime through to natural disasters like bushfires, earthquakes, and floods, has been a crucial driver of this newfound legitimacy. In the mid-2000s, user-created content and 'Web 2.0' platforms were known to play a role in crisis communication; back then, the involvement of extra-institutional actors in providing and sharing information around such events involved distributed, ad hoc, or niche platforms (like Flickr), and was more likely to be framed as 'citizen journalism' or 'crowdsourcing' (see, for example, Liu, Palen, Sutton, Hughes, & Vieweg, 2008, on the then-emerging role of photo-sharing in disasters). Since then, the dramatically increased take-up of mainstream social media platforms like Facebook and Twitter means that the pool of potential participants in online crisis communication has broadened to include a much larger proportion of the general population, as well as traditional media and official emergency response organisations.
Abstract:
A focused library based on the marine natural products polyandrocarpamines A (1) and B (2) has been designed and synthesised using parallel solution-phase chemistry. In silico physicochemical property calculations were performed on synthetic candidates in order to optimise the library for drug discovery and chemical biology. A library of ten 2-aminoimidazolone products (3–12) was prepared by coupling glycocyamidine and a variety of aldehydes using a one-step stereoselective aldol condensation reaction under microwave conditions. All analogues were characterised by NMR, UV, IR and MS. The library was evaluated for cytotoxicity towards the prostate cancer cell lines LNCaP, PC-3 and 22Rv1.
Abstract:
Textual document sets have become an important and rapidly growing information source on the web, and text classification is one of the crucial technologies for information organisation and management. Text classification has attracted increasingly wide attention from researchers in different fields. This paper first introduces the main feature selection methods, implementation algorithms, and applications of text classification. However, the knowledge extracted by current data-mining techniques for text classification contains much noise, which leads to considerable uncertainty in the classification process, arising from both knowledge extraction and knowledge usage; more innovative techniques and methods are therefore needed to improve the performance of text classification. Further improving the process of knowledge extraction, and the effective utilisation of the extracted knowledge, remains a critical and challenging step. A Rough Set decision-making approach is proposed, using Rough Set decision techniques to classify more precisely those textual documents that are difficult to separate with classic text classification methods. The purpose of this paper is to give an overview of existing text classification technologies; to demonstrate Rough Set concepts and a decision-making approach based on Rough Set theory for building a more reliable and effective text classification framework with higher precision; to set up an innovative evaluation metric, named CEI, which is effective for performance assessment in similar research; and to propose a promising research direction for addressing the challenging problems in text classification, text mining, and other related fields.
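For orientation, the sketch below shows a conventional feature-selection-plus-classifier pipeline of the kind this overview surveys. The paper's Rough Set decision step and CEI metric have no off-the-shelf scikit-learn implementation, so this is only the baseline such methods aim to improve on.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

cats = ["sci.space", "rec.autos"]
train = fetch_20newsgroups(subset="train", categories=cats)
test = fetch_20newsgroups(subset="test", categories=cats)

clf = Pipeline([
    ("tfidf", TfidfVectorizer(stop_words="english")),  # bag-of-words features
    ("select", SelectKBest(chi2, k=2000)),             # feature selection
    ("nb", MultinomialNB()),                           # probabilistic classifier
])
clf.fit(train.data, train.target)
print(f"test accuracy: {clf.score(test.data, test.target):.3f}")
```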
Abstract:
The detection and correction of defects remains among the most time-consuming and expensive aspects of software development. Extensive automated testing and code inspections may mitigate their effect, but some code fragments are necessarily more likely to be faulty than others, and automated identification of fault-prone modules helps to focus testing and inspections, thus limiting wasted effort and potentially improving detection rates. However, software metrics data are often extremely noisy, with enormous imbalances between the sizes of the positive and negative classes. In this work, we present a new approach to predictive modelling of fault proneness in software modules, introducing a new feature representation to overcome some of these issues. This rank sum representation offers improved, or at worst comparable, performance relative to earlier approaches on standard data sets, and readily allows the user to choose an appropriate trade-off between precision and recall to optimise inspection effort for different testing environments. The method is evaluated using the NASA Metrics Data Program (MDP) data sets, and performance is compared with existing studies based on the Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers, and with our own comprehensive evaluation of these methods.
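The exact construction of the rank sum representation is not given in the abstract, so the sketch below is one plausible reading: each software metric is replaced by its rank across modules, the ranks are summed into a single score, and sweeping the decision threshold exposes the precision/recall trade-off mentioned above. All data are synthetic.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(1)
metrics = rng.lognormal(size=(200, 5))     # hypothetical module metrics
risk = metrics.sum(axis=1)
faulty = rng.random(200) < 0.02 * risk     # imbalanced labels, fault odds grow with metrics

# Rank each metric across modules, then sum the ranks into one score.
ranks = np.column_stack([rankdata(metrics[:, j]) for j in range(5)])
score = ranks.sum(axis=1)

for q in (0.7, 0.8, 0.9):                  # sweep the inspection threshold
    flagged = score > np.quantile(score, q)
    tp = np.sum(flagged & faulty)
    precision = tp / max(flagged.sum(), 1)
    recall = tp / max(faulty.sum(), 1)
    print(f"threshold q={q}: precision={precision:.2f} recall={recall:.2f}")
```

A higher threshold flags fewer modules for inspection (higher precision, lower recall), which is the knob the abstract says can be tuned to suit different testing environments.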
Abstract:
This paper presents large, accurately calibrated and time-synchronised datasets, gathered outdoors in controlled environmental conditions using an unmanned ground vehicle (UGV) equipped with a wide variety of sensors. It discusses how the data collection process was designed, the conditions in which these datasets were gathered, and some possible outcomes of their exploitation, in particular for evaluating the performance of sensors and perception algorithms for UGVs.
Abstract:
Facilitated discussion with early childhood staff working with children and families affected by natural disasters in Queensland, Australia, raises issues regarding educational communication in emergencies. This paper reports on these discussions as 'reflections on talk'. It examines discrepancies between the literature and staff talk, gaps in the literature, and the inaccessible style of some literature, which demanded collaborative debate and re-interpretation of information. Reframing of the discourse style was used to support staff debriefing, mutual encouragement, and the sharing of insights on promoting resilience in children and families. Formal investigation is required into effective emergency-situation talk between staff, as well as with children and families.
Abstract:
In this paper we present large, accurately calibrated and time-synchronized data sets, gathered outdoors in controlled and variable environmental conditions using an unmanned ground vehicle (UGV) equipped with a wide variety of sensors: four 2D laser scanners, a radar scanner, a color camera and an infrared camera. The paper provides a full description of the system used for data collection and of the types of environments and conditions in which these data sets were gathered, including the presence of airborne dust, smoke and rain.
Abstract:
Object classification is plagued by the issue of session variation: any variation that makes one instance of an object look different from another, for instance due to pose or illumination. Recent work on the challenging task of face verification has shown that session variability modelling provides a mechanism to overcome some of these limitations. However, for computer vision purposes it has so far been applied only in the limited setting of face verification. In this paper we propose a local region-based inter-session variability (ISV) modelling approach, termed Local ISV, so that local session variations can be modelled, and we apply it to challenging real-world data. We demonstrate the efficacy of this technique on a challenging real-world fish image database, which includes images taken underwater and therefore exhibits significant real-world session variations. The Local ISV approach provides a relative performance improvement of, on average, 23% on the challenging MOBIO, Multi-PIE and SCface face databases, and a relative performance improvement of 35% on our challenging fish image dataset.
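Full ISV modelling operates on GMM supervectors; the sketch below is a drastically simplified linear stand-in that learns a session-variability subspace from within-class deviations and projects it away before comparison. Data, dimensions and the subspace rank are all invented.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_ids, n_sessions = 64, 20, 5

# Synthetic data: each identity has a mean, perturbed per session along a
# small number of shared "nuisance" directions, plus observation noise.
identity_means = rng.normal(size=(n_ids, d))
session_dir = rng.normal(size=(2, d))
offsets = rng.normal(size=(n_ids, n_sessions, 2)) @ session_dir
X = identity_means[:, None, :] + offsets + 0.05 * rng.normal(size=(n_ids, n_sessions, d))

# Learn the session subspace from within-class deviations.
deviations = (X - X.mean(axis=1, keepdims=True)).reshape(-1, d)
_, _, vt = np.linalg.svd(deviations, full_matrices=False)
U = vt[:2].T                                   # top nuisance directions

def compensate(v):
    return v - U @ (U.T @ v)                   # remove the session component

# Same identity across two sessions should now match more closely.
a, b = X[0, 0], X[0, 1]
print("raw distance:        ", np.linalg.norm(a - b))
print("compensated distance:", np.linalg.norm(compensate(a) - compensate(b)))
```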