30 results for classification and equivalence classes
Statistical evaluation of the fixed concentration procedure for acute inhalation toxicity assessment
Abstract:
The conventional method for the assessment of acute inhalation toxicity (OECD Test Guideline 403, 1981) uses death of animals as an endpoint to identify the median lethal concentration (LC50). A new OECD Testing Guideline called the Fixed Concentration Procedure (FCP) is being prepared to provide an alternative to Test Guideline 403. Unlike Test Guideline 403, the FCP does not provide a point estimate of the LC50, but aims to identify an airborne exposure level that causes clear signs of nonlethal toxicity. This is then used to assign classification according to the new Globally Harmonized System of Classification and Labelling (GHS) scheme. The FCP has been validated using statistical simulation rather than by in vivo testing. The statistical simulation approach predicts the GHS classification outcome and the numbers of deaths and animals used in the test for imaginary substances with a range of LC50 values and dose-response curve slopes. This paper describes the FCP and reports the results from the statistical simulation study assessing its properties. It is shown that the procedure will be completed with considerably less death and suffering than Test Guideline 403, and will classify substances either in the same or a more stringent GHS class than that assigned on the basis of the LC50 value.
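The simulation approach described above can be sketched in a few lines. This is a minimal, hypothetical illustration — the logistic dose-response model, the concentration levels, and the stopping rule are stand-ins, not the actual FCP decision logic or GHS cut-offs:

```python
import math
import random

def p_death(conc, lc50, slope):
    """Logistic dose-response: probability of death at concentration conc."""
    return 1.0 / (1.0 + math.exp(-slope * (math.log10(conc) - math.log10(lc50))))

def simulate_test(lc50, slope, levels=(0.05, 0.5, 2.5, 12.5), n_animals=5, seed=0):
    """Toy fixed-concentration run: expose n_animals at ascending levels (mg/L)
    and stop at the first level producing any deaths.  Returns the index of
    that level (a stand-in for the assigned hazard class) and total deaths."""
    rng = random.Random(seed)
    total_deaths = 0
    for i, conc in enumerate(levels):
        deaths = sum(rng.random() < p_death(conc, lc50, slope)
                     for _ in range(n_animals))
        total_deaths += deaths
        if deaths > 0:
            return i, total_deaths
    return len(levels), total_deaths
```

Running such a simulation over a grid of imaginary (LC50, slope) pairs is how classification outcomes and animal usage can be compared across protocols without in vivo testing.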
Abstract:
The fixed-dose procedure (FDP) was introduced as OECD Test Guideline 420 in 1992, as an alternative to the conventional median lethal dose (LD50) test for the assessment of acute oral toxicity (OECD Test Guideline 401). The FDP uses fewer animals and causes less suffering than the conventional test, while providing information on the acute toxicity to allow substances to be ranked according to the EU hazard classification system. Recently the FDP has been revised, with the aim of providing further reductions and refinements, and classification according to the criteria of the Globally Harmonized Hazard Classification and Labelling scheme (GHS). This paper describes the revised FDP and analyses its properties, as determined by a statistical modelling approach. The analysis shows that the revised FDP classifies substances for acute oral toxicity generally in the same, or a more stringent, hazard class as that based on the LD50 value, according to either the GHS or the EU classification scheme. The likelihood of achieving the same classification is greatest for substances with a steep dose-response curve and median toxic dose (TD50) close to the LD50. The revised FDP usually requires five or six animals with two or fewer dying as a result of treatment in most cases.
Abstract:
Traffic collisions can be a major source of mortality in wild populations, and animals may be expected to exhibit behavioral mechanisms that reduce the risk associated with crossing roads. Animals living in urban areas in particular have to negotiate very dense road networks, often with high levels of traffic flow. We examined traffic-related mortality of red foxes (Vulpes vulpes) in the city of Bristol, UK, and the extent to which roads affected fox activity by comparing real and randomly generated patterns of movement. There were significant seasonal differences in the number of traffic-related fox deaths for different age and sex classes; peaks were associated with periods when individuals were likely to be moving through unfamiliar terrain and would have had to cross major roads. Mortality rates per unit road length increased with road magnitude. The number of roads crossed by foxes and the rate at which roads were crossed per hour of activity increased after midnight when traffic flow was lower. Adults and juveniles crossed 17% and 30% fewer roads, respectively, than expected from randomly generated movement. This highly mobile species appeared to reduce the mortality risk of minor category roads by changing its activity patterns, but it remained vulnerable to the effects of larger roads with higher traffic flows during periods associated with extraterritorial movements.
Abstract:
Purpose - The purpose of this paper is to provide a quantitative multicriteria decision-making approach to knowledge management in construction entrepreneurship education by means of an analytic knowledge network process (KANP). Design/methodology/approach - The KANP approach in the study integrates a standard industrial classification with the analytic network process (ANP). For construction entrepreneurship education, a decision-making model named KANP.CEEM is built to apply the KANP method to the evaluation of teaching cases, facilitating the case method widely adopted in entrepreneurship education at business schools. Findings - The study finds that there are eight clusters and 178 nodes in the KANP.CEEM model, and experimental research on the evaluation of teaching cases shows that the KANP method is effective in applying knowledge management to entrepreneurship education. Research limitations/implications - As an experimental study, this paper ignores the concordance between the selected standard classification and others, which may limit the usefulness of the KANP.CEEM model elsewhere. Practical implications - As the KANP.CEEM model is built on standard classification codes and the embedded ANP, it is expected to have wide potential for evaluating knowledge-based teaching materials for any educational purpose with a construction-industry background, and can be used by both faculty and students. Originality/value - This paper fulfils a knowledge management need and offers a practical tool for an academic starting out on the development of knowledge-based teaching cases and other teaching materials, or for a student working through case studies and other learning materials.
Abstract:
A unified approach is proposed for sparse kernel data modelling that includes regression and classification as well as probability density function estimation. The orthogonal-least-squares forward selection method based on the leave-one-out test criteria is presented within this unified data-modelling framework to construct sparse kernel models that generalise well. Examples from regression, classification and density estimation applications are used to illustrate the effectiveness of this generic sparse kernel data modelling approach.
Abstract:
A unified approach is proposed for data modelling that includes supervised regression and classification applications as well as unsupervised probability density function estimation. The orthogonal-least-squares regression based on the leave-one-out test criteria is formulated within this unified data-modelling framework to construct sparse kernel models that generalise well. Examples from regression, classification and density estimation applications are used to illustrate the effectiveness of this generic data-modelling approach for constructing parsimonious kernel models with excellent generalisation capability.
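The forward-selection idea in these two abstracts can be illustrated with a toy implementation. This sketch ranks candidate Gaussian-kernel columns by the residual energy they explain after Gram-Schmidt orthogonalisation against the already-selected columns; note that the published method ranks by leave-one-out test criteria rather than the training-error criterion used here, and all names and parameters are illustrative:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def gauss_kernel(x, c, width=1.0):
    return math.exp(-((x - c) ** 2) / (2.0 * width ** 2))

def ols_forward_select(xs, ys, n_terms=3, width=1.0):
    """Greedy forward selection of kernel centres, orthogonal-least-squares
    style: each step orthogonalises the remaining candidate columns against
    the chosen ones and picks the column explaining most residual energy."""
    # one candidate kernel column centred at each training point
    cols = [[gauss_kernel(x, c, width) for x in xs] for c in xs]
    selected, ortho = [], []
    for _ in range(n_terms):
        best = None
        for j, col in enumerate(cols):
            if j in selected:
                continue
            w = col[:]
            for q in ortho:                      # Gram-Schmidt step
                a = dot(w, q) / dot(q, q)
                w = [wi - a * qi for wi, qi in zip(w, q)]
            denom = dot(w, w)
            if denom < 1e-12:                    # numerically dependent column
                continue
            g = dot(w, ys) / denom               # orthogonal weight
            err_red = g * g * denom              # energy this term explains
            if best is None or err_red > best[0]:
                best = (err_red, j, w)
        if best is None:
            break
        _, j, w = best
        selected.append(j)
        ortho.append(w)
    return selected
```

Sparsity comes from stopping after a few terms: the model keeps only the selected kernel centres rather than one per training point.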
Abstract:
In this paper, we introduce a novel high-level visual content descriptor devised for performing semantic-based image classification and retrieval. The work can be treated as an attempt at bridging the so-called "semantic gap". The proposed image feature vector model is fundamentally underpinned by an automatic image labelling framework, called Collaterally Cued Labelling (CCL), which incorporates the collateral knowledge extracted from the collateral texts accompanying the images with state-of-the-art low-level visual feature extraction techniques for automatically assigning textual keywords to image regions. A subset of the Corel image collection was used for evaluating the proposed method. The experimental results indicate that our semantic-level visual content descriptors outperform both conventional visual and textual image feature models.
Abstract:
A new class of shape features for region classification and high-level recognition is introduced. The novel Randomised Region Ray (RRR) features can be used to train binary decision trees for object category classification using an abstract representation of the scene. In particular, we address the problem of human detection using an over-segmented input image. We therefore do not rely on pixel values for training; instead, we design and train specialised classifiers on the sparse set of semantic regions which compose the image. Thanks to the abstract nature of the input, the trained classifier has the potential to be fast and applicable to extreme imaging conditions. We demonstrate and evaluate its performance in people detection using a pedestrian dataset.
Abstract:
A first step in interpreting the wide variation in trace gas concentrations measured over time at a given site is to classify the data according to the prevailing weather conditions. In order to classify measurements made during two intensive field campaigns at Mace Head, on the west coast of Ireland, an objective method of assigning data to different weather types has been developed. Air-mass back trajectories calculated using winds from ECMWF analyses, arriving at the site in 1995–1997, were allocated to clusters based on a statistical analysis of the latitude, longitude and pressure of the trajectory at 12 h intervals over 5 days. The robustness of the analysis was assessed by using an ensemble of back trajectories calculated for four points around Mace Head. Separate analyses were made for each of the 3 years, and for four 3-month periods. The use of these clusters in classifying ground-based ozone measurements at Mace Head is described, including the need to exclude data which have been influenced by local perturbations to the regional flow pattern, for example, by sea breezes. Even with a limited data set, based on 2 months of intensive field measurements in 1996 and 1997, there are statistically significant differences in ozone concentrations in air from the different clusters. The limitations of this type of analysis for classification and interpretation of ground-based chemistry measurements are discussed.
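The clustering step can be sketched with a plain k-means over flattened trajectory coordinates. This is an illustrative stand-in: the paper's statistical analysis of latitude, longitude and pressure at 12-h intervals over 5 days is summarised here as generic squared-distance clustering, and the function names and toy data are hypothetical:

```python
import random

def sqdist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(group):
    n = len(group)
    return [sum(col) / n for col in zip(*group)]

def kmeans(points, k, iters=50, seed=1):
    """Plain k-means.  Each point is a flattened back-trajectory
    (latitude, longitude, pressure at each 12-h step), so trajectories
    following similar paths end up in the same cluster ('weather type')."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda c: sqdist(p, centres[c]))].append(p)
        # recompute centres; keep the old centre if a cluster empties
        centres = [mean(g) if g else centres[i] for i, g in enumerate(groups)]
    labels = [min(range(k), key=lambda c: sqdist(p, centres[c])) for p in points]
    return centres, labels
```

Once trajectories are labelled, ozone measurements can be grouped by cluster label and compared across weather types.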
Abstract:
We exploit a theory of price linkages that lends itself readily to empirical examination using Markov chain Monte Carlo methods. The methodology facilitates classification and discrimination among alternative regimes in economic time series. The theory and procedures are applied to annual series (1955–1992) on the U.S. beef sector.
Abstract:
This paper aims to assess the necessity of updating the intensity-duration-frequency (IDF) curves used in Portugal to design building storm-water drainage systems. A comparative analysis of the design was performed for the three predefined rainfall regions in Portugal using the IDF curves currently in use and those estimated for future decades. Data for recent and future climate conditions simulated by a global and regional climate model chain are used to estimate possible changes of rainfall extremes and their implications for the drainage systems. The methodology includes the disaggregation of precipitation down to subhourly scales, the robust development of IDF curves, and the correction of model bias. Obtained results indicate that projected changes are larger for the plains in southern Portugal (5–33%) than for the mountainous regions (3–9%), and that these trends are consistent with projected changes in the long-term 95th percentile of daily precipitation throughout the 21st century. The authors conclude that there is a need to review the current precipitation regime classification and to increase the dimensions of new drainage systems to mitigate the projected changes in extreme precipitation.
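Design-storm practice of this kind typically expresses an IDF curve as a power law, i = a·t^b, with parameters fitted per region and return period. A hedged sketch of how a projected change in design intensity might be computed from current- and future-climate parameters (all parameter values below are hypothetical, not the paper's estimates):

```python
def idf_intensity(duration_min, a, b):
    """IDF power law i = a * t**b (i in mm/h, t in minutes); a and b are
    regression parameters for a given region and return period."""
    return a * duration_min ** b

def projected_change(a_now, b_now, a_fut, b_fut, duration_min):
    """Percent change in design rainfall intensity at a given duration,
    between current-climate and future-climate IDF parameters."""
    i_now = idf_intensity(duration_min, a_now, b_now)
    i_fut = idf_intensity(duration_min, a_fut, b_fut)
    return 100.0 * (i_fut - i_now) / i_now
```

A positive projected change at the design duration is what would motivate sizing new drainage systems above current standards.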
Abstract:
Sparse coding aims to find a more compact representation based on a set of dictionary atoms. A well-known technique looking at 2D sparsity is the low rank representation (LRR). However, in many computer vision applications, data often originate from a manifold, which is equipped with some Riemannian geometry. In this case, the existing LRR becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold, which is potentially important and critical to applications. In this paper, we generalize the LRR over the Euclidean space to the LRR model over a specific Riemannian manifold — the manifold of symmetric positive definite (SPD) matrices. Experiments on several computer vision datasets showcase its noise robustness and superior performance on classification and segmentation compared with state-of-the-art approaches.
Abstract:
This special issue is focused on the assessment of algorithms for the observation of Earth’s climate from environmental satellites. Climate data records derived by remote sensing are increasingly a key source of insight into the workings of and changes in Earth’s climate system. Producers of data sets must devote considerable effort and expertise to maximise the true climate signals in their products and minimise effects of data processing choices and changing sensors. A key choice is the selection of algorithm(s) for classification and/or retrieval of the climate variable. Within the European Space Agency Climate Change Initiative, science teams undertook systematic assessment of algorithms for a range of essential climate variables. The papers in the special issue report some of these exercises (for ocean colour, aerosol, ozone, greenhouse gases, clouds, soil moisture, sea surface temperature and glaciers). The contributions show that assessment exercises must be designed with care, considering issues such as the relative importance of different aspects of data quality (accuracy, precision, stability, sensitivity, coverage, etc.), the availability and degree of independence of validation data and the limitations of validation in characterising some important aspects of data (such as long-term stability or spatial coherence). As well as requiring a significant investment of expertise and effort, systematic comparisons are found to be highly valuable. They reveal the relative strengths and weaknesses of different algorithmic approaches under different observational contexts, and help ensure that scientific conclusions drawn from climate data records are not influenced by observational artifacts, but are robust.
Abstract:
Background: Accurate dietary assessment is key to understanding nutrition-related outcomes and is essential for estimating dietary change in nutrition-based interventions. Objective: The objective of this study was to assess the pan-European reproducibility of the Food4Me food-frequency questionnaire (FFQ) in assessing the habitual diet of adults. Methods: Participants from the Food4Me study, a 6-mo, Internet-based, randomized controlled trial of personalized nutrition conducted in the United Kingdom, Ireland, Spain, Netherlands, Germany, Greece, and Poland, were included. Screening and baseline data (both collected before commencement of the intervention) were used in the present analyses, and participants were included only if they completed FFQs at screening and at baseline within a 1-mo time frame before the commencement of the intervention. Sociodemographic (e.g., sex and country) and lifestyle [e.g., body mass index (BMI, in kg/m2) and physical activity] characteristics were collected. Linear regression, correlation coefficients, concordance (percentage) in quartile classification, and Bland-Altman plots for daily intakes were used to assess reproducibility. Results: In total, 567 participants (59% female), with a mean ± SD age of 38.7 ± 13.4 y and BMI of 25.4 ± 4.8, completed both FFQs within 1 mo (mean ± SD: 19.2 ± 6.2 d). Exact plus adjacent classification of total energy intake in participants was highest in Ireland (94%) and lowest in Poland (81%). Spearman correlation coefficients (r) in total energy intake between FFQs ranged from 0.50 for obese participants to 0.68 and 0.60 in normal-weight and overweight participants, respectively. Bland-Altman plots showed a mean difference between FFQs of −10 kcal/d, with the agreement deteriorating as energy intakes increased. There was little variation in reproducibility of total energy intakes between sex and age groups.
Conclusions: The online Food4Me FFQ was shown to be reproducible across 7 European countries when administered within a 1-mo period to a large number of participants. The results support the utility of the online Food4Me FFQ as a reproducible tool across multiple European populations. This trial was registered at clinicaltrials.gov as NCT01530139.
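Two of the reproducibility metrics used above — exact-plus-adjacent quartile concordance and the Bland-Altman mean difference — are straightforward to compute. A minimal sketch with hypothetical paired intake data (the quartile cut-point convention here is one simple choice among several):

```python
def quartile(values, v):
    """Quartile index (0-3) of v within the distribution of values."""
    s = sorted(values)
    n = len(s)
    cuts = [s[n // 4], s[n // 2], s[(3 * n) // 4]]
    return sum(v >= c for c in cuts)

def exact_plus_adjacent(intake1, intake2):
    """Percentage of participants classified in the same or an adjacent
    quartile of energy intake by the two FFQ administrations."""
    agree = sum(abs(quartile(intake1, a) - quartile(intake2, b)) <= 1
                for a, b in zip(intake1, intake2))
    return 100.0 * agree / len(intake1)

def bland_altman_bias(intake1, intake2):
    """Mean difference (bias) between paired intakes, in the intake units."""
    diffs = [a - b for a, b in zip(intake1, intake2)]
    return sum(diffs) / len(diffs)
```

A full Bland-Altman analysis would also plot each pair's difference against its mean and add limits of agreement (bias ± 1.96 SD).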
Abstract:
Traditional knowledge about medicinal plants from a poorly studied region, the High Atlas in Morocco, is reported here for the first time; this permits consideration of the efficacy and safety of current practices while highlighting species not previously known to have traditional medicinal uses. Our study aims to document local medicinal plant knowledge among Tashelhit-speaking communities through ethnobotanical survey, identifying preferred species and new medicinal plant citations and illuminating the relationship between emic and etic ailment classifications. Ethnobotanical data were collected using standard methods, with prior informed consent obtained before all interactions. Data were characterized using descriptive indices, and medicinal plants and healing strategies relevant to local livelihoods were identified. A total of 151 vernacular names corresponding to 159 botanical species were found to be used to treat 36 folk ailments grouped into 14 biomedical use categories. Thirty-five species (22%) are new medicinal plant records for Morocco, and 26 are described as used medicinally for the first time anywhere. Fidelity levels (FL) revealed low specificity in plant use, particularly for the most commonly reported plants. Most plants are used in mixtures. Plant use is driven by local concepts of disease, including “hot” and “cold” classification and beliefs in supernatural forces. Local medicinal plant knowledge is rich in the High Atlas, where local populations still rely on medicinal plants for healthcare. We found experimental evidence of safe and effective use of medicinal plants in the High Atlas, but we highlight the use of eight poisonous species.
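The fidelity level mentioned above is conventionally computed as FL = Np/N × 100: the share of informants who cite a species for one particular ailment among all informants who cite that species at all. A one-function sketch (the counts are hypothetical):

```python
def fidelity_level(n_use, n_total):
    """Fidelity level (%): informants citing a species for a given ailment
    (n_use) out of all informants citing that species (n_total)."""
    return 100.0 * n_use / n_total
```

Low FL values across many ailments, as reported above, indicate that a plant is cited for many different uses rather than one specific purpose.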