7 results for Categorize
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
A World Conservation Union (IUCN) regional red list is an objective assessment of regional extinction risk and is not the same as a list of conservation priority species. Recent research reveals the widespread, but incorrect, assumption that IUCN Red List categories represent a hierarchical list of priorities for conservation action. We developed a simple eight-step priority-setting process and applied it to the conservation of bees in Ireland. Our model is based on the national red list but also considers the global significance of the national population; the conservation status at global, continental, and regional levels; and key biological, economic, and societal factors. It is also compatible with existing conservation agreements and legislation. Throughout Ireland, almost one-third of the bee fauna is threatened (30 of 100 species), but our methodology resulted in a reduced list of only 17 priority species. We did not use the priority species list to broadly categorize species according to the conservation action required; instead, we indicated the individual action required for all threatened, near-threatened, and data-deficient species on the national red list, based on the IUCN's conservation-actions template file. Priority species lists will strongly influence the prioritization of conservation actions at national levels, but action should not be exclusive to listed species. In addition, not all species on this list will necessarily require immediate action. Our method is transparent, reproducible, and readily applicable to other taxa and regions.
Abstract:
Temple Period Malta in the 3rd millennium BC saw the production of a range of figurative and decorative art and architecture that implies a richly populated spiritual and cognitive world associated with ritual practice in life and death. The paper explores the potential to categorize the figurative art into distinct groups and considers how these various images might represent aspects of the cosmology and social concerns of a prehistoric island society.
Abstract:
Multi-threaded processors execute multiple threads concurrently in order to increase overall throughput. It is well documented that multi-threading affects per-thread performance and, more importantly, that some threads are affected more than others. This is especially troublesome for multi-programmed workloads. Fairness metrics measure whether all threads are affected equally. However, defining equal treatment is not straightforward. Several fairness metrics for multi-threaded processors have been used in the literature, but there is no consensus on which metric does the best job of measuring fairness. This paper reviews the prevalent fairness metrics and analyzes their main properties. Each metric strikes a different trade-off between fairness in the strict sense and throughput, and we categorize the metrics with respect to this property. Based on experimental data for SMT processors, we suggest using the minimum fairness metric in order to balance fairness and throughput.
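As a concrete illustration of what such metrics compute, the Python sketch below implements a few fairness metrics commonly built on per-thread normalized progress (multi-threaded IPC divided by the IPC of the same thread running alone). The IPC numbers and the exact metric set are illustrative assumptions, not the paper's data or its precise definitions.

```python
# Illustrative sketch (not the paper's exact definitions): SMT fairness
# metrics built on per-thread "normalized progress",
#   NP_i = IPC_i(multi-threaded) / IPC_i(single-threaded, run alone).

def normalized_progress(ipc_mt, ipc_st):
    """Per-thread progress relative to running alone on the core."""
    return [mt / st for mt, st in zip(ipc_mt, ipc_st)]

def minimum_fairness(np_scores):
    """Minimum fairness: progress of the worst-treated thread.
    A value of 1.0 means no thread is slowed down at all."""
    return min(np_scores)

def harmonic_mean_np(np_scores):
    """Harmonic mean of normalized progress: trades off throughput
    against fairness, since it is dominated by the slowest thread."""
    return len(np_scores) / sum(1.0 / p for p in np_scores)

def fairness_ratio(np_scores):
    """Worst-to-best ratio of normalized progress: 1.0 means perfectly
    equal treatment, regardless of overall throughput."""
    return min(np_scores) / max(np_scores)

if __name__ == "__main__":
    # Hypothetical 4-thread SMT workload: IPC when co-scheduled vs. alone.
    ipc_mt = [0.9, 0.5, 0.7, 0.6]
    ipc_st = [1.2, 1.1, 0.9, 1.0]
    scores = normalized_progress(ipc_mt, ipc_st)
    print("minimum fairness :", round(minimum_fairness(scores), 3))
    print("harmonic-mean NP :", round(harmonic_mean_np(scores), 3))
    print("fairness ratio   :", round(fairness_ratio(scores), 3))
```

Note how the three metrics rank workloads differently: the minimum and ratio metrics look only at the worst-off thread(s), while the harmonic mean also rewards overall throughput, which is the trade-off the abstract describes.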
Abstract:
The characterization and definition of the complexity of objects is an important but very difficult problem that has attracted much interest in many different fields. In this paper we introduce a new measure, called the network diversity score (NDS), which allows us to quantify structural properties of networks. We demonstrate numerically that our diversity score is capable of distinguishing ordered, random, and complex networks from each other and hence allows us to categorize networks with respect to their structural complexity. We study 16 additional network complexity measures and find that none of them has comparably good categorization capabilities. In contrast to many other measures proposed so far to characterize the structural complexity of networks, our score differs in several ways. First, it is multiplicatively composed of four individual scores, each assessing a different structural property of a network; the composite score therefore reflects the structural diversity of a network. Second, our score is defined for a population of networks rather than for individual networks. We show that this removes an unwanted ambiguity inherent in measures based on single networks. To apply our measure practically, we provide a statistical estimator for the diversity score based on a finite number of samples.
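The abstract specifies the structure of NDS (a multiplicative composite of four component scores, defined over a population of networks and estimated from finite samples) but not the components themselves. The Python/networkx sketch below therefore only mirrors that structure: the component functions are hypothetical stand-ins, not the actual NDS definition.

```python
# Structural sketch only: the four component functions below are
# placeholder structural properties, NOT the actual NDS components
# (the abstract does not define them). The sketch mirrors the described
# composition: a multiplicative score over a *population* of networks,
# estimated from a finite sample.

import random
import statistics
import networkx as nx

# Placeholder components, each probing a different structural property.
COMPONENTS = [
    nx.density,
    nx.average_clustering,
    lambda g: 1.0 + statistics.pvariance([d for _, d in g.degree()]),
    lambda g: nx.number_connected_components(g) / g.number_of_nodes(),
]

def composite_score(g):
    """Multiplicative composition of the individual component scores."""
    score = 1.0
    for component in COMPONENTS:
        score *= component(g)
    return score

def estimate_population_score(sampler, n_samples=50):
    """Statistical estimator: average the composite score over a finite
    sample of networks drawn from the population `sampler` generates."""
    return sum(composite_score(sampler()) for _ in range(n_samples)) / n_samples

if __name__ == "__main__":
    random.seed(0)
    # Two hypothetical populations: Erdos-Renyi (random) vs. Watts-Strogatz.
    er = lambda: nx.gnp_random_graph(50, 0.1, seed=random.randrange(10**6))
    ws = lambda: nx.watts_strogatz_graph(50, 4, 0.1, seed=random.randrange(10**6))
    print("ER population estimate:", estimate_population_score(er))
    print("WS population estimate:", estimate_population_score(ws))
```

Defining the score on a sampled population, rather than on one graph, is what lets the estimator converge and removes the single-network ambiguity the abstract mentions.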
Abstract:
OBJECTIVES: Barrett’s esophagus (BE) is a common premalignant lesion for which surveillance is recommended. This strategy is limited by considerable variations in clinical practice. We conducted an international, multidisciplinary, systematic search and evidence-based review of BE and provided consensus recommendations for clinical use in patients with nondysplastic, indefinite, and low-grade dysplasia (LGD). METHODS: We defined the scope, proposed statements, and searched electronic databases, yielding 20,558 publications that were screened, selected online, and formed the evidence base. We used a Delphi consensus process, with an 80% agreement threshold, using GRADE (Grading of Recommendations Assessment, Development and Evaluation) to categorize the quality of evidence and strength of recommendations. RESULTS: In total, 80% of respondents agreed with 55 of 127 statements in the final voting rounds. Population endoscopic screening is not recommended; screening should target only very high-risk cases, such as males aged over 60 years with chronic uncontrolled reflux. A new international definition of BE was agreed upon. For any degree of dysplasia, review by at least two specialist gastrointestinal (GI) pathologists is required. Risk factors for cancer include male gender, length of BE, and central obesity. Endoscopic resection should be used for visible, nodular areas. Surveillance is not recommended for patients with less than 5 years of life expectancy. Management strategies for indefinite dysplasia (IND) and LGD were identified, including a de-escalation strategy for lower-risk patients and escalation to intervention with follow-up for higher-risk patients. CONCLUSIONS: In this uniquely large consensus process in gastroenterology, we made key clinical recommendations for the escalation and de-escalation of BE management in clinical practice. We made strong recommendations for the prioritization of future research.
Abstract:
With so many voices, groups, and organizations participating in the Emerging Church Movement (ECM), few are willing to “define” it, though authors have offered various definitions. Emerging Christians themselves do not offer systematic or coherent definitions, which contributes to frustration in isolating it as a coherent group – especially for sociologists who strive to define and categorize. In presenting our understanding of this movement, we categorize Emerging Christianity as an orientation rather than an identity, and focus on the diverse practices within what we describe as “pluralist congregations” (often called “gatherings,” “collectives” or “communities” by Emerging Christians themselves). This leads us to define the ECM as a creative, entrepreneurial religious movement that strives to achieve social legitimacy and spiritual vitality by actively disassociating from its roots in conservative, evangelical Christianity. Our findings are extensively developed in The Deconstructed Church: Understanding Emerging Christianity (Marti and Ganiel 2014).
Abstract:
Malware detection is a growing problem, particularly on the Android mobile platform, due to its increasing popularity and the accessibility of numerous third-party app markets. The problem is made worse by the increasingly sophisticated detection-avoidance techniques employed by emerging malware families, which calls for more effective techniques for the detection and classification of Android malware. Hence, in this paper we present an n-opcode analysis approach that uses machine learning to classify and categorize Android malware. This approach enables automated feature discovery, eliminating the need for expert or domain knowledge to define the required features. In experiments on 2,520 samples using up to 10-gram opcode features, the approach achieved an F-measure of 98%.
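For readers unfamiliar with n-opcode features, the scikit-learn sketch below shows what an n-gram opcode classification pipeline of this general kind can look like, assuming the Dalvik opcode sequences have already been extracted from each APK. The toy opcode strings, labels, classifier choice, and n-gram range are assumptions for illustration; the paper's dataset and exact pipeline differ.

```python
# Minimal sketch of n-gram ("n-opcode") classification, assuming opcode
# sequences have already been disassembled from each app. The dataset,
# classifier, and n-gram range here are illustrative, not the paper's setup.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy corpus: each sample is one app's opcode sequence as a string.
# Labels are hypothetical: 1 = malware, 0 = benign.
samples = [
    "invoke-virtual move-result const-string invoke-static return-void",
    "const/4 if-eqz invoke-direct iput-object return-void",
    "invoke-static const-string invoke-virtual move-result-object return-object",
    "new-instance invoke-direct const/4 if-nez goto return-void",
] * 25  # repeated so the toy example has enough rows to split
labels = [1, 0, 1, 0] * 25

# CountVectorizer over whitespace-separated opcodes; ngram_range picks
# the n in "n-opcode" (the paper evaluates features up to 10-grams).
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 3), token_pattern=r"[^\s]+"),
    LogisticRegression(max_iter=1000),
)

X_train, X_test, y_train, y_test = train_test_split(
    samples, labels, test_size=0.25, random_state=0, stratify=labels
)
model.fit(X_train, y_train)
print("F-measure:", f1_score(y_test, model.predict(X_test)))
```

The key idea the sketch captures is that the feature space (opcode n-grams) is derived automatically from the raw opcode sequences, so no expert-defined features are needed; the classifier then learns which n-gram patterns separate malicious from benign apps.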