972 results for "Specification searching"


Relevance:

10.00%

Publisher:

Abstract:

This paper describes a new method of indexing and searching large binary signature collections to efficiently find similar signatures, addressing the scalability problem in signature search. Signatures offer efficient computation with an acceptable measure of similarity in numerous applications. However, performing a complete search with a given search argument (a signature) requires a Hamming distance calculation against every signature in the collection. This quickly becomes excessive when dealing with large collections, presenting issues of scalability that limit their applicability. Our method efficiently finds similar signatures in very large collections, trading memory use and precision for greatly improved search speed. Experimental results demonstrate that our approach is capable of finding a set of nearest signatures to a given search argument with a high degree of speed and fidelity.

Relevance:

10.00%

Publisher:

Abstract:

INEX investigates focused retrieval from structured documents by providing large test collections of structured documents, uniform evaluation measures, and a forum for organizations to compare their results. This paper reports on the INEX 2013 evaluation campaign, which consisted of four activities addressing three themes: searching professional and user-generated data (Social Book Search track); searching structured or semantic data (Linked Data track); and focused retrieval (Snippet Retrieval and Tweet Contextualization tracks). INEX 2013 was an exciting year in which we consolidated the collaboration with (other activities in) CLEF and for the second time ran our workshop as part of the CLEF labs in order to facilitate knowledge transfer between the evaluation forums. This paper gives an overview of all the INEX 2013 tracks, their aims and tasks, and the test collections built, and provides an initial analysis of the results.

Relevance:

10.00%

Publisher:

Abstract:

The traditional structural design procedure, especially for large-scale and complex structures, is time-consuming and inefficient. This is primarily because the traditional design accounts for second-order effects indirectly, through design specifications applied to every member, rather than through system analysis of the whole structure. Consequently, complicated and tedious design procedures are necessary to consider the second-order effects at the member level in the design specification. These effects are twofold in general: 1) flexural buckling due to the P-δ effect, i.e. effective length; 2) sway due to the P-Δ effect, i.e. the magnification factor. In this study, a new system design concept based on second-order elastic analysis is presented, in which the second-order effects are taken into account directly in the system analysis, avoiding the tedious member-by-member stability check. Plastic design based on this integrated direct approach is omitted in this paper for simplicity and clarity, as the emphasis is solely on the difference between second-order elastic limit-state design and the present system design approach. A practical design example, a 57 m-span steel dome skylight structure, is used to demonstrate the efficiency and effectiveness of the proposed approach. This skylight structure is also designed by the traditional design approach to BS5950-2000 for comparison, with emphasis placed on the aforementioned P-δ and P-Δ effects.

Relevance:

10.00%

Publisher:

Abstract:

The support for typically out-of-vocabulary query terms such as names, acronyms, and foreign words is an important requirement of many speech indexing applications. However, to date many unrestricted-vocabulary indexing systems have struggled to balance good detection rates with fast query speeds. This paper presents a fast and accurate unrestricted-vocabulary speech indexing technique named Dynamic Match Lattice Spotting (DMLS). The proposed method augments the conventional lattice spotting technique with dynamic sequence matching, together with a number of other novel algorithmic enhancements, to obtain a system that is capable of searching hours of speech in seconds while maintaining excellent detection performance.
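The core idea of dynamic sequence matching — tolerating substituted, inserted, or deleted phones when comparing a query term against sequences drawn from a lattice — can be illustrated with an edit-distance score. This is a minimal sketch under assumed names: the phone sequences, the threshold, and the flat list standing in for a real lattice are all illustrative, not DMLS's actual data structures.

```python
# Sketch: score a query phone sequence against candidate sequences from a
# lattice using Levenshtein (edit) distance, so near-matches with phone
# substitutions, insertions, or deletions are still spotted.

def edit_distance(a: list[str], b: list[str]) -> int:
    """Levenshtein distance between two phone sequences (DP, O(len(a)*len(b)))."""
    prev = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        cur = [i]
        for j, pb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (pa != pb)))   # substitution / match
        prev = cur
    return prev[-1]

query = ["k", "ae", "t"]  # phones for "cat" (illustrative)
lattice_sequences = [["k", "ae", "t"], ["k", "ah", "t"], ["b", "ae", "t"], ["d", "ao", "g"]]
hits = [s for s in lattice_sequences if edit_distance(query, s) <= 1]
print(hits)  # the three sequences within one phone edit of the query
```

A real system would additionally weight edits by phone confusability rather than charging a uniform cost of 1.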

Relevance:

10.00%

Publisher:

Abstract:

As organizations attempt to become more business process-oriented, existing role descriptions are revised and entirely new business process-related roles emerge. Much attention has been paid to the technological aspects of Business Process Management (BPM), but relatively little work has been done concerning the people factor of BPM and the specification of BPM expertise in particular. This study aims to close this gap by proposing a comprehensive BPM expertise model, which consolidates existing theories and related work. This model describes the key attributes characterizing "BPM expertise" and outlines their structure, dynamics, and interrelationships. Understanding BPM expertise is a prerequisite to being able to develop and apply it effectively. This is the cornerstone of human capital and talent management in BPM.

Relevance:

10.00%

Publisher:

Abstract:

A long query provides more useful hints for searching relevant documents, but it is likely to introduce noise that affects retrieval performance. To mitigate this adverse effect, it is important to reduce noisy terms and to introduce and boost additional relevant terms. This paper presents a comprehensive framework, called the Aspect Hidden Markov Model (AHMM), which integrates query reduction and expansion for retrieval with long queries. It optimizes the probability distribution of query terms by utilizing intra-query term dependencies as well as the relationships between query terms and words observed in relevance feedback documents. Empirical evaluation on three large-scale TREC collections demonstrates that our approach, which is automatic, achieves salient improvements over various strong baselines, and also reaches performance comparable to a state-of-the-art method based on the user's interactive query term reduction and expansion.

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a new algorithm, a hybrid of Particle Swarm Optimization (PSO) and Simulated Annealing (SA) called PSO-SA, to estimate harmonic state variables in distribution networks. The proposed algorithm estimates both the amplitude and phase of each harmonic current injection by minimizing the error between the values measured by Phasor Measurement Units (PMUs) and the values computed from the estimated parameters during the estimation process. The algorithm can account for the uncertainty of the harmonic pseudo-measurements, the tolerance in the line impedances of the network, and the uncertainty of Distributed Generators (DGs) such as Wind Turbines (WTs). The main feature of the proposed PSO-SA algorithm is that PSO, augmented with a mutation function, quickly approaches the region of the global optimum, after which the SA search locates the optimum itself. Simulation results on the IEEE 34-bus radial test network and a realistic 70-bus radial test network demonstrate that the proposed Distribution Harmonic State Estimation (DHSE) algorithm is markedly faster and more accurate than conventional algorithms such as Weighted Least Squares (WLS), the Genetic Algorithm (GA), original PSO, and the Honey Bees Mating Optimization (HBMO) algorithm.
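The general PSO-then-SA structure described above — a swarm phase with mutation to approach the optimum region, followed by annealing to refine it — can be sketched in miniature. This is a sketch under stated assumptions only: the 1-D toy objective and every parameter value here are illustrative stand-ins, not the paper's harmonic state estimation formulation.

```python
import math
import random

def objective(x: float) -> float:
    """Toy 1-D error function with its minimum at x = 2 (stand-in for the
    PMU measurement-vs-estimate error the paper minimizes)."""
    return (x - 2.0) ** 2

def pso_sa(n_particles: int = 20, pso_iters: int = 50, sa_iters: int = 200,
           seed: int = 0) -> float:
    rng = random.Random(seed)
    pos = [rng.uniform(-10, 10) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    best = list(pos)                      # per-particle best positions
    gbest = min(pos, key=objective)       # global best position

    # --- PSO phase: approach the region of the global optimum ---
    for _ in range(pso_iters):
        for i in range(n_particles):
            vel[i] = (0.7 * vel[i]
                      + 1.5 * rng.random() * (best[i] - pos[i])
                      + 1.5 * rng.random() * (gbest - pos[i]))
            pos[i] += vel[i]
            if rng.random() < 0.05:       # mutation: random restart of a particle
                pos[i] = rng.uniform(-10, 10)
            if objective(pos[i]) < objective(best[i]):
                best[i] = pos[i]
                if objective(pos[i]) < objective(gbest):
                    gbest = pos[i]

    # --- SA phase: refine the PSO result with a cooling random walk ---
    x, best_x, temp = gbest, gbest, 1.0
    for _ in range(sa_iters):
        cand = x + rng.gauss(0, temp)
        delta = objective(cand) - objective(x)
        if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-9)):
            x = cand
            if objective(x) < objective(best_x):
                best_x = x
        temp *= 0.98                      # geometric cooling schedule
    return best_x

print(pso_sa())
```

The design point is the division of labour: PSO is good at global exploration but slow to polish, while SA with a shrinking step size polishes efficiently once started near the optimum.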

Relevance:

10.00%

Publisher:

Abstract:

Along with the tri-lineage of bone, cartilage and fat, human mesenchymal stem cells (hMSCs) retain neural lineage potential. Multiple factors have been described that influence the lineage fate of hMSCs, including the extracellular microenvironment or niche. The niche includes the extracellular matrix (ECM), which provides structural composition, as well as other associated proteins and growth factors, which collectively influence hMSC stemness and lineage specification. As such, lineage-specific differentiation of hMSCs is mediated through cell–cell and cell–matrix interactions, as well as through specific signalling pathways triggering downstream events. Proteoglycans (PGs) are ubiquitous within this microenvironment and can be localised to the cell surface or embedded within the ECM. In addition, the heparan sulfate (HS) and chondroitin sulfate (CS) families of PGs interact directly with a number of growth factors, signalling pathways and ECM components including FGFs, Wnts and fibronectin. With evidence supporting a role for HSPGs and CSPGs in the specification of hMSCs down the osteogenic, chondrogenic and adipogenic lineages, along with the localisation of PGs in development and regeneration, it is conceivable that these important proteins may also play a role in the differentiation of hMSCs toward the neuronal lineage. Here we summarise the current literature and highlight the potential for HSPG-directed neural lineage fate specification in hMSCs, which may provide a new model for brain damage repair.

Relevance:

10.00%

Publisher:

Abstract:

Searching for health advice on the web is becoming increasingly common. Because of the great importance of this activity for patients and clinicians and the effect that incorrect information may have on health outcomes, it is critical to present relevant and valuable information to a searcher. Previous evaluation campaigns on health information retrieval (IR) have provided benchmarks that have been widely used to improve health IR and record these improvements. However, in general these benchmarks have targeted the specialised information needs of physicians and other healthcare workers. In this paper, we describe the development of a new collection for evaluating the effectiveness of IR that seeks to satisfy the health information needs of patients. Our methodology features a novel way to create statements of patients' information needs using realistic short queries associated with patient discharge summaries, which provide details of patient disorders. We adopt a scenario where the patient then creates a query to seek information relating to these disorders. Thus, discharge summaries provide us with a means to create contextually driven search statements, since they may include details on the stage of the disease, family history, etc. The collection will be used for the first time as part of the ShARe/CLEF 2013 eHealth Evaluation Lab, which focuses on natural language processing and IR for clinical care.

Relevance:

10.00%

Publisher:

Abstract:

Climate change is affecting and will increasingly influence human health and wellbeing. Children are particularly vulnerable to the impact of climate change. An extensive literature review regarding the impact of climate change on children's health was conducted in April 2012 by searching the electronic databases PubMed, Scopus, ProQuest, ScienceDirect, and Web of Science, as well as relevant websites, such as those of the IPCC and WHO. Climate change affects children's health through increased air pollution, more weather-related disasters, more frequent and intense heat waves, decreased water quality and quantity, food shortages and greater exposure to toxicants. As a result, children experience greater risk of mental disorders, malnutrition, infectious diseases, allergic diseases and respiratory diseases. Mitigation measures such as reducing carbon pollution emissions, and adaptation measures such as early warning systems and post-disaster counseling, are strongly needed. Future health research directions should focus on: (1) identifying whether climate change impacts on children will be modified by gender, age and socioeconomic status; (2) refining outcome measures of children's vulnerability to climate change; (3) projecting children's disease burden under climate change scenarios; (4) exploring children's disease burden related to climate change in low-income countries; and (5) identifying the most cost-effective mitigation and adaptation actions from a children's health perspective.

Relevance:

10.00%

Publisher:

Abstract:

This paper examines the use of crowdfunding platforms to fund academic research. Looking specifically at the use of a Pozible campaign to raise funds for a small pilot research study into home education in Australia, the paper reports on the successes and problems of using the platform. It also examines the crowdsourcing of literature searching as part of the package. The paper looks at the realities of using this type of platform to gain start-up funding for a project and argues that family and friends are likely to be the biggest supporters, a finding that echoes similar work in the arts communities traditionally served by crowdfunding platforms. The paper argues that, with exceptions, these platforms can be a source of income at a time when academics are finding it increasingly difficult to source government funding for projects.

Relevance:

10.00%

Publisher:

Abstract:

RC4(n, m) is a stream cipher based on RC4, designed by G. Gong et al. It can be seen as a generalization of the famous RC4 stream cipher designed by Ron Rivest. The authors of RC4(n, m) claim that the cipher resists all the attacks that are successful against the original RC4. This paper reveals cryptographic weaknesses of the RC4(n, m) stream cipher. We develop two attacks. The first is based on the non-randomness of the internal state and allows the cipher to be distinguished from a truly random one by an algorithm with access to 2^(4·n) bits of the keystream. The second attack exploits the low diffusion of bits in the KSA and PRGA algorithms and recovers all bytes of the secret key. This attack works only if the initial value of the cipher can be manipulated. Apart from the secret key, the cipher uses two other inputs, namely the initial value and the initial vector. Although these inputs are fixed in the cipher specification, some applications may allow them to be under the attacker's control. Assuming that the attacker can control the initial value, we show a distinguisher for the cipher and a secret key recovery attack that, for an L-bit secret key, recovers it in about (L/n) · 2^n steps. The attack has been implemented on a standard PC and can reconstruct the secret key of RC4(8, 32) in less than a second.

Relevance:

10.00%

Publisher:

Abstract:

This paper makes a formal security analysis of the current Australian e-passport implementation using the model checking tools CASPER/CSP/FDR. We highlight security issues in the current implementation and identify new threats when an e-passport system is integrated with an automated processing system like SmartGate. The paper also provides a security analysis of the European Union (EU) proposal for Extended Access Control (EAC), which is intended to provide improved security in protecting the biometric information of the e-passport bearer. The current e-passport specification fails to provide a list of adequate security goals that could be used for security evaluation. We fill this gap by presenting a collection of security goals for the evaluation of e-passport protocols. Our analysis confirms existing security weaknesses that were previously identified and shows that both the Australian e-passport implementation and the EU proposal fail to address many security and privacy aspects that are paramount in implementing a secure border control mechanism. ACM Classification C.2.2 (Communication/Networking and Information Technology – Network Protocols – Model Checking), D.2.4 (Software Engineering – Software/Program Verification – Formal Methods), D.4.6 (Operating Systems – Security and Privacy Protection – Authentication)

Relevance:

10.00%

Publisher:

Abstract:

In the field of information retrieval (IR), researchers and practitioners are often faced with a demand for valid approaches to evaluate the performance of retrieval systems. The Cranfield experiment paradigm has been dominant for the in-vitro evaluation of IR systems. As an alternative to this paradigm, laboratory-based user studies have been widely used to evaluate interactive information retrieval (IIR) systems and, at the same time, to investigate users' information searching behaviours. Major drawbacks of laboratory-based user studies for evaluating IIR systems include the high monetary and temporal costs of setting up and running such experiments, the lack of heterogeneity in the user population, and the limited scale of the experiments, which usually involve a relatively restricted set of users. In this article, we propose an alternative experimental methodology to laboratory-based user studies. Our novel experimental methodology uses a crowdsourcing platform as a means of engaging study participants. Through crowdsourcing, our experimental methodology can capture user interactions and searching behaviours at a lower cost, with more data, and within a shorter period than traditional laboratory-based user studies, and therefore can be used to assess the performance of IIR systems. We show the characteristic differences of our approach with respect to traditional IIR experimental and evaluation procedures. We also perform a use case study comparing crowdsourcing-based evaluation with laboratory-based evaluation of IIR systems, which can serve as a tutorial for setting up crowdsourcing-based IIR evaluations.

Relevance:

10.00%

Publisher:

Abstract:

Background: Timely diagnosis and reporting of patient symptoms in hospital emergency departments (EDs) is a critical component of health services delivery. However, due to dispersed information resources and the vast amount of manual processing of unstructured information, accurate point-of-care diagnosis is often difficult.

Aims: The aim of this research is to report an initial experimental evaluation of a clinician-informed automated method addressing initial misdiagnoses associated with delayed receipt of unstructured radiology reports.

Method: A method was developed that resembles clinical reasoning for identifying limb abnormalities. The method consists of a gazetteer of keywords related to radiological findings; it classifies an X-ray report as abnormal if the report contains evidence found in the gazetteer. A set of 99 narrative reports of radiological findings was sourced from a tertiary hospital. Reports were manually assessed by two clinicians, and discrepancies were validated by a third, expert ED clinician; the final manual classification generated by the expert ED clinician was used as ground truth to empirically evaluate the approach.

Results: The automated method, which attempts to identify limb abnormalities by searching for keywords supplied by clinicians, achieved an F-measure of 0.80 and an accuracy of 0.80.

Conclusion: While the automated clinician-driven method achieved promising performance, a number of avenues for improvement, using advanced natural language processing (NLP) and machine learning techniques, were identified.
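The gazetteer-based classification described above is simple enough to sketch directly. The keyword entries and sample reports below are illustrative assumptions; the paper's actual gazetteer was compiled from clinician input and evaluated against the 99-report ground truth.

```python
import re

# Hypothetical gazetteer entries; the paper's list came from ED clinicians.
GAZETTEER = {"fracture", "dislocation", "lucency", "avulsion"}

def is_abnormal(report: str) -> bool:
    """Classify a free-text X-ray report as abnormal if any gazetteer
    keyword appears among its lower-cased word tokens."""
    tokens = set(re.findall(r"[a-z]+", report.lower()))
    return not GAZETTEER.isdisjoint(tokens)

print(is_abnormal("Undisplaced fracture of the distal radius."))  # True
print(is_abnormal("No acute bony abnormality demonstrated."))     # False
```

A real implementation would also need to handle negation ("no fracture seen"), which simple keyword matching misses — one of the avenues for improvement the abstract points toward with NLP techniques.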