909 results for Keys to Database Searching


Relevance: 30.00%

Abstract:

The thesis proposes to divide Alycidae G. Canestrini & Fanzago into two subfamilies and four tribes. This new hierarchy is based on a reassessment and reranking, by cladistic analysis, of new and previously known synapomorphies of the groups concerned, using 60 morphological characters for 48 ingroup species. The basic characters of the taxa are illustrated either by SEM (Scanning Electron Microscopy) micrographs or by outline drawings. The presented classification includes the definitions of Alycini G. Canestrini & Fanzago new rank; Bimichaeliini Womersley new rank; Petralycini new rank; and the (re)descriptions of Alycus C.L. Koch, Pachygnathus Dugès, Amphialycus Zachvatkin, Bimichaelia Thor and Laminamichaelia gen. nov. The species described or redescribed are: Pachygnathus wasastjernae sp. nov. from Kvarken (Merenkurkku), Finland; Pachygnathus villosus Dugès (in Oken); Alycus roseus C.L. Koch; Alycus denasutus (Grandjean) comb. and stat. nov.; Alycus trichotus (Grandjean) comb. nov.; Alycus marinus (Schuster) comb. nov.; Amphialycus (Amphialycus) pentophthalmus Zachvatkin; Amphialycus (Amphialycus) leucogaster (Grandjean); Amphialycus (Orthacarus) oblongus (Halbert) comb. nov.; Bimichaelia augustana (Berlese); Bimichaelia sarekensis Trägårdh; Laminamichaelia setigera (Berlese) comb. nov.; Laminamichaelia arbusculosa (Grandjean) comb. nov.; Laminamichaelia subnuda (Berlese) comb. nov.; and Petralycus unicornis Grandjean. Fourteen nominal species were found to be junior synonyms. The importance of sensory organs in taxonomy is well recognized, but inclusion of the elaborate skin pattern substantially improved the usefulness of the prodorsal sensory area. The detailed pictures of the prodorsa of the European alycids could be used like passport photographs for the species. A similar database of the prodorsa of other mite taxa might answer future needs of species identification in soil zoology, ecology and conservation.

Relevance: 30.00%

Abstract:

Detect and Avoid (DAA) technology is widely acknowledged as a critical enabler for unsegregated Remotely Piloted Aircraft (RPA) operations, particularly Beyond Visual Line of Sight (BVLOS). Image-based DAA in the visible spectrum is a promising technological option for addressing the challenges DAA presents. Two impediments to progress for this approach are the scarcity of video footage available to train and test algorithms, and the lack of testing regimes and specifications that facilitate repeatable, statistically valid performance assessment. This paper makes three key contributions towards addressing these impediments. First, we detail our progress towards the creation of a large hybrid collision and near-collision encounter database. Second, we explore the suitability of techniques employed by the biometric research community (Speaker Verification and Language Identification) for DAA performance optimisation and assessment. These techniques include Detection Error Trade-off (DET) curves, Equal Error Rates (EER), and the Detection Cost Function (DCF). Finally, the hybrid database and the speech-based techniques are combined and employed in the assessment of a contemporary, image-based DAA system comprising stabilisation, morphological filtering and a Hidden Markov Model (HMM) temporal filter.
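As an illustration of the borrowed biometric metric, the sketch below estimates an Equal Error Rate from two score distributions. The Gaussian "target" and "clutter" scores are invented for the example and merely stand in for real DAA detector outputs.

```python
import numpy as np

def equal_error_rate(target_scores, clutter_scores):
    """Estimate the EER: the operating point where the false-rejection
    rate (targets missed) equals the false-acceptance rate (clutter kept)."""
    thresholds = np.sort(np.concatenate([target_scores, clutter_scores]))
    best_gap, eer = float("inf"), 1.0
    for t in thresholds:
        frr = np.mean(target_scores < t)    # targets rejected
        far = np.mean(clutter_scores >= t)  # clutter accepted
        if abs(frr - far) < best_gap:
            best_gap, eer = abs(frr - far), (frr + far) / 2
    return eer

# Toy score distributions: real detections score higher than clutter.
rng = np.random.default_rng(0)
target = rng.normal(2.0, 1.0, 1000)
clutter = rng.normal(0.0, 1.0, 1000)
eer = equal_error_rate(target, clutter)
```

Sweeping the full DET curve, rather than reporting a single operating point, is what makes results comparable across systems.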

Relevance: 30.00%

Abstract:

MLDB (Macromolecule Ligand Database) is a knowledge base containing ligands co-crystallized with the three-dimensional structures available in the Protein Data Bank. The proposed knowledge base serves as an open resource for the analysis and visualization of all ligands and their interactions with macromolecular structures. MLDB can be used to search ligands, and their interactions can be visualized in both text and graphical formats. MLDB will be updated at regular (weekly) intervals with automated Perl scripts. The knowledge base is intended to serve the scientific community working in the areas of molecular and structural biology. It is freely available around the clock at http://dicsoft2.physics.iisc.ernet.in/mldb/.

Relevance: 30.00%

Abstract:

Data mining involves the nontrivial process of extracting knowledge or patterns from large databases. Genetic Algorithms (GAs) are efficient and robust search and optimization methods used in data mining. In this paper we propose a Self-Adaptive Migration Model GA (SAMGA), in which the population size, the number of crossover points and the mutation rate for each population are fixed adaptively. Further, the migration of individuals between populations is decided dynamically. The paper gives a mathematical schema analysis of the method, showing that the algorithm exploits previously discovered knowledge for a more focused and concentrated search of heuristically high-yielding regions while simultaneously performing a highly explorative search over the other regions of the search space. The effective performance of the algorithm is then demonstrated on standard testbed functions and a set of real classification data mining problems. A Michigan-style classifier was used to build the classifier, and the system was tested on machine learning databases including the Pima Indians Diabetes and Wisconsin Breast Cancer databases, among others. The performance of our algorithm compares favourably with existing approaches.
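SAMGA's full migration model and schema analysis are beyond a short example, but the flavour of self-adaptation can be sketched with a single-population GA whose mutation rate reacts to stalled progress. All parameter values and the stall heuristic here are illustrative, not those of the paper.

```python
import random

def adaptive_ga(fitness, n_bits=20, pop_size=30, generations=60, seed=1):
    """Minimal GA sketch with a self-adapted mutation rate: when the best
    fitness stalls, mutation rises to explore; when it improves, it falls."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    mut_rate, best_prev = 1.0 / n_bits, -1
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        best = fitness(scored[0])
        if best <= best_prev:
            mut_rate = min(0.25, mut_rate * 1.5)          # stalled: explore
        else:
            mut_rate = max(1.0 / n_bits, mut_rate / 1.5)  # improving: exploit
        best_prev = best
        elite = scored[: pop_size // 2]                   # truncation selection
        nxt = [list(scored[0])]                           # elitism: keep champion
        while len(nxt) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_bits)                # one-point crossover
            child = [g ^ (rng.random() < mut_rate) for g in a[:cut] + b[cut:]]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# One-max toy problem: fitness is simply the number of 1-bits.
best = adaptive_ga(sum)
```

In the paper's multi-population setting the same feedback additionally drives migration between populations.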

Relevance: 30.00%

Abstract:

Most of the diseases affecting public health, like hypertension, are multifactorial in etiology. Hypertension is influenced by genetic, lifestyle and environmental factors. Estimates of the genetic contribution to the risk of essential hypertension range from 30 to 50%. It is plausible that in most cases susceptibility to hypertension is determined by the action of more than one gene. Although the exact molecular mechanism underlying essential hypertension remains obscure, several monogenic forms of hypertension have been identified. Since common genetic variations may predict not only susceptibility to hypertension but also the response to antihypertensive drug therapy, pharmacogenetic approaches may provide useful markers for finding relations between candidate genes and phenotypes of hypertension. The aim of this study was to identify genetic mutations and polymorphisms contributing to human hypertension, and to examine their relationships to intermediate phenotypes of hypertension, such as blood pressure (BP) responses to antihypertensive drugs or biochemical laboratory values. Two groups of patients were investigated in the present study. The first group was collected from the database of patients investigated in the Hypertension Outpatient Ward, Helsinki University Central Hospital, and consisted of 399 subjects considered to have essential hypertension. Frequencies of the mutant or variant alleles were compared with those in two reference groups, healthy blood donors (n = 301) and normotensive males (n = 175). The second group of subjects with hypertension was collected prospectively. The study subjects (n = 313) underwent a protocol lasting eight months, including four one-month treatment periods with antihypertensive medications (a thiazide diuretic, a β-blocker, a calcium channel antagonist, and an angiotensin II receptor antagonist).
BP responses and laboratory values were related to polymorphisms of several candidate genes of the renin-angiotensin system (RAS). In addition, two patients with typical features of Liddle’s syndrome were screened for mutations in kidney epithelial sodium channel (ENaC) subunits. Two novel mutations causing Liddle’s syndrome were identified. The first was located in the beta-subunit of ENaC and the second in the gamma-subunit, constituting the first Liddle mutation identified in the extracellular domain. The latter mutation showed a 2-fold increase in channel activity in vitro. Three gene variants, of which two are novel, were identified in ENaC subunits. The prevalence of the variants was three times higher in hypertensive patients (9%) than in the reference groups (3%). The variant carriers had an increased daily urinary potassium excretion rate in relation to their renin levels compared with controls, suggesting increased ENaC activity, although the variants did not show increased channel activity in vitro. Of the common RAS polymorphisms studied, the angiotensin II receptor type 1 (AGTR1) 1166 A/C polymorphism was associated with modest changes in RAS activity: patients homozygous for the C allele tended to have increased aldosterone and decreased renin levels. In vitro functional studies using transfected HEK293 cells provided additional evidence that the AGTR1 1166 C allele may be associated with increased expression of AGTR1. Common polymorphisms of the alpha-adducin and RAS genes did not significantly predict BP responses to one-month monotherapies with hydrochlorothiazide, bisoprolol, amlodipine, or losartan. In conclusion, two novel mutations of ENaC subunits causing Liddle’s syndrome were identified. In addition, three common ENaC polymorphisms were shown to be associated with the occurrence of essential hypertension, but their exact functional and clinical consequences remain to be explored.
The AGTR1 1166 C allele may modify the endocrine phenotype of hypertensive patients when present in homozygous form. Certain widely studied polymorphisms of the ACE, angiotensinogen, AGTR1 and alpha-adducin genes did not significantly affect responses to a thiazide, a β-blocker, a calcium channel antagonist, or an angiotensin II receptor antagonist.

Relevance: 30.00%

Abstract:

We propose a novel, language-neutral approach for searching online handwritten text using Fréchet distance. Online handwritten data, available as a time series (x, y, t), is treated as a parameterized curve in two dimensions, and the problem of searching online handwritten text is posed as one of matching two curves in a two-dimensional Euclidean space. Fréchet distance is a natural measure for matching curves. The main contribution of this paper is the formulation of a variant of Fréchet distance that can be used for retrieving words even when only a prefix of the word is given as the query. Extensive experiments on the UNIPEN dataset, consisting of over 16,000 words written by 7 users, show that our method outperforms the state-of-the-art DTW method. Experiments were also conducted on a multilingual dataset, generated on a PDA, with encouraging results. Our approach can be used to implement useful features such as auto-completion of handwriting on PDAs.
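The standard discrete Fréchet distance underlying this approach can be sketched with the classic dynamic program over index pairs; the prefix-matching variant that is the paper's actual contribution is not reproduced here.

```python
from functools import lru_cache
import math

def discrete_frechet(P, Q):
    """Discrete Frechet distance between polylines P and Q: the smallest
    'leash length' needed when both curves are traversed monotonically."""
    @lru_cache(maxsize=None)
    def c(i, j):
        d = math.dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        # Advance on P, on Q, or on both; keep the cheapest history.
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
    return c(len(P) - 1, len(Q) - 1)

# Identical curves are at distance 0; a vertically shifted copy is at the shift.
curve = [(0.0, 0.0), (1.0, 0.0), (2.0, 1.0)]
shifted = [(x, y + 0.5) for x, y in curve]
```

Unlike DTW, the Fréchet distance takes a maximum rather than a sum over matched pairs, which is why it respects the curves' geometry.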

Relevance: 30.00%

Abstract:

Mobile applications are being deployed on a massive scale in various mobile sensor grid database systems. Given the limited resources of mobile devices, processing the huge number of queries from mobile users against distributed sensor grid databases becomes a critical problem for such mobile systems. While the fundamental semantic cache technique has been investigated for query optimization in sensor grid database systems, the problem remains difficult because more realistic multi-dimensional constraints have not been considered in existing methods. To solve the problem, a new semantic cache scheme is presented in this paper for location-dependent data queries in distributed sensor grid database systems. It considers multi-dimensional constraints or factors in a unified cost model architecture, determines the parameters of the cost model using the concept of Nash equilibrium from game theory, and makes semantic cache decisions from the established cost model. Scenarios involving the three factors of semantics, time and location are investigated as special cases, improving on existing methods. Experiments are conducted to demonstrate the semantic cache scheme presented in this paper for distributed sensor grid database systems.
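A toy version of such a multi-dimensional cost model might weigh semantic overlap, freshness and distance as below. The fixed weights stand in for the Nash-equilibrium-derived parameters of the paper, and all field names are invented for the illustration.

```python
def cache_benefit(item, query, weights=(0.5, 0.3, 0.2)):
    """Illustrative cost model: score a cached segment against an incoming
    query on three factors, each normalized into [0, 1]."""
    w_sem, w_time, w_loc = weights
    # Semantic overlap: fraction of queried attributes the cached item covers.
    sem = len(item["attrs"] & query["attrs"]) / len(query["attrs"])
    # Freshness: decays with the age of the cached answer.
    time = 1.0 / (1.0 + query["now"] - item["cached_at"])
    # Locality: decays with Euclidean distance between item and query.
    dx = item["loc"][0] - query["loc"][0]
    dy = item["loc"][1] - query["loc"][1]
    loc = 1.0 / (1.0 + (dx * dx + dy * dy) ** 0.5)
    return w_sem * sem + w_time * time + w_loc * loc

item = {"attrs": {"temp", "humidity"}, "cached_at": 8.0, "loc": (0.0, 0.0)}
query = {"attrs": {"temp"}, "now": 10.0, "loc": (3.0, 4.0)}
score = cache_benefit(item, query)
```

A cache manager would answer from the cached segment (or evict it) according to whether this score clears a threshold.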

Relevance: 30.00%

Abstract:

The aim of this study is to share the key elements of an evaluation framework to determine the true clinical outcomes of bone-anchored prostheses. Scientists, clinicians and policy makers are encouraged to implement their own evaluations relying on the proposed framework using a single database to facilitate reflective practice and, eventually, robust prospective studies.

Relevance: 30.00%

Abstract:

In a search for new phenomena in a signature suppressed in the standard model of elementary particles (SM), we compare the inclusive production of events containing a lepton, a photon, significant transverse momentum imbalance (MET), and a jet identified as containing a b-quark, to SM predictions. The search uses data produced in proton-antiproton collisions at 1.96 TeV corresponding to 1.9 fb^-1 of integrated luminosity taken with the CDF detector at the Fermilab Tevatron. We find 28 lepton+photon+MET+b events versus an expectation of 31.0 +4.1/-3.5 events. If we further require events to contain at least three jets and large total transverse energy, simulations predict that the largest SM source is top-quark pair production with an additional radiated photon, ttbar+photon. In the data we observe 16 ttbar+photon candidate events versus an expectation from SM sources of 11.2 +2.3/-2.1. Assuming the difference between the observed number and the predicted non-top-quark total is due to SM top-quark production, we estimate the ttbar+photon cross section to be 0.15 ± 0.08 pb.

Relevance: 30.00%

Abstract:

As the virtual world grows more complex, finding a standard way of storing data becomes increasingly important. Ideally, each data item would be brought into the computer system only once. References to data items need to be cryptographically verifiable, so the data can maintain its identity while being passed around. This way there will be only one copy of the user's family photo album, while the user can use multiple tools to show or manipulate the album. Copies of the user's data could be stored on family members' computers, on some of the user's own computers, and also at some online services that he uses. When all actors operate over one replicated copy of the data, the system automatically avoids a single point of failure: the data will not disappear when one computer breaks or one service provider goes out of business. One shared copy also makes it possible to delete a piece of data from all systems at once, at the user's request. In our research we tried to find a model that would make data manageable for users and make it possible to have the same data stored at various locations. We studied three systems, Persona, Freenet, and GNUnet, that suggest different models for protecting user data. The main application areas of the systems studied include securing online social networks, providing anonymous web access, and preventing censorship in file-sharing. Each of the systems studied stores user data on machines belonging to third parties. The systems differ in the measures they take to protect their users from data loss, forged information, censorship, and monitoring. All of the systems use cryptography to secure the names used for content and to protect the data from outsiders. Based on the knowledge gained, we built a prototype platform called Peerscape, which stores user data in a synchronized, protected database.
Data items themselves are protected with cryptography against forgery, but not encrypted, as the focus has been on disseminating the data directly among family and friends rather than letting third parties store the information. We turned the synchronizing database into a peer-to-peer web by exposing its contents through an integrated HTTP server. The REST-like HTTP API supports development of applications in JavaScript. To evaluate the platform's suitability for application development we wrote some simple applications, including a public chat room, a BitTorrent site, and a flower-growing game. During our early tests we came to the conclusion that using the platform for simple applications works well. As web standards develop further, writing applications for the platform should become easier. Any system this complex will have its problems, and we are not expecting our platform to replace the existing web, but we are fairly impressed with the results and consider our work important from the perspective of managing user data.
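The abstract does not spell out Peerscape's exact reference format, but one common realization of a cryptographically verifiable reference is a self-certifying name: the content's own hash, so any replica can be checked against the name it travels under.

```python
import hashlib

def make_ref(data: bytes) -> str:
    """A self-certifying reference: the SHA-256 digest of the content,
    usable as its name across replicas."""
    return hashlib.sha256(data).hexdigest()

def verify(ref: str, data: bytes) -> bool:
    """Any holder of the reference can check a copy without trusting
    the machine that served it."""
    return make_ref(data) == ref

photo = b"family photo album, page 1"
ref = make_ref(photo)
ok = verify(ref, photo)               # a genuine copy checks out
tampered = verify(ref, photo + b"!")  # a modified copy is rejected
```

With such names, forgery is detectable even when the data is stored unencrypted on a friend's machine, which matches the forgery-protection-without-encryption design described above.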


Relevance: 30.00%

Abstract:

Grover's database search algorithm, although discovered in the context of quantum computation, can be implemented using any physical system that allows superposition of states. A physical realization of this algorithm is described using coupled simple harmonic oscillators, which can be solved exactly in both the classical and quantum domains. Classical wave algorithms are far more stable against decoherence than their quantum counterparts. In addition to providing convenient demonstration models, they may have a role in practical situations, such as catalysis.
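Independent of the physical substrate, one Grover iteration is a phase flip on the marked item followed by inversion about the mean, which a plain state-vector simulation makes concrete:

```python
import math

def grover_search(n_items, marked, iterations=None):
    """Simulate Grover amplitude amplification on a real state vector.
    The optimal iteration count scales as (pi/4) * sqrt(N)."""
    if iterations is None:
        iterations = int(round(math.pi / 4 * math.sqrt(n_items)))
    amp = [1.0 / math.sqrt(n_items)] * n_items   # uniform superposition
    for _ in range(iterations):
        amp[marked] = -amp[marked]               # oracle: phase-flip the target
        mean = sum(amp) / n_items
        amp = [2 * mean - a for a in amp]        # inversion about the mean
    return [a * a for a in amp]                  # measurement probabilities

probs = grover_search(64, marked=7)
```

For 64 items only about 6 iterations concentrate nearly all probability on the marked entry, versus ~32 expected classical probes; the coupled-oscillator realization in the abstract implements the same two reflections with waves.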

Relevance: 30.00%

Abstract:

This paper addresses the problem of secure path key establishment in wireless sensor networks that use the random key predistribution technique. Inspired by the recent proxy-based schemes in [1] and [2], we introduce a friend-based scheme for establishing pairwise keys securely. We show that the chances of finding friends in a neighbourhood are considerably greater than those of finding proxies, leading to lower communication overhead. Further, we prove that the friend-based scheme performs better than the proxy-based scheme in terms of resilience against node capture.
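A minimal sketch of random key predistribution, with a helper node standing in for a "friend" (here simply a neighbour that shares a key with both endpoints; the paper's actual friend criterion is richer), could look like:

```python
import random

def deploy(n_nodes, pool_size, ring_size, seed=3):
    """Random key predistribution: before deployment, each node draws
    ring_size key IDs at random from a shared key pool."""
    rng = random.Random(seed)
    return [set(rng.sample(range(pool_size), ring_size))
            for _ in range(n_nodes)]

def find_helpers(a, b, neighbours):
    """Candidate intermediaries for a path key between rings a and b:
    neighbours sharing at least one key with each endpoint."""
    return [h for h in neighbours if h & a and h & b]

nodes = deploy(n_nodes=30, pool_size=1000, ring_size=100)
a, b = nodes[0], nodes[1]
helpers = find_helpers(a, b, nodes[2:])
```

With a 1000-key pool and 100-key rings, almost every neighbour shares a key with both endpoints, which is the intuition behind friends being easier to find than proxies.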

Relevance: 30.00%

Abstract:

Template matching is concerned with measuring the similarity between the patterns of two objects. This paper proposes a memory-based reasoning approach for pattern recognition of binary images with a large template set. Memory-based reasoning intrinsically requires a large database, and some binary image recognition problems inherently need large template sets, such as the recognition of Chinese characters, which needs thousands of templates. The proposed algorithm is based on the Connection Machine, which is the most massively parallel machine to date, and uses a multiresolution method to search for the matching template. The approach uses the pyramid data structure for the multiresolution representation of templates and the input image pattern. For a given binary image it scans the template pyramid searching for a match. A binary image of N × N pixels can be matched in O(log N) time by our algorithm, independent of the number of templates. Implementation of the proposed scheme is described in detail.
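The coarse-to-fine pyramid search can be sketched sequentially (the Connection Machine parallelism, which yields the O(log N) bound, is not modelled); the near-best pruning threshold is an illustrative choice.

```python
def downsample(img):
    """Halve a square binary image by OR-ing each 2x2 block (one level)."""
    n = len(img) // 2
    return [[max(img[2*i][2*j], img[2*i][2*j+1],
                 img[2*i+1][2*j], img[2*i+1][2*j+1]) for j in range(n)]
            for i in range(n)]

def pyramid(img, levels):
    out = [img]
    for _ in range(levels - 1):
        out.append(downsample(out[-1]))
    return out

def diff(a, b):
    """Hamming distance between two equal-sized binary images."""
    return sum(x != y for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def coarse_to_fine_match(image, templates, levels=3):
    """Compare at the coarsest level first; only near-best survivors are
    compared at finer levels, so most templates are ruled out cheaply."""
    img_pyr = pyramid(image, levels)
    tmpl_pyrs = [pyramid(t, levels) for t in templates]
    candidates = list(range(len(templates)))
    for lvl in range(levels - 1, -1, -1):
        scores = sorted((diff(img_pyr[lvl], tmpl_pyrs[k][lvl]), k)
                        for k in candidates)
        best = scores[0][0]
        candidates = [k for d, k in scores if d <= best + 1]
    return candidates[0]

# Tiny 8x8 demo: the cross image should match the cross template.
blank = [[0] * 8 for _ in range(8)]
cross = [[1 if i == 4 or j == 4 else 0 for j in range(8)] for i in range(8)]
box = [[1 if i in (0, 7) or j in (0, 7) else 0 for j in range(8)] for i in range(8)]
match = coarse_to_fine_match(cross, [blank, box, cross])
```

On the Connection Machine the per-level comparisons run over all templates at once, which is what removes the template count from the matching time.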

Relevance: 30.00%

Abstract:

Many real-time database applications arise in electronic financial services, safety-critical installations and military systems, where enforcing security is crucial to the success of the enterprise. For real-time database systems supporting applications with firm deadlines, we investigate here the performance implications, in terms of killed transactions, of guaranteeing multilevel secrecy. In particular, we focus on the concurrency control (CC) aspects of this issue. Our main contributions are the following. First, we identify which among the previously proposed real-time CC protocols are capable of providing covert-channel-free security. Second, using a detailed simulation model, we profile the real-time performance of a representative set of these secure CC protocols for a variety of security-classified workloads and system configurations. Our experiments show that a prioritized optimistic CC protocol, OPT-WAIT, provides the best overall performance. Third, we propose and evaluate a novel "dual-CC" approach that allows the real-time database system to simultaneously use different CC mechanisms for guaranteeing security and for improving real-time performance; by appropriately choosing these mechanisms, we design concurrency control protocols that provide even better performance than OPT-WAIT. Finally, we propose and evaluate GUARD, an adaptive admission-control policy designed to provide fairness in the distribution of killed transactions across security levels. Our experiments show that GUARD efficiently provides close to ideal fairness for real-time applications that can tolerate covert channel bandwidths of up to one bit per second.