996 results for "Query performance"


Relevance: 30.00%

Abstract:

This paper examines the effects of information request ambiguity and construct incongruence on end users' ability to develop SQL queries with an interactive relational database query language. In this experiment, ambiguity in information requests adversely affected accuracy and efficiency. Incongruities among the information request, the query syntax, and the data representation adversely affected accuracy, efficiency, and confidence. The results for ambiguity suggest that organizations might elicit better query development if end users were sensitized to the nature of ambiguities that could arise in their business contexts. End users could translate natural language queries into pseudo-SQL that could be examined for precision before the queries were developed. The results for incongruence suggest that better query development might ensue if semantic distances could be reduced by giving users data representations and database views that maximize construct congruence for the kinds of queries in typical domains. (C) 2001 Elsevier Science B.V. All rights reserved.
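To make the ambiguity problem concrete, here is a small sketch (not from the paper; the table, data, and salary threshold are all hypothetical) showing how one natural-language request can map to two different SQL queries with different answers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (name TEXT, dept TEXT, salary INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                 [("Ana", "sales", 90000), ("Bo", "sales", 40000),
                  ("Cy", "it", 95000), ("Dee", "sales", 78000)])

# Request: "list the well-paid sales staff" -- ambiguous: "well-paid" could
# mean above a fixed threshold, or above the company-wide average salary.
reading_1 = conn.execute(
    "SELECT name FROM emp WHERE dept = 'sales' AND salary > 80000 "
    "ORDER BY name").fetchall()
reading_2 = conn.execute(
    "SELECT name FROM emp WHERE dept = 'sales' "
    "AND salary > (SELECT AVG(salary) FROM emp) ORDER BY name").fetchall()

print(reading_1)  # [('Ana',)]
print(reading_2)  # [('Ana',), ('Dee',)] -- same request, different answer
```

Writing the request as pseudo-SQL before development, as the paper suggests, would force the choice between the two readings to be made explicitly.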

Relevance: 30.00%

Abstract:

The purpose of this study was to examine the relationship between skeletal muscle monocarboxylate transporters 1 and 4 (MCT1 and MCT4) expression, skeletal muscle oxidative capacity and endurance performance in trained cyclists. Ten well-trained cyclists (mean +/- SD; age 24.4 +/- 2.8 years, body mass 73.2 +/- 8.3 kg, VO(2max) 58 +/- 7 ml kg(-1) min(-1)) completed three endurance performance tasks [incremental exercise test to exhaustion, 2 and 10 min time trials (TT)]. In addition, a muscle biopsy sample from the vastus lateralis muscle was analysed for MCT1 and MCT4 expression levels together with the activity of citrate synthase (CS) and 3-hydroxyacyl-CoA dehydrogenase (HAD). There was a tendency for VO(2max) and peak power output obtained in the incremental exercise test to be correlated with MCT1 (r = -0.71 to -0.74; P < 0.06), but not MCT4. The average power output (P(average)) in the 2 min TT was significantly correlated with MCT4 (r = -0.74; P < 0.05) and HAD (r = -0.92; P < 0.01). The P(average) in the 10 min TT was only correlated with CS activity (r = 0.68; P < 0.05). These results indicate that the relationships between MCT1 and MCT4 expression and cycling TT performance may be influenced by the length and intensity of the task.

Relevance: 30.00%

Abstract:

New technologies built on peer-to-peer networks are developed every day to meet the need for sharing information, resources and database services around the world. Among them are peer-to-peer databases, which take advantage of peer-to-peer networks to manage distributed knowledge bases, allowing the sharing of information that is semantically related but syntactically heterogeneous. However, given the structural characteristics of these networks, it is a challenge to ensure efficient search for information without compromising the autonomy of each node or the flexibility of the network. On the other hand, some studies propose using ontology semantics to assign a standardized categorization to information. The main original contribution of this work is an approach to this problem based on query optimization supported by the Ant Colony algorithm and classification through ontologies. The results show that this strategy provides semantic support for searches in peer-to-peer databases, expanding the results without compromising network performance. © 2011 IEEE.
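The abstract does not spell out the routing algorithm, but the ant-colony idea can be sketched as follows (all names are hypothetical; a real version would add probabilistic selection, hop limits, and ontology-based candidate filtering):

```python
# Ant-colony-style query routing: peers that answered similar queries before
# accumulate "pheromone" and attract future queries; trails evaporate so the
# network stays adaptive.
EVAPORATION = 0.1   # fraction of pheromone lost per round (assumed)
REWARD = 1.0        # deposit when a peer answers the query (assumed)

pheromone = {"peer_a": 0.5, "peer_b": 0.5, "peer_c": 0.5}

def route(query, answering_peer):
    """Send the query to the strongest-scented peer, then update trails."""
    chosen = max(pheromone, key=pheromone.get)
    for peer in pheromone:                       # evaporation on every trail
        pheromone[peer] *= (1.0 - EVAPORATION)
    pheromone[answering_peer] += REWARD          # reinforce the useful path
    return chosen

first = route("concept:Car", "peer_b")   # all trails equal: arbitrary choice
second = route("concept:Car", "peer_b")  # peer_b's trail now dominates
print(second)  # peer_b
```

The key property, matching the abstract's goal, is that routing knowledge emerges locally at each node without a global index, preserving node autonomy.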

Relevance: 30.00%

Abstract:

[EN] The exon-1 of the androgen receptor (AR) gene contains two repeat length polymorphisms which modify either the amount of AR protein inside the cell (GGN(n), polyglycine) or its transcriptional activity (CAG(n), polyglutamine). Shorter CAG and/or GGN repeats provide stronger androgen signalling and vice versa. To test the hypothesis that CAG and GGN repeat AR polymorphisms affect muscle mass and various variables of muscular strength phenotype traits, the length of CAG and GGN repeats was determined by PCR and fragment analysis and confirmed by DNA sequencing of selected samples in 282 men (28.6 +/- 7.6 years). Individuals were grouped as CAG short (CAG(S)) or long (CAG(L)) if harbouring repeat lengths of ≤21 or >21, and as GGN short (GGN(S)) or long (GGN(L)) if harbouring GGN repeat lengths of ≤23 or >23, respectively. No significant differences in lean body mass or fitness were observed between the CAG(S) and CAG(L) groups, or between GGN(S) and GGN(L) groups, but a trend for a correlation was found for the GGN repeat and lean mass of the extremities (r=-0.11, p=0.06). In summary, the lengths of CAG and GGN repeat of the AR gene do not appear to influence lean mass or fitness in young men.

Relevance: 30.00%

Abstract:

[EN] The aim of this study was to determine the influence of the activity performed during the recovery period on the aerobic and anaerobic energy yield, as well as on performance, during high-intensity intermittent exercise (HIT). Ten physical education students participated in the study. First, they underwent an incremental exercise test to assess their maximal power output (Wmax) and VO2max. On subsequent days they performed three different HITs. Each HIT consisted of four cycling bouts until exhaustion at 110% Wmax. Recovery periods of 5 min were allowed between bouts. HITs differed in the kind of activity performed during the recovery periods: pedaling at 20% VO2max (HITA), stretching exercises, or lying supine. Performance was 3-4% and aerobic energy yield was 6-8% (both p < 0.05) higher during the HITA than during the other two kinds of HIT. The greater contribution of aerobic metabolism to the energy yield during the high-intensity exercise bouts with active recovery was due to faster VO2 kinetics (p < 0.01) and a higher VO2peak during the exercise bouts preceded by active recovery (p < 0.05). In contrast, the anaerobic energy yield (oxygen deficit and peak blood lactate concentrations) was similar in all HITs. Therefore, this study shows that active recovery facilitates performance by increasing the aerobic contribution to the whole energy yield turnover during high-intensity intermittent exercise.

Relevance: 30.00%

Abstract:

[EN] The aim of this study was to evaluate the effects of severe acute hypoxia on exercise performance and metabolism during 30-s Wingate tests. Five endurance- (E) and five sprint- (S) trained track cyclists from the Spanish National Team performed 30-s Wingate tests in normoxia and hypoxia (inspired O(2) fraction = 0.10). Oxygen deficit was estimated from submaximal cycling economy tests by use of a nonlinear model. E cyclists showed higher maximal O(2) uptake than S (72 +/- 1 and 62 +/- 2 ml x kg(-1) x min(-1), P < 0.05). S cyclists achieved higher peak and mean power output, and 33% larger oxygen deficit than E (P < 0.05). During the Wingate test in normoxia, S relied more on anaerobic energy sources than E (P < 0.05); however, S showed a larger fatigue index in both conditions (P < 0.05). Compared with normoxia, hypoxia lowered O(2) uptake by 16% in E and S (P < 0.05). Peak power output, fatigue index, and exercise femoral vein blood lactate concentration were not altered by hypoxia in any group. Endurance cyclists, unlike S, maintained their mean power output in hypoxia by increasing their anaerobic energy production, as shown by 7% greater oxygen deficit and 11% higher postexercise lactate concentration. In conclusion, performance during 30-s Wingate tests in severe acute hypoxia is maintained or barely reduced owing to the enhancement of the anaerobic energy release. The effect of severe acute hypoxia on supramaximal exercise performance depends on training background.

Relevance: 30.00%

Abstract:

Multiresolution Triangular Mesh (MTM) models are widely used to improve the performance of large terrain visualization by replacing the original model with a simplified one. MTM models, which consist of both original and simplified data, are commonly stored in spatial database systems due to their size. The relatively slow access speed of disks makes data retrieval the bottleneck of such terrain visualization systems. Existing spatial access methods proposed to address this problem rely on main-memory MTM models, which leads to significant overhead during query processing. In this paper, we approach the problem from a new perspective and propose a novel MTM called direct mesh that is designed specifically for secondary storage. It supports available indexing methods natively and requires no modification to the MTM structure. Experimental results, based on two real-world data sets, show an average performance improvement of 5-10 times over the existing methods.
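As a toy illustration of the multiresolution selection an MTM supports (this is not the paper's direct-mesh format; record fields are invented), each record carries the approximation error of a simplified triangle, and a query keeps the coarsest triangles whose error already meets the viewer's tolerance:

```python
# Flat records, as a disk-oriented layout would store them: a triangle, its
# parent in the simplification hierarchy, and its approximation error.
records = [
    {"tri": "t1",  "parent": None, "error": 8.0},   # coarse, high error
    {"tri": "t1a", "parent": "t1", "error": 2.0},   # refinement of t1
    {"tri": "t2",  "parent": None, "error": 0.5},   # coarse but accurate
]

def select_lod(records, tolerance):
    """Single scan: keep a triangle when its error is acceptable and its
    parent (if any) was too coarse and had to be refined."""
    err = {r["tri"]: r["error"] for r in records}
    keep = []
    for r in records:
        acceptable = r["error"] <= tolerance
        parent_too_coarse = r["parent"] is None or err[r["parent"]] > tolerance
        if acceptable and parent_too_coarse:
            keep.append(r["tri"])
    return keep

print(select_lod(records, 3.0))   # ['t1a', 't2'] -- t1 refined, t2 kept coarse
```

The single-scan, pointer-free shape of the query is the property that makes a secondary-storage MTM attractive compared with traversing an in-memory hierarchy.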

Relevance: 30.00%

Abstract:

In many advanced applications, data are described by multiple high-dimensional features. Moreover, different queries may weight these features differently; some may not even specify all the features. In this paper, we propose our solution to support efficient query processing in these applications. We devise a novel representation that compactly captures f features into two components: The first component is a 2D vector that reflects a distance range (minimum and maximum values) of the f features with respect to a reference point (the center of the space) in a metric space, and the second component is a bit signature, with two bits per dimension, obtained by analyzing each feature's descending energy histogram. This representation enables two levels of filtering: The first component prunes away points that do not share similar distance ranges, while the bit signature filters away points based on the dimensions of the relevant features. Moreover, the representation facilitates the use of a single index structure to further speed up processing. We employ the classical B+-tree for this purpose. We also propose a KNN search algorithm that exploits the access orders of critical dimensions of highly selective features and partial distances to prune the search space more effectively. Our extensive experiments on both real-life and synthetic data sets show that the proposed solution offers significant performance advantages over sequential scan and over retrieval methods using single and multiple VA-files.
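The first filtering level can be sketched directly from the description above (data, the slack parameter, and 2-D feature vectors are made up for illustration; the bit-signature stage is omitted for brevity):

```python
# Each object holds several feature vectors; the index stores only the
# min/max of their distances to a reference point, and a query prunes
# objects whose stored range cannot overlap the query's range.
import math

CENTER = (0.0, 0.0)   # reference point: the center of the space

def dist(p, q=CENTER):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def summary(features):
    """The 2D component: (min, max) distance of the features to the center."""
    ds = [dist(f) for f in features]
    return (min(ds), max(ds))

objects = {
    "a": [(1, 0), (0, 2)],    # distances 1 and 2 -> range (1.0, 2.0)
    "b": [(6, 8), (5, 5)],    # distances 10 and ~7.07 -> far from center
}
index = {name: summary(feats) for name, feats in objects.items()}

def candidates(query_features, slack=1.0):
    """Keep only objects whose distance range can overlap the query's."""
    qmin, qmax = summary(query_features)
    return [name for name, (dmin, dmax) in index.items()
            if dmin <= qmax + slack and dmax >= qmin - slack]

print(candidates([(0.5, 0.5)]))   # ['a'] -- "b" pruned without distance work
```

Only the survivors of this cheap range test would proceed to the bit-signature filter and, finally, to exact distance computation.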

Relevance: 30.00%

Abstract:

Spatial data are particularly useful in mobile environments. However, due to the low bandwidth of most wireless networks, developing large spatial database applications becomes a challenging process. In this paper, we provide the first attempt to combine two important techniques, multiresolution spatial data structures and semantic caching, towards efficient spatial query processing in mobile environments. Based on a study of the characteristics of multiresolution spatial data (MSD) and multiresolution spatial queries, we propose a new semantic caching model called Multiresolution Semantic Caching (MSC) for caching MSD in mobile environments. MSC enriches the traditional three-category query processing in semantic caches to five categories, thus improving performance in three ways: 1) it reduces the amount and complexity of the remainder queries; 2) it avoids redundant transmission of spatial data already residing in a cache; and 3) it can provide satisfactory answers before 100% of the query results have been transmitted to the client side. Our extensive experiments on a very large and complex real spatial database show that MSC outperforms traditional semantic caching models significantly.
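The core semantic-caching move, splitting a query into a locally answerable probe and a remote remainder, can be sketched as follows (this is not the MSC model itself: regions are 1-D intervals, only one-sided overlaps are handled, and MSC would additionally compare resolutions):

```python
# A cached region is described by a predicate (here an interval of x);
# a new query is trimmed against it into a probe query (answered from the
# cache) and a remainder query (sent over the wireless link).
cached = (0, 50)          # interval already held in the client cache

def trim(query):
    lo, hi = query
    clo, chi = cached
    probe = (max(lo, clo), min(hi, chi))
    if probe[0] >= probe[1]:
        return None, query                 # no overlap: full remainder
    if clo <= lo and hi <= chi:
        remainder = None                   # fully contained: no remainder
    elif lo >= clo:
        remainder = (probe[1], hi)         # overlap on the left of the query
    else:
        remainder = (lo, probe[0])         # overlap on the right of the query
    return probe, remainder

print(trim((30, 80)))   # ((30, 50), (50, 80)): part cached, part remote
```

Point 2) in the abstract corresponds exactly to the probe never being re-fetched, and point 3) to the probe being displayable before the remainder arrives.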

Relevance: 30.00%

Abstract:

Multiresolution (or multi-scale) techniques make it possible for Web-based GIS applications to access large datasets. The performance of such systems relies on data transmission over the network and on multiresolution query processing. In the literature the latter has received little research attention so far, and the existing methods are not capable of processing large datasets. In this paper, we aim to improve multiresolution query processing in an online environment. A cost model for such queries is proposed first, followed by three strategies for its optimization. Significant theoretical improvement can be observed when comparing against available methods. Application of these strategies is also discussed, and similar performance enhancement can be expected if they are implemented in online GIS applications.
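The abstract does not give the cost model's parameters, but a model in this spirit typically combines server processing with network transfer and picks the cheapest candidate plan; a hypothetical sketch:

```python
# Invented parameters: bandwidth and processing rate of an online GIS setup.
BANDWIDTH = 1_000_000    # bytes/s over the network (assumed)
PROC_RATE = 5_000_000    # bytes/s of server-side processing (assumed)

def cost(bytes_touched, bytes_sent):
    """Total query time: processing cost plus transmission cost."""
    return bytes_touched / PROC_RATE + bytes_sent / BANDWIDTH

plans = {
    # send the full-resolution result directly
    "full-resolution": cost(8_000_000, 8_000_000),
    # extra processing to simplify on the server, far less to transmit
    "coarse-then-refine": cost(9_000_000, 2_500_000),
}
best = min(plans, key=plans.get)
print(best)   # coarse-then-refine
```

With a low-bandwidth link dominating the cost, strategies that shift work to the server and shrink the transmitted payload win, which is the intuition behind optimizing multiresolution queries online.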

Relevance: 30.00%

Abstract:

This dissertation studies the caching of queries and how to cache in an efficient way, so that retrieving previously accessed data does not need any intermediary nodes between the data-source peer and the querying peer in a super-peer P2P network. A precise algorithm was devised that demonstrated how queries can be deconstructed to provide greater flexibility for reusing their constituent elements. It showed how subsequent queries can make use of more than one previous query, and any part of those queries, to reconstruct direct data communication with one or more source peers that have supplied data previously. In effect, a new query can search and exploit the entire cached list of queries to construct the list of data locations it requires that might match any locations previously accessed. The new method increases the likelihood of repeat queries being able to reuse earlier queries and provides a viable way of bypassing shared data indexes in structured networks. It could also increase the efficiency of unstructured networks by reducing traffic and the propensity for network flooding. In addition, a method for predicting query routing performance using a UML sequence diagram is introduced. This new method of performance evaluation provides designers with information about when it is most beneficial to use caching and how peer connections can optimize its exploitation.
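The deconstruction idea can be sketched with hypothetical structures (this is an illustration, not the dissertation's algorithm): cached queries are broken into constituent predicates, each remembered together with the source peers that supplied matching data, so a new query can reuse peers from any cached fragment it shares:

```python
# predicate -> set of source peers that previously supplied data for it
fragment_sources = {}

def record(query_predicates, peer):
    """Cache a completed query, one fragment per constituent predicate."""
    for pred in query_predicates:
        fragment_sources.setdefault(pred, set()).add(peer)

def known_sources(query_predicates):
    """For a new query, collect peers reachable directly for each shared
    fragment; predicates with no hit must still go through the super-peer."""
    return {pred: fragment_sources[pred]
            for pred in query_predicates if pred in fragment_sources}

record({"genre=jazz", "year>2000"}, "peer_1")
record({"genre=jazz", "format=flac"}, "peer_2")
hits = known_sources({"genre=jazz", "bitrate>256"})
# "genre=jazz" is reusable from two *different* earlier queries, so the new
# query can contact peer_1 and peer_2 directly, bypassing the shared index.
```

This shows the abstract's central point: fragment-level caching lets one new query combine parts of several earlier ones.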

Relevance: 30.00%

Abstract:

This thesis is a study of performance management of Complex Event Processing (CEP) systems. CEP systems have characteristics distinct from other well-studied computer systems, such as batch and online transaction processing systems and database-centric applications, and these characteristics introduce new challenges and opportunities for the performance management of CEP systems. Methodologies used in benchmarking CEP systems in many performance studies focus on scaling the load injection but do not consider the impact of the functional capabilities of CEP systems. This thesis proposes the approach of evaluating the performance of CEP engines' functional behaviours on events and develops a benchmark platform for CEP systems: CEPBen. The CEPBen benchmark platform is developed to explore the fundamental functional performance of event processing systems: filtering, transformation and event pattern detection. It is also designed to provide a flexible environment for exploring new metrics and influential factors for CEP systems and for evaluating the performance of CEP systems. Studies on factors and new metrics are carried out using the CEPBen benchmark platform on Esper. Different measurement points for response time in the performance management of CEP systems are discussed, and the response time of a targeted event is proposed as a metric for quality-of-service evaluation, complementing the traditional response time in CEP systems. Maximum query load is proposed as a capacity indicator with respect to the complexity of queries, and the number of live objects in memory as a performance indicator with respect to memory management. Query depth is studied as a performance factor that influences CEP system performance.
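The three functional behaviours CEPBen benchmarks can be illustrated with a minimal sketch (this is not CEPBen or Esper; event shapes and the A-followed-by-B pattern are invented for illustration):

```python
# Filtering: keep events satisfying a predicate.
def filter_events(stream, pred):
    return [e for e in stream if pred(e)]

# Transformation: derive a new event (or value) from each input event.
def transform(stream, fn):
    return [fn(e) for e in stream]

# Pattern detection: emit a match each time a 'B' follows an unconsumed 'A'.
def detect_a_then_b(stream):
    pending_a, matches = None, []
    for e in stream:
        if e["type"] == "A":
            pending_a = e
        elif e["type"] == "B" and pending_a is not None:
            matches.append((pending_a["id"], e["id"]))
            pending_a = None
    return matches

stream = [{"type": "A", "id": 1}, {"type": "C", "id": 2},
          {"type": "B", "id": 3}, {"type": "B", "id": 4}]
b_ids = transform(filter_events(stream, lambda e: e["type"] == "B"),
                  lambda e: e["id"])
print(b_ids)                    # [3, 4]
print(detect_a_then_b(stream))  # [(1, 3)]
```

Benchmarking these behaviours separately, as the thesis proposes, exposes costs (e.g. pattern-state management) that pure load-injection scaling hides.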

Relevance: 30.00%

Abstract:

This research focuses on automatically adapting a search engine's size in response to fluctuations in query workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computer resources to or from the engine. Our solution is to contribute an adaptive search engine that repeatedly re-evaluates its load and, when appropriate, switches over to a different number of active processors. We focus on three aspects and break them out into three sub-problems: Continually determining the Number of Processors (CNP), the New Grouping Problem (NGP) and the Regrouping Order Problem (ROP). CNP is the problem of determining, in the light of changes in the query workload, the ideal number of processors p to keep active at any given time. NGP arises once a change in the number of processors has been determined: it must then be decided which groups of search data will be distributed across the processors. ROP is the problem of redistributing this data onto processors while keeping the engine responsive and minimising both the switchover time and the incurred network load. We propose solutions for these sub-problems. For NGP we propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. For ROP we present an efficient method for redistributing data among processors while keeping the search engine responsive. For CNP, we propose an algorithm that determines the new size of the search engine by re-evaluating its load. We tested the solution's performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud. Our experiments show that, compared with computing the index from scratch, the incremental algorithm speeds up index computation 2-10 times while maintaining similar search performance. The chosen redistribution method is 25% to 50% faster than other methods and reduces the network load by around 30%. For CNP we present a deterministic algorithm that shows a good ability to determine a new size for the search engine. When combined, these algorithms yield an adaptive algorithm that adjusts the search engine size under a variable workload.
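The CNP idea, re-evaluate the load and derive a processor count, can be sketched as follows (the thesis's actual algorithm is not given in the abstract; per-processor capacity and target utilisation are assumed values):

```python
# Pick the smallest number of processors that keeps the estimated
# per-processor utilisation under a target, given the measured query rate.
import math

CAPACITY = 100.0      # queries/s one processor can serve (assumed)
TARGET_UTIL = 0.7     # keep processors below 70% busy (assumed headroom)

def processors_needed(query_rate):
    return max(1, math.ceil(query_rate / (CAPACITY * TARGET_UTIL)))

for rate in (50, 300, 800):
    print(rate, processors_needed(rate))
# 50 -> 1, 300 -> 5, 800 -> 12
```

In the full system, each change in this number would trigger NGP (regroup the index incrementally) and ROP (redistribute the groups while staying responsive).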

Relevance: 30.00%

Abstract:

Cloud computing can be defined as a distributed computing model through which resources (hardware, storage, development platforms and communication) are shared as paid services, accessible with minimal management effort and interaction. A great benefit of this model is that it enables the use of multiple providers (e.g., a multi-cloud architecture) to compose a set of services in order to obtain an optimal configuration for performance and cost. However, multi-cloud use is hindered by the problem of cloud lock-in: the dependency between an application and a cloud platform. It is commonly addressed by three strategies: (i) use of an intermediate layer that stands between consumers of cloud services and the provider; (ii) use of standardized interfaces to access the cloud; or (iii) use of models with open specifications. This work outlines and applies an approach to evaluate these strategies, finding that, despite the advances they have made, none of them actually solves the problem of cloud lock-in. In this sense, this work proposes the use of the Semantic Web to avoid cloud lock-in, where RDF models are used to specify the features of a cloud, which are managed through SPARQL queries. In this direction, this work: (i) presents an evaluation model that quantifies the cloud lock-in problem; (ii) evaluates cloud lock-in across three multi-cloud solutions and three cloud platforms; (iii) proposes using RDF and SPARQL for the management of cloud resources; (iv) presents the Cloud Query Manager (CQM), a SPARQL server that implements the proposal; and (v) compares three multi-cloud solutions with CQM in terms of response time and effectiveness in resolving cloud lock-in.
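The RDF-and-SPARQL idea can be sketched with a hand-rolled triple store (this is not CQM: a real deployment would use an RDF library or SPARQL endpoint, and the vocabulary below is invented for illustration):

```python
# Cloud features described as subject-predicate-object triples.
triples = [
    ("cloud:ec2",   "feat:hasStorage", "svc:s3"),
    ("cloud:ec2",   "feat:region",     "eu-west-1"),
    ("cloud:azure", "feat:hasStorage", "svc:blob"),
]

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None plays the role of a SPARQL
    variable, so match(p="feat:hasStorage") ~ { ?s feat:hasStorage ?o }."""
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# "SELECT ?cloud WHERE { ?cloud feat:hasStorage ?svc }" becomes:
clouds = [s for s, _, _ in match(p="feat:hasStorage")]
print(clouds)   # ['cloud:ec2', 'cloud:azure']
```

Because the query is written against a provider-neutral vocabulary rather than a provider's API, the same query works across clouds, which is the lock-in-avoidance point the work argues for.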