903 results for Affine Blocking Sets


Relevance:

20.00%

Publisher:

Abstract:

The purpose of this study was to quantify energy expenditure (EE) during multiple sets of leg press (LP) and bench press (BP) exercises in 10 males with at least 1 yr of resistance training (RT). The subjects underwent two sessions to determine the 1 repetition maximum (1RM) on the BP and LP, and one protocol consisting of a warm-up and 4 sets of 10 repetitions at 70% 1RM with a 3-min rest period between sets for each exercise. Energy expenditure was calculated as the sum of oxygen uptake (aerobic component), EPOC, and lactate production (anaerobic component). There were no significant differences in EE between exercises for sets 1 to 4 or in the total energy expended. However, statistical analysis revealed a significant difference (P<0.05) between exercises in RT economy (BP, 0.0206 ± 0.0044 kcal·kg⁻¹ vs. LP, 0.0051 ± 0.0015 kcal·kg⁻¹). Within-exercise comparison showed that set 4 was significantly different from sets 1 and 3 for BP, and for LP a significant difference was found between set 4 and sets 1, 2 and 3. Our results point to an increase in EE across multiple sets at 70% 1RM and show that, despite the difference in muscle mass involved and total work done during each type of exercise, EE did not differ between exercises because of the greater economy of the LP.
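
Purely to illustrate how the three components above combine, the following Python sketch sums an aerobic, an EPOC and a lactate-derived anaerobic term into a per-set EE and divides by the total mass lifted to obtain an economy value. The conversion factors and the normalization by mass lifted are common conventions assumed for the example, not values or definitions taken from the study; all numbers are hypothetical.

```python
# Illustrative sketch only: sums the aerobic, EPOC and lactate-derived terms into a
# total energy expenditure (EE) for one set. Conversion factors are common textbook
# values (about 5 kcal per litre of O2; about 3 ml O2 per kg body mass per mmol/L of
# net lactate) and the per-kg-lifted economy is an assumed normalization; neither is
# taken from the study itself.

KCAL_PER_LITRE_O2 = 5.0
ML_O2_PER_KG_PER_MMOL_LACTATE = 3.0

def set_energy_expenditure(vo2_exercise_l, vo2_epoc_l, delta_lactate_mmol,
                           body_mass_kg, load_kg, reps):
    """Return total EE (kcal) for one set and its economy (kcal per kg of mass lifted)."""
    aerobic_kcal = vo2_exercise_l * KCAL_PER_LITRE_O2
    epoc_kcal = vo2_epoc_l * KCAL_PER_LITRE_O2
    anaerobic_l_o2 = delta_lactate_mmol * ML_O2_PER_KG_PER_MMOL_LACTATE * body_mass_kg / 1000.0
    anaerobic_kcal = anaerobic_l_o2 * KCAL_PER_LITRE_O2
    total_kcal = aerobic_kcal + epoc_kcal + anaerobic_kcal
    economy = total_kcal / (load_kg * reps)   # assumed: normalize by total mass lifted
    return total_kcal, economy

# Hypothetical bench-press set: 80 kg subject, 10 repetitions at 70 kg
ee, econ = set_energy_expenditure(1.5, 0.8, 2.0, body_mass_kg=80, load_kg=70, reps=10)
print(f"EE = {ee:.1f} kcal, economy = {econ:.4f} kcal per kg lifted")
```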

Relevance:

20.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

20.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

20.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

20.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

20.00%

Publisher:

Abstract:

This project aims to develop methods for classifying data in a Data Warehouse for decision-making purposes. A further goal is to reduce an attribute set in a Data Warehouse so that the reduced set preserves the properties of the original one. With a reduced set we obtain a lower computational processing cost, can identify attributes that are irrelevant to certain kinds of situations, and can recognize patterns in the database that support decision making. To achieve these objectives, the Rough Sets algorithm will be implemented. We chose PostgreSQL as the database management system because it is efficient, well established and open source (freely distributed).
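
As a minimal sketch of the rough-set idea the project relies on (checking whether a reduced attribute set preserves the classification power of the full set via indiscernibility classes and the positive region), the Python code below works on a tiny hypothetical decision table; the attribute names and helper functions are illustrative, not the project's actual Data Warehouse implementation.

```python
from collections import defaultdict

# Toy decision table: each row maps condition attributes to values plus a decision.
# Attribute names and values are hypothetical.
rows = [
    {"region": "N", "size": "big",   "decision": "yes"},
    {"region": "N", "size": "small", "decision": "no"},
    {"region": "S", "size": "big",   "decision": "yes"},
    {"region": "S", "size": "small", "decision": "yes"},
]

def indiscernibility(attrs):
    """Group row indices that are indistinguishable on the given attributes."""
    classes = defaultdict(list)
    for i, r in enumerate(rows):
        classes[tuple(r[a] for a in attrs)].append(i)
    return list(classes.values())

def positive_region(attrs):
    """Indices whose equivalence class is consistent with a single decision."""
    pos = set()
    for cls in indiscernibility(attrs):
        if len({rows[i]["decision"] for i in cls}) == 1:
            pos.update(cls)
    return pos

def preserves_classification(reduced, full):
    """A reduced attribute set is acceptable if it keeps the same positive region."""
    return positive_region(reduced) == positive_region(full)

full = ["region", "size"]
print(preserves_classification(["size"], full))    # does 'size' alone suffice?
print(preserves_classification(["region"], full))  # does 'region' alone suffice?
```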

Relevance:

20.00%

Publisher:

Abstract:

In this paper, some aspects of chaotic behavior and minimality in the theory of planar piecewise smooth vector fields are treated. The occurrence of non-deterministic chaos is observed and the concept of orientable minimality is introduced. Some relations between minimality and orientable minimality are also investigated, and the existence of new kinds of non-trivial minimal sets in chaotic systems is observed. The approach is geometrical and involves the usual techniques of non-smooth systems.

Relevance:

20.00%

Publisher:

Abstract:

The role played by the attainable set of a differential inclusion in the study of dynamic control systems and fuzzy differential equations is widely acknowledged. Estimating the attainable set is, however, rather complicated compared with applying numerical methods to differential equations. This article addresses an alternative approach, based on an optimal control tool, to obtain a description of the attainable sets of differential inclusions. In particular, we obtain an exact delineation of the attainable set for a large class of nonlinear differential inclusions.
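
The article's own approach is based on optimal control; purely as a contrasting illustration of what an attainable set is, the sketch below approximates the attainable set of a simple scalar differential inclusion by Euler-integrating many piecewise-constant selections, including the two extreme ones. The example system and all names are assumptions made for the illustration.

```python
import random

# Brute-force illustration (not the article's optimal-control approach):
# approximate the attainable set at time T for the scalar inclusion
#     x'(t) in { -x(t) + u : u in [-1, 1] },   x(0) = 0,
# by integrating Euler trajectories for many constant selections u(t).

def endpoint(control, T=1.0, steps=200):
    """Euler-integrate x' = -x + u(t) and return x(T) for a given selection u."""
    x, dt = 0.0, T / steps
    for k in range(steps):
        x += dt * (-x + control(k * dt))
    return x

selections = [lambda t: 1.0, lambda t: -1.0]                              # extreme selections
selections += [lambda t, c=random.uniform(-1, 1): c for _ in range(500)]  # random constant ones
endpoints = [endpoint(u) for u in selections]
print(f"approximate attainable set at T=1: [{min(endpoints):.3f}, {max(endpoints):.3f}]")
# Exact answer for this linear example: [-(1 - e**-1), 1 - e**-1], roughly [-0.632, 0.632].
```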

Relevance:

20.00%

Publisher:

Abstract:

Empirical phylogeographic studies have progressively sampled greater numbers of loci over time, in part motivated by theoretical papers showing that estimates of key demographic parameters improve as the number of loci increases. Recently, next-generation sequencing has been applied to questions about organismal history, with the promise of revolutionizing the field. However, no systematic assessment of how phylogeographic data sets have changed over time with respect to overall size and information content has been performed. Here, we quantify the changing nature of these genetic data sets over the past 20 years, focusing on papers published in Molecular Ecology. We found that the number of independent loci, the total number of alleles sampled and the total number of single nucleotide polymorphisms (SNPs) per data set have increased over time, with particularly dramatic increases within the past 5 years. Interestingly, uniparentally inherited organellar markers (e.g. animal mitochondrial and plant chloroplast DNA) continue to represent an important component of phylogeographic data. Single-species studies (cf. comparative studies) that focus on vertebrates (particularly fish and, to some extent, birds) represent the gold standard of phylogeographic data collection. Based on the current trajectory seen in our survey data, forecast modelling indicates that the median number of SNPs per data set for studies published by the end of 2016 may approach ~20,000. This survey provides baseline information for understanding the evolution of phylogeographic data sets and underscores the fact that development of analytical methods for handling very large genetic data sets will be critical for facilitating growth of the field.

Relevance:

20.00%

Publisher:

Abstract:

We construct a centerless W-infinity type algebra in terms of a generator of a centerless Virasoro algebra and an abelian spin-1 current. This algebra conventionally emerges in the study of pseudo-differential operators on a circle, or alternatively within the KP hierarchy with Watanabe's bracket. The construction used here is based on a spherical deformation of the algebra W∞ of area-preserving diffeomorphisms of a 2-manifold. We show that this deformation technique applies to the two-loop WZNW and conformal affine Toda models, thereby establishing the W∞ invariance of these models.
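
For orientation only, the two ingredients named above (a centerless Virasoro generator and an abelian spin-1 current) obey, in the usual mode conventions, the following relations; the paper's normalizations may differ.

```latex
% Standard mode conventions; the paper's normalizations may differ.
\begin{align*}
  [L_m, L_n] &= (m - n)\, L_{m+n}, &&\text{centerless Virasoro generators}\\
  [L_m, J_n] &= -\, n\, J_{m+n},   &&\text{$J_n$ is a spin-1 (weight-1) current}\\
  [J_m, J_n] &= 0,                 &&\text{the current is abelian and centerless}
\end{align*}
```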

Relevance:

20.00%

Publisher:

Abstract:

We use Hirota's method, formulated as a recursive scheme, to construct a complete set of soliton solutions for the affine Toda field theory based on an arbitrary Lie algebra. Our solutions include a new class of solitons connected with two different types of degeneracies encountered in Hirota's perturbation approach. We also derive a universal mass formula for all Hirota solutions to the affine Toda model, valid for all underlying Lie groups. The embedding of the affine Toda model in the conformal affine Toda model plays a crucial role in this analysis.
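
As a reminder of the generic shape of Hirota's recursive scheme (the concrete bilinear equations and the map from the tau functions back to the Toda fields depend on conventions not reproduced here), one expands each tau function in a formal parameter and solves order by order; for a genuine soliton solution the series truncates at finite order.

```latex
% Generic Hirota expansion; the bilinear equations and the map from the tau
% functions \tau_j back to the Toda fields depend on the chosen conventions.
\begin{align*}
  \tau_j &= 1 + \epsilon\,\tau_j^{(1)} + \epsilon^2\,\tau_j^{(2)} + \cdots,\\
  \tau_j^{(1)} &= \delta_j\, e^{\Omega}, \qquad \Omega = \gamma\,(x - v t) + \xi .
\end{align*}
% Substituting into the bilinear (Hirota) form of the equations of motion and
% collecting powers of \epsilon yields a recursive system for the \tau_j^{(n)};
% for an N-soliton solution the expansion truncates at finite order.
```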

Relevance:

20.00%

Publisher:

Abstract:

Factors influencing the location decisions of offices include traffic, accessibility, employment conditions, economic prospects and land-use policies. Hence, tools supporting real-estate managers and urban planners in such multidimensional decisions may be useful. Accordingly, the objective of this study is to develop a GIS-based tool to support firms seeking office accommodation within a given regional or national study area. The tool relies on a matching approach, in which a firm's characteristics (demand) on the one hand, and environmental conditions and available office spaces (supply) on the other, are first analyzed separately, after which a match is sought. That is, a suitability score is obtained for every firm and for every available office space by applying value judgments (satisfaction, utility, etc.). These judgments focus on location aspects and draw on expert knowledge about the location decisions of firms/organizations with respect to office accommodation, acquired from a group of real-estate advisers; this knowledge is stored in decision tables, which constitute the core of the model. Apart from delineating choice sets for any firm seeking a location, the tool supports two additional types of queries. Firstly, it supports the more generic problem of optimally allocating firms to a set of vacant locations. Secondly, it allows users to find firms that meet the characteristics of any given location. Moreover, as a GIS-based tool, its results can be visualized using GIS features, which in turn facilitates several types of analyses.
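
As an illustrative sketch of the matching step (not the tool's actual decision tables, attributes or weights), the Python code below scores a few hypothetical office spaces against a firm profile with simple rule-like value judgments and returns a ranked choice set.

```python
# Illustrative matching sketch: firm profile (demand) vs. office spaces (supply).
# The attributes, rules and weights are hypothetical stand-ins for the expert
# decision tables described in the abstract.

firm = {"employees": 40, "needs_transit": True, "max_rent": 180}   # rent in EUR/m2/yr

offices = [
    {"id": "A", "floor_area_m2": 900, "transit_access": True,  "rent": 160},
    {"id": "B", "floor_area_m2": 300, "transit_access": False, "rent": 120},
    {"id": "C", "floor_area_m2": 700, "transit_access": True,  "rent": 200},
]

def suitability(firm, office, m2_per_employee=20):
    """Combine simple rule-based judgments into a 0..1 suitability score."""
    score = 0.0
    if office["floor_area_m2"] >= firm["employees"] * m2_per_employee:
        score += 0.4                                    # enough floor space
    if office["transit_access"] or not firm["needs_transit"]:
        score += 0.3                                    # accessibility condition met
    if office["rent"] <= firm["max_rent"]:
        score += 0.3                                    # within budget
    return score

choice_set = sorted(offices, key=lambda o: suitability(firm, o), reverse=True)
for o in choice_set:
    print(o["id"], suitability(firm, o))
```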

Relevance:

20.00%

Publisher:

Abstract:

Background: Large gene expression studies, such as those conducted using DNA arrays, often provide millions of different pieces of data. To address the problem of analyzing such data, we describe a statistical method, which we have called ‘gene shaving’. The method identifies subsets of genes with coherent expression patterns and large variation across conditions. Gene shaving differs from hierarchical clustering and other widely used methods for analyzing gene expression studies in that genes may belong to more than one cluster, and the clustering may be supervised by an outcome measure. The technique can be ‘unsupervised’, that is, the genes and samples are treated as unlabeled, or partially or fully supervised by using known properties of the genes or samples to assist in finding meaningful groupings. Results: We illustrate the use of the gene shaving method to analyze gene expression measurements made on samples from patients with diffuse large B-cell lymphoma. The method identifies a small cluster of genes whose expression is highly predictive of survival. Conclusions: The gene shaving method is a potentially useful tool for exploration of gene expression data and identification of interesting clusters of genes worth further investigation.
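
A minimal sketch of one 'shaving' iteration, assuming the standard published description of the algorithm (genes in rows, samples in columns): compute the leading principal component of the current gene block and discard the fraction of genes least correlated with it. Cluster-size selection via the gap statistic and supervision by an outcome measure are omitted, and the data here are synthetic.

```python
import numpy as np

def shave_once(X, drop_fraction=0.10):
    """One gene-shaving step: X has genes in rows, samples in columns.

    Returns indices of the genes kept after removing the drop_fraction of genes
    whose expression is least correlated with the leading principal component
    (the 'eigengene') of the current block.
    """
    Xc = X - X.mean(axis=1, keepdims=True)          # center each gene
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    eigengene = vt[0]                               # leading PC across samples
    corr = np.array([abs(np.corrcoef(row, eigengene)[0, 1]) for row in Xc])
    n_keep = max(1, int(np.ceil(len(corr) * (1 - drop_fraction))))
    return np.argsort(corr)[::-1][:n_keep]          # keep the most correlated genes

# Toy usage: repeated shaving produces nested candidate clusters.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                      # 200 hypothetical genes, 12 samples
keep = np.arange(X.shape[0])
while len(keep) > 10:
    keep = keep[shave_once(X[keep])]
print("genes left in the smallest nested cluster:", len(keep))
```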

Relevance:

20.00%

Publisher:

Abstract:

Hundreds of terabytes of CMS (Compact Muon Solenoid) data are accumulated for storage day by day at the University of Nebraska-Lincoln, one of the eight US CMS Tier-2 sites. Managing this data includes retaining useful CMS data sets and clearing storage space for newly arriving data by deleting less useful data sets. This important task is currently done manually and requires a large amount of time. The overall objective of this study was to develop a methodology to help identify the data sets to be deleted when storage space is required. CMS data are stored using HDFS (Hadoop Distributed File System), and HDFS logs give information about file access operations. Hadoop MapReduce was used to feed the information in these logs to Support Vector Machines (SVMs), a machine learning algorithm applicable to classification and regression, which is used in this thesis to develop a classifier. The time needed to classify data sets with this method depends on the size of the input HDFS log file, since the MapReduce algorithms used here run in O(n) time. The SVM methodology produces a list of data sets for deletion along with their respective sizes. It was also compared with a heuristic called Retention Cost, calculated from the size of a data set and the time since its last access, which helps decide how useful a data set is. The accuracies of both approaches were compared by calculating the percentage of data sets predicted for deletion that were in fact accessed at a later time. Our SVM methodology proved to be more accurate than the Retention Cost heuristic, and it could be used to solve similar problems involving other large data sets.
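
As a hedged sketch of the classification stage only (the MapReduce aggregation of HDFS logs is assumed to have already produced per-data-set features, and the feature names, labels and exact Retention Cost formula are assumptions rather than the thesis' definitions), the Python code below fits a scikit-learn SVM and compares it with a size-times-staleness heuristic.

```python
import numpy as np
from sklearn.svm import SVC

# Assumed per-data-set features produced by an earlier (MapReduce) aggregation of
# HDFS access logs: [size_TB, days_since_last_access, accesses_in_last_90_days].
X = np.array([
    [12.0, 200,  1],
    [ 0.5,   3, 40],
    [ 8.0, 150,  2],
    [ 1.2,  10, 25],
    [20.0, 300,  0],
    [ 0.8,   5, 60],
])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = candidate for deletion (assumed labels)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

candidate = np.array([[5.0, 90, 3]])          # a hypothetical data set
print("SVM says delete:", bool(clf.predict(candidate)[0]))

# One plausible reading of the Retention Cost heuristic (size times staleness);
# the thesis' exact definition may differ.
def retention_cost(size_tb, days_since_last_access):
    return size_tb * days_since_last_access

ranked = sorted(range(len(X)), key=lambda i: retention_cost(X[i, 0], X[i, 1]), reverse=True)
print("heuristic deletion order (highest retention cost first):", ranked)
```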