989 results for Incremental Launching Method
Resumo:
This dissertation addresses the subject of bridge structures, focusing on one construction process in particular: the Incremental Launching Method. It begins with a general overview of bridge structures, describing them and giving a historical summary of the materials used in their construction. The existing deck types and the structural typologies of bridges are then presented, along with the construction processes and equipment used to build them. A more in-depth treatment of the construction process that is the focus of this dissertation follows, covering both practical and design issues. A practical application is also developed, in the form of a Preliminary Design of a solution for a bridge executed with this construction process. The dissertation closes by indicating important aspects of the monitoring of bridges built by this process, and by presenting the conclusions reached and possible future developments.
Resumo:
To enhance the global search ability of population-based incremental learning (PBIL) methods, it is proposed that multiple probability vectors be incorporated into available PBIL algorithms. The strategy for updating those probability vectors and the negative-learning and mutation operators are redefined correspondingly. Moreover, to strike the best trade-off between exploration and exploitation, an adaptive updating strategy for the learning rate is designed. Numerical examples are reported to demonstrate the pros and cons of the newly implemented algorithm.
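The single-vector PBIL baseline that the multi-vector extension builds on can be sketched as follows. This is a minimal illustration on the toy OneMax problem (maximise the number of 1-bits); the multiple probability vectors, negative learning, and adaptive learning rate proposed in the paper are not reproduced, and all names and parameter values are illustrative.

```python
import random

def pbil_onemax(n_bits=20, pop_size=30, lr=0.1, mut_p=0.02, mut_shift=0.05,
                gens=200, seed=1):
    """Minimal single-vector PBIL on OneMax (maximise the number of 1s)."""
    rng = random.Random(seed)
    p = [0.5] * n_bits                      # probability vector, one entry per bit
    best, best_fit = None, -1
    for _ in range(gens):
        # sample a population from the probability vector
        pop = [[1 if rng.random() < pi else 0 for pi in p] for _ in range(pop_size)]
        pop.sort(key=sum, reverse=True)
        elite = pop[0]
        if sum(elite) > best_fit:
            best, best_fit = elite, sum(elite)
        # learn: nudge the probability vector toward the best sample
        p = [pi * (1 - lr) + lr * bi for pi, bi in zip(p, elite)]
        # mutation operator on the probability vector itself
        p = [pi * (1 - mut_shift) + mut_shift * rng.random()
             if rng.random() < mut_p else pi
             for pi in p]
    return best, best_fit
```

Each generation, samples are drawn from the probability vector, the vector is nudged toward the best sample, and a small mutation on the vector keeps it from converging prematurely.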
Resumo:
To enhance the global search ability of Population Based Incremental Learning (PBIL) methods, it is proposed that multiple probability vectors be included in available PBIL algorithms. As a result, the strategy for updating those probability vectors and the negative learning and mutation operators are redefined as reported. Numerical examples are reported to demonstrate the pros and cons of the newly implemented algorithm. © 2006 IEEE.
Resumo:
The objective of the study presented in this report was to document the launch of the Iowa River Bridge and to monitor and evaluate the structural performance of the bridge superstructure and substructure during the launch. The Iowa Department of Transportation used an incremental launching method, which is relatively uncommon for steel I-girder bridges, to construct the Iowa River Bridge over an environmentally sensitive river valley in central Iowa. The bridge was designed as two separate roadways consisting of four steel plate girders each that are approximately 11 ft deep and span approximately 301 ft each over five spans. The concrete bridge deck was not placed until after both roadways had been launched. One of the most significant monitoring and evaluation observations related to the superstructure was that the bottom flange (and associated web region) was subjected to extremely large stresses during the crossing of launch rollers. Regarding the substructure performance, the column stresses did not exceed reasonable design limits during the daylong launches. The scope of the study did not allow adequate quantification of the measured applied launch forces at the piers; proposed future research should provide an opportunity to address this. The overall experimental performance of the bridge during the launch was compared with the predicted design performance. In general, the substructure design, girder contact stress, and total launching force assumptions correlated well with the experimental results. The design assumptions for total axial force in cross-frame members, on the other hand, differed from the experimental results by as much as 300%.
Resumo:
Evolutionary-based algorithms play an important role in finding solutions to many problems that are not solved by classical methods, particularly for those cases where solutions lie within extreme non-convex multidimensional spaces. The intrinsic parallel structure of evolutionary algorithms is amenable to the simultaneous testing of multiple solutions; this has proved essential to the circumvention of local optima, but such robustness comes with high computational overhead, though custom digital processor use may reduce this cost. This paper presents a new implementation of an old, and almost forgotten, evolutionary algorithm: the population-based incremental learning method. We show that the structure of this algorithm is well suited to implementation within programmable logic, as compared with contemporary genetic algorithms. Further, the inherent concurrency of our FPGA implementation facilitates the integration and testing of micro-populations.
Resumo:
Most existing color-based tracking algorithms use the statistical color information of the object as the tracking cue, without maintaining the spatial structure within a single chromatic image. Recently, research on multilinear algebra has provided the possibility of preserving the spatial structural relationships in a representation of image ensembles. In this paper, a third-order color tensor is constructed to represent the object to be tracked. Considering the influence of environmental change on tracking, biased discriminant analysis (BDA) is extended to tensor biased discriminant analysis (TBDA) for distinguishing the object from the background. At the same time, an incremental scheme for TBDA is developed for online learning of the tensor biased discriminant subspace, which can be used to adapt to appearance variations of both the object and the background. The experimental results show that the proposed method can precisely track objects undergoing large pose, scale, and lighting changes, as well as partial occlusion. © 2009 Elsevier B.V.
Resumo:
This study sought to analyse the behaviour of the average spinal posture using a novel investigative procedure in a maximal incremental effort test performed on a treadmill. Spine motion was collected via stereo-photogrammetric analysis in thirteen amateur athletes. At each time percentage of the gait cycle, the reconstructed spine points were projected onto the sagittal and frontal planes of the trunk. On each plane, a polynomial was fitted to the data, and the two-dimensional geometric curvature along the longitudinal axis of the trunk was calculated to quantify the geometric shape of the spine. The average posture presented over the gait cycle defined the spine's Neutral Curve. This method enabled the lateral deviations, lordosis, and kyphosis of the spine to be quantified noninvasively and in detail. The similarity between any two volunteers was at most 19% on the sagittal plane and 13% on the frontal plane (p<0.01). The data collected in this study can be considered preliminary evidence that there are subject-specific characteristics in spinal curvatures during running. Changes induced by increases in speed were not sufficient for the Neutral Curve to lose its individual characteristics; instead it behaved like a postural signature. The data showed the descriptive capability of a new method for analysing spinal posture during locomotion; however, additional studies with larger sample sizes are necessary to extract more general information from this novel methodology.
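The two-dimensional geometric curvature used to quantify the shape of a plane curve can be sketched as below. This uses the standard planar-curve formula κ(z) = y″(z) / (1 + y′(z)²)^(3/2) applied to a polynomial given by its coefficients; the least-squares fitting step and the spine data themselves are not reproduced, and the example inputs are illustrative.

```python
def polyval(coeffs, x):
    """Evaluate a polynomial with coefficients [c0, c1, c2, ...] as c0 + c1*x + c2*x^2 + ..."""
    return sum(c * x ** i for i, c in enumerate(coeffs))

def polyder(coeffs):
    """Coefficients of the derivative polynomial."""
    return [i * c for i, c in enumerate(coeffs)][1:]

def geometric_curvature(coeffs, z):
    """Signed 2D geometric curvature kappa(z) = y'' / (1 + y'^2)^(3/2)
    of the curve y(z) described by the polynomial coefficients."""
    d1 = polyval(polyder(coeffs), z)
    d2 = polyval(polyder(polyder(coeffs)), z)
    return d2 / (1.0 + d1 * d1) ** 1.5
```

For the parabola y = z² the curvature at the apex is 2, and for any straight line it is zero, which gives a quick sanity check of the implementation.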
Resumo:
Universidade Estadual de Campinas. Faculdade de Educação Física
Distributed Estimation Over an Adaptive Incremental Network Based on the Affine Projection Algorithm
Resumo:
We study the problem of distributed estimation based on the affine projection algorithm (APA), which is developed from Newton's method for minimizing a cost function. The proposed solution is formulated to ameliorate the limited convergence properties of least-mean-square (LMS) type distributed adaptive filters with colored inputs. The analysis of transient and steady-state performances at each individual node within the network is developed by using a weighted spatial-temporal energy conservation relation and confirmed by computer simulations. The simulation results also verify that the proposed algorithm provides not only a faster convergence rate but also an improved steady-state performance as compared to an LMS-based scheme. In addition, the new approach attains an acceptable misadjustment performance with lower computational and memory cost, provided the number of regressor vectors and filter length parameters are appropriately chosen, as compared to a distributed recursive-least-squares (RLS) based method.
Resumo:
Incremental parsing has long been recognized as a technique of great utility in the construction of language-based editors, and correspondingly, the area currently enjoys a mature theory. Unfortunately, many practical considerations have been largely overlooked in previously published algorithms. Many user requirements for an editing system necessarily impact on the design of its incremental parser, but most approaches focus only on one: response time. This paper details an incremental parser based on LR parsing techniques and designed for use in a modeless syntax recognition editor. The nature of this editor places significant demands on the structure and quality of the document representation it uses, and hence, on the parser. The strategy presented here is novel in that both the parser and the representation it constructs are tolerant of the inevitable and frequent syntax errors that arise during editing. This is achieved by a method that differs from conventional error repair techniques, and that is more appropriate for use in an interactive context. Furthermore, the parser aims to minimize disturbance to this representation, not only to ensure other system components can operate incrementally, but also to avoid unfortunate consequences for certain user-oriented services. The algorithm is augmented with a limited form of predictive tree-building, and a technique is presented for the determination of valid symbols for menu-based insertion. Copyright (C) 2001 John Wiley & Sons, Ltd.
Resumo:
Discrete data representations are necessary, or at least convenient, in many machine learning problems. While feature selection (FS) techniques aim at finding relevant subsets of features, the goal of feature discretization (FD) is to find concise (quantized) data representations, adequate for the learning task at hand. In this paper, we propose two incremental methods for FD. The first method belongs to the filter family, in which the quality of the discretization is assessed by a (supervised or unsupervised) relevance criterion. The second method is a wrapper, where discretized features are assessed using a classifier. Both methods can be coupled with any static (unsupervised or supervised) discretization procedure and can be used to perform FS as pre-processing or post-processing stages. The proposed methods attain efficient representations suitable for binary and multi-class problems with different types of data, being competitive with existing methods. Moreover, using well-known FS methods with the features discretized by our techniques leads to better accuracy than with the features discretized by other methods or with the original features. (C) 2013 Elsevier B.V. All rights reserved.
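The abstract does not specify the discretisation scheme or the relevance criteria, so the sketch below substitutes a simple stand-in: equal-width bins are refined incrementally until the relative reduction in mean squared quantisation error (an unsupervised, filter-style criterion) falls below a tolerance. All names and thresholds are illustrative, not the paper's method.

```python
def incremental_discretize(values, max_bins=32, tol=0.01):
    """Incremental filter-style discretisation sketch: keep doubling the number
    of equal-width bins until the relative drop in mean squared quantisation
    error is below tol. Returns (n_bins, codes)."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0

    def quantize(k):
        codes = [min(int((v - lo) / span * k), k - 1) for v in values]
        centers = [lo + (c + 0.5) * span / k for c in codes]
        mse = sum((v - c) ** 2 for v, c in zip(values, centers)) / len(values)
        return codes, mse

    k = 2
    codes, err = quantize(k)
    while k < max_bins:
        new_codes, new_err = quantize(2 * k)
        if err - new_err < tol * err:   # stand-in relevance criterion stopped improving
            break
        k, codes, err = 2 * k, new_codes, new_err
    return k, codes
```

On roughly uniform data every doubling keeps paying off, so the sketch runs up to `max_bins`; on data concentrated in a few clusters it stops early with a coarser, more concise representation.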
Resumo:
The purpose of our study was to evaluate the accuracy of dynamic incremental bolus-enhanced conventional CT (DICT) with intravenous contrast administration, early phase, in the diagnosis of malignancy of focal liver lesions. A total of 122 lesions were selected in 74 patients according to the following criteria: a lesion diameter of 10 mm or more; fewer than six lesions per study, except in multiple angiomatosis; and the existence of a valid criterion for definitive diagnosis. Lesions were categorized into seven levels of diagnostic confidence of malignancy and compared with the definitive diagnosis for receiver-operating-characteristic (ROC) curve analysis and to determine the sensitivity and specificity of the technique. Forty-six and 70 lesions were correctly diagnosed as malignant and benign, respectively; there were 2 false-positive and 4 false-negative diagnoses of malignancy, and the sensitivity and specificity obtained were 92% and 97%, respectively. Early-phase DICT was confirmed as a highly accurate method for the characterization and diagnosis of malignancy of focal liver lesions, requiring optimal technical performance and judicious analysis of the existing semiological data.
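The reported figures can be checked directly from the stated counts (46 true positives, 4 false negatives, 70 true negatives, 2 false positives); a small sketch:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Counts reported in the abstract
sens, spec = sensitivity_specificity(tp=46, fn=4, tn=70, fp=2)
```

This reproduces the reported 92% sensitivity (46/50) and, after rounding, the 97% specificity (70/72 ≈ 97.2%).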
Resumo:
The present work presents a new method for activity extraction and reporting from video based on the aggregation of fuzzy relations. Trajectory clustering is first employed, mainly to discover the points of entry and exit of mobile objects appearing in the scene. In a second step, proximity relations between the resulting clusters of detected mobile objects and contextual elements of the scene are modeled using fuzzy relations. These can then be aggregated using standard soft-computing algebra. A clustering algorithm based on the transitive closure of the fuzzy relations builds the structure of the scene and characterises the different ongoing activities. Discovered activity zones can be reported as activity maps at different granularities thanks to the analysis of the transitive closure matrix. Taking advantage of the soft-relation properties, activity zones and related activities can be labeled in a more human-like language. We present results obtained on real videos of apron monitoring at Toulouse airport in France.
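The max-min transitive closure at the core of such a clustering step can be sketched as follows. The trajectory clustering, proximity modelling, and scene data are not reproduced; the example relation is illustrative.

```python
def maxmin_compose(A, B):
    """Max-min composition of two fuzzy relations on the same element set."""
    n = len(A)
    return [[max(min(A[i][k], B[k][j]) for k in range(n)) for j in range(n)]
            for i in range(n)]

def transitive_closure(R):
    """Max-min transitive closure: iterate R <- max(R, R o R) to a fixpoint."""
    while True:
        C = maxmin_compose(R, R)
        new = [[max(R[i][j], C[i][j]) for j in range(len(R))] for i in range(len(R))]
        if new == R:
            return R
        R = new

# Illustrative reflexive, symmetric proximity relation over three elements
R = [[1.0, 0.8, 0.1],
     [0.8, 1.0, 0.6],
     [0.1, 0.6, 1.0]]
T = transitive_closure(R)   # T[0][2] is raised from 0.1 to 0.6 via element 1
```

An α-cut of `T` then partitions the elements into clusters; at α = 0.7 the example groups elements 0 and 1 together and leaves element 2 apart, and lowering α yields coarser activity maps.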
Resumo:
The maximal oxygen uptake (VO2max) is the maximal quantity of energy that can be produced by aerobic metabolism per unit of time. It can be determined directly, or indirectly by predictive equations. The objective of this study was to develop a specific predictive equation to determine the VO2max of boys aged 10-16 years. Forty-two boys underwent a treadmill running ergospirometric test, with the initial velocity set at 9 km/h, until voluntary exhaustion. Multiple linear regression made it possible to develop the following equation for the indirect determination of VO2max: VO2max (ml/min) = -1574.06 + (141.38 × Vpeak) + (48.34 × body mass), with a standard error of estimate of 191.5 ml/min (4.10 ml/kg/min) and a coefficient of determination of 0.934. We suggest that this formula is appropriate for predicting VO2max in this population.
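The reported equation can be applied directly; the peak velocity and body mass below are illustrative values, not data from the study.

```python
def vo2max_ml_min(v_peak_kmh, body_mass_kg):
    """Predictive equation reported in the abstract:
    VO2max (ml/min) = -1574.06 + 141.38 * Vpeak + 48.34 * body mass."""
    return -1574.06 + 141.38 * v_peak_kmh + 48.34 * body_mass_kg

# Illustrative case: a 50 kg boy reaching a peak velocity of 12 km/h
absolute = vo2max_ml_min(12, 50)   # ml/min
relative = absolute / 50           # ml/kg/min
```

For this illustrative case the equation gives about 2539.5 ml/min, i.e. roughly 50.8 ml/kg/min.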