860 results for "comparison method"
Abstract:
The purpose of this study was to compare a number of state-of-the-art methods in airborne laser scanning (ALS) remote sensing with regards to their capacity to describe tree size inequality and other indicators related to forest structure. The indicators chosen were based on the analysis of the Lorenz curve: Gini coefficient (GC), Lorenz asymmetry (LA), and the proportions of basal area (BALM) and stem density (NSLM) stocked above the mean quadratic diameter. Each method belonged to one of these estimation strategies: (A) estimating indicators directly; (B) estimating the whole Lorenz curve; or (C) estimating a complete tree list. Across these strategies, the most popular statistical methods for the area-based approach (ABA) were used: regression, random forest (RF), and nearest neighbour imputation. The latter included distance metrics based on either RF (NN–RF) or most similar neighbour (MSN). In the case of tree list estimation, methods based on individual tree detection (ITD) and semi-ITD, both combined with MSN imputation, were also studied. The most accurate method was direct estimation by best subset regression, which obtained the lowest cross-validated coefficients of variation of the root mean squared error, CV(RMSE), for most indicators: GC (16.80%), LA (8.76%), BALM (8.80%) and NSLM (14.60%). Similar figures [CV(RMSE) 16.09%, 10.49%, 10.93% and 14.07%, respectively] were obtained by MSN imputation of tree lists by ABA, a method that also showed a number of additional advantages, such as better distributing the residual variance along the predictive range. In light of our results, ITD approaches may be clearly inferior to ABA with regards to describing the structural properties related to tree size inequality in forested areas.
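The Lorenz-curve indicators named above are straightforward to compute from a tree list. A minimal sketch of the Gini coefficient follows, using the standard sample formula on sorted tree sizes; the function name and the use of basal areas as the size variable are illustrative choices, not taken from the paper:

```python
import numpy as np

def gini_coefficient(sizes):
    """Gini coefficient of size inequality: twice the area between the
    Lorenz curve of the sorted sizes (e.g. tree basal areas) and the
    1:1 equality line."""
    x = np.sort(np.asarray(sizes, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    # Standard sample formula, equivalent to the Lorenz-curve area.
    return float(2.0 * np.sum(ranks * x) / (n * x.sum()) - (n + 1) / n)
```

Equal sizes give GC = 0, while a single tree holding all the basal area approaches the sample maximum (n-1)/n.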
Abstract:
The aim of this study was to compare the race characteristics of the start and turn segments of national and regional level swimmers. In the study, 100- and 200-m events were analysed during the finals session of the Open Comunidad de Madrid (Spain) tournament. The "individualized-distance" method with a two-dimensional direct linear transformation algorithm was used to perform the race analyses. National level swimmers obtained faster velocities in all race segments and stroke comparisons, although significant inter-level differences in start velocity were only obtained in half (8 out of 16) of the analysed events. Higher level swimmers also travelled longer start and turn distances, but only in the race segments where the gain of speed was high. This was observed in the turn segments, in the backstroke and butterfly strokes, and during the 200-m breaststroke event, but not in any of the freestyle events. Time improvements due to the appropriate extension of the underwater subsections appeared to be critical for the end race result and should be carefully evaluated by the "individualized-distance" method.
Abstract:
A series of motion compensation algorithms is run on the challenge data, including methods that optimize only a linear transformation, only a non-linear transformation, or both (first a linear and then a non-linear transformation). Methods that optimize a linear transformation run an initial segmentation of the area of interest around the left myocardium by means of an independent component analysis (ICA) (ICA-*). Methods that optimize non-linear transformations may run directly on the full images, or after linear registration. The non-linear motion compensation approaches applied include one method that only registers pairs of images in temporal succession (SERIAL), one method that registers all images to one common reference (AllToOne), one method that was designed to exploit quasi-periodicity in image data acquired during free breathing and was adapted to also be usable with image data acquired with an initial breath-hold (QUASI-P), a method that uses ICA to identify the motion and eliminate it (ICA-SP), and a method that relies on the estimation of a pseudo ground truth (PG) to guide the motion compensation.
Abstract:
This paper presents a dynamic LM adaptation based on the topic identified in a speech segment. We use LSA and the topic labels given in the training dataset to obtain and use the topic models. We propose a dynamic language model adaptation to improve recognition performance in a two-stage AST system. The final stage makes use of topic identification with two variants: the first uses just the most probable topic, and the other depends on the relative distances of the topics that have been identified. We perform the adaptation of the LM as a linear interpolation between a background model and topic-based LMs. The interpolation weight is dynamically adapted according to different parameters. The proposed method is evaluated on the Spanish partition of the EPPS speech database. We achieved a relative reduction in WER of 11.13% over the baseline system, which uses a single background LM.
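The linear interpolation of the background and topic LMs described above can be sketched for the unigram case; this is a simplification, since the paper's models and the dynamic choice of the interpolation weight are richer than a static unigram mixture:

```python
def interpolate_lm(p_background, p_topic, lam):
    """Mixture LM: P(w) = lam * P_topic(w) + (1 - lam) * P_background(w).
    Both inputs are dicts mapping words to probabilities; words missing
    from one model get probability 0 there (no smoothing)."""
    vocab = set(p_background) | set(p_topic)
    return {w: lam * p_topic.get(w, 0.0) + (1.0 - lam) * p_background.get(w, 0.0)
            for w in vocab}
```

If both input distributions sum to 1, the mixture does as well, for any weight in [0, 1].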
Abstract:
Comparison of explicit and implicit time integration schemes for the simulation of blood flow and its interaction with the arterial wall. There are two major strategies in FSI coupling techniques: implicit and explicit. The general difference between these methodologies is how many times data are exchanged between the fluid and solid domains at each FSI time-step. In both coupling strategies, the pressure values coming from the fluid domain calculations at each time-step are exported to the solid domain, and the solid domain is then analyzed with these imported forces. In contrast to explicit coupling, in the implicit approach the fluid and solid domain data are exchanged several times until convergence is achieved. Although this method may improve numerical stability, it increases the computational cost due to the extra data exchanges. In cardiovascular simulations, depending on the analysis objectives, one may choose an explicit or implicit approach. In the current work, the advantage of an explicit coupling strategy is highlighted for the simulation of pulsatile blood flow in elastic arteries.
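The difference between the two coupling strategies can be sketched with a toy one-dimensional model: explicit coupling exchanges pressure and displacement once per time-step, while implicit coupling sub-iterates until the wall displacement converges. The solver callables below are placeholders, not the hemodynamic models of the paper:

```python
def explicit_fsi_step(fluid_solve, solid_solve, displacement):
    """One FSI time-step with explicit coupling: a single data exchange."""
    pressure = fluid_solve(displacement)
    return solid_solve(pressure)

def implicit_fsi_step(fluid_solve, solid_solve, displacement,
                      tol=1e-10, max_iter=100):
    """One FSI time-step with implicit coupling: pressure and displacement
    are exchanged repeatedly until the displacement stops changing."""
    d = displacement
    for _ in range(max_iter):
        pressure = fluid_solve(d)
        d_new = solid_solve(pressure)
        if abs(d_new - d) < tol:
            return d_new
        d = d_new
    return d
```

With a contractive toy pair such as `fluid_solve = lambda d: 1.0 - 0.5 * d` and `solid_solve = lambda p: p / 2.0`, the implicit step converges to the fixed point 0.4, while the explicit step returns the one-exchange estimate 0.5, illustrating the stability/cost trade-off described above.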
Abstract:
The purpose of the research work resulting from various studies undertaken at the CEDEX, as summarized in this article, is to make a comparative analysis of methods for calculating overtopping rates developed by different authors. To this end, existing formulae for estimating the overtopping rate on rubble mound and vertical breakwaters were first summarised and analysed. The formulae were then compared using the results obtained in a series of hydraulic model tests at the CEDEX (the Center of Studies of Ports and Coasts of the CEDEX, Madrid, Spain). To complete this research, a calculation method based on neural network theory, developed in the European CLASH Project, was applied to a series of sloping breakwater tests. The results obtained in the Ferrol, Ciervana and Alicante breakwater tests are presented here.
Abstract:
This paper presents a new selective and non-directional protection method to detect ground faults in isolated-neutral power systems. The proposed method is based on the comparison of the rms values of the residual currents of all the lines connected to a bus, and it is able to determine the line with the ground defect. Additionally, this method can be used for the protection of secondary substations. This protection method avoids the unwanted trips produced by wrong settings or wiring errors, which sometimes occur in existing directional ground fault protections. The new method has been validated through computer simulations and experimental laboratory tests.
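A minimal sketch of the rms comparison follows. The decision rule (picking the line with the largest rms residual current, as is common in isolated-neutral systems where the faulted line carries the capacitive contribution of the healthy ones) is an assumption, since the abstract does not state the paper's exact criterion:

```python
import numpy as np

def rms(samples):
    """Root-mean-square value of a sampled waveform."""
    x = np.asarray(samples, dtype=float)
    return float(np.sqrt(np.mean(x * x)))

def faulted_line(residual_currents):
    """residual_currents: mapping line name -> sampled residual current.
    Returns the line with the largest rms residual current (assumed
    fault criterion, for illustration only)."""
    return max(residual_currents, key=lambda line: rms(residual_currents[line]))
```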
Abstract:
The location of ground faults in railway electric lines in 2 × 25 kV railway power supply systems is a difficult task. In both 1 × 25 kV and transmission power systems it is common practice to use distance protection relays to clear ground faults and localize their positions. However, in the particular case of the 2 × 25 kV system, due to the widespread use of autotransformers, the relation between the distance and the impedance seen by the distance protection relays is not linear, and therefore the location is not accurate enough. This paper presents a simple and economical method to identify the subsection between autotransformers and the conductor (catenary or feeder) where the ground fault is occurring. The method is based on the comparison of the angle between the current and the voltage of the positive terminal at each autotransformer. Consequently, after the identification of the subsection and the conductor with the ground defect, only the subsection where the ground fault is present will be quickly removed from service, with minimum effect on rail traffic. The method has been validated through computer simulations and laboratory tests with positive results.
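The angle comparison can be sketched with complex phasors. The decision rule below (flagging subsections whose angle magnitude exceeds a threshold) is hypothetical, since the abstract does not give the paper's exact criterion; both function names are illustrative:

```python
import cmath
import math

def relative_angle_deg(voltage, current):
    """Angle (degrees) of a current phasor relative to the voltage phasor
    measured at an autotransformer's positive terminal."""
    return math.degrees(cmath.phase(current / voltage))

def faulted_subsections(angles_by_subsection, threshold_deg=90.0):
    """Hypothetical decision rule: flag every subsection whose
    current-to-voltage angle magnitude exceeds the threshold."""
    return [name for name, ang in angles_by_subsection.items()
            if abs(ang) > threshold_deg]
```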
Abstract:
In a Finite Element (FE) analysis of elastic solids several items are usually considered, namely, type and shape of the elements, number of nodes per element, node positions, FE mesh and total number of degrees of freedom (dof), among others. In this paper a method to improve a given FE mesh used for a particular analysis is described. For the improvement criterion different objective functions have been chosen (total potential energy and average quadratic error), and the number of nodes and dof of the new mesh remain constant and equal to those of the initial FE mesh. In order to find the mesh producing the minimum of the selected objective function, the steepest descent gradient technique has been applied as the optimization algorithm. However, this efficient technique has the drawback of demanding large computational power. Extensive application of this methodology to different 2-D elasticity problems leads to the conclusion that isometric isostatic meshes (ii-meshes) produce better results than the standard regular initial meshes used in practice. This conclusion seems to be independent of the objective function used for comparison. These ii-meshes are obtained by placing FE nodes along the isostatic lines, i.e. curves tangent at each point to the principal direction lines of the elastic problem to be solved, and the nodes should be regularly spaced in order to build regular elements. This means ii-meshes are usually obtained by iteration: the elastic analysis is first carried out with the initial FE mesh, the net of isostatic lines is drawn from the results of this analysis, and a first trial ii-mesh is built. This first ii-mesh can be improved, if necessary, by analyzing the problem again and generating the new, improved ii-mesh after the FE analysis. Typically, two tentative ii-meshes are sufficient to produce good FE results from the elastic analysis. Several examples of this procedure are presented.
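The steepest descent step used for the mesh improvement can be sketched generically: the free nodal coordinates are moved along the negative gradient of the chosen objective, here estimated by central finite differences. The objective in the test is a toy quadratic; the paper's FE objective functions (total potential energy, average quadratic error) are not reproduced:

```python
import numpy as np

def steepest_descent(objective, x0, step=0.1, iters=200, h=1e-6):
    """Minimize objective(x) over a vector of free nodal coordinates by
    steepest descent with a central finite-difference gradient."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        grad = np.empty_like(x)
        for i in range(x.size):
            xp = x.copy(); xp[i] += h
            xm = x.copy(); xm[i] -= h
            grad[i] = (objective(xp) - objective(xm)) / (2.0 * h)
        x -= step * grad
    return x
```

The per-iteration cost (one objective evaluation per coordinate and direction) is what makes the technique expensive for large meshes, as the abstract notes.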
Abstract:
Is the mechanical unraveling of protein domains by atomic force microscopy (AFM) just a technological feat or a true measurement of their unfolding? By engineering a protein made of tandem repeats of identical Ig modules, we were able to get explicit AFM data on the unfolding rate of a single protein domain that can be accurately extrapolated to zero force. We compare this with chemical unfolding rates for untethered modules extrapolated to 0 M denaturant. The unfolding rates obtained by the two methods are the same. Furthermore, the transition state for unfolding appears at the same position on the folding pathway when assessed by either method. These results indicate that mechanical unfolding of a single protein by AFM does indeed reflect the same event that is observed in traditional unfolding experiments. The way is now open for the extensive use of AFM to measure folding reactions at the single-molecule level. Single-molecule AFM recordings have the added advantage that they define the reaction coordinate and expose rare unfolding events that cannot be observed in the absence of chemical denaturants.
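Extrapolating AFM unfolding rates to zero force is commonly done with the Bell model, ln k(F) = ln k0 + F·Δx / kT, so a least-squares line through (F, ln k) data recovers k0 (the zero-force rate) and Δx (the distance to the transition state). This is a standard sketch, not the paper's exact fitting procedure, and the synthetic numbers in the test are illustrative:

```python
import math

def bell_fit(forces_pN, rates_per_s, kT_pN_nm=4.1):
    """Least-squares fit of ln k(F) = ln k0 + F * dx / kT (Bell model).
    Returns (k0, dx_nm): the unfolding rate extrapolated to zero force
    and the distance to the transition state in nm."""
    n = len(forces_pN)
    ys = [math.log(k) for k in rates_per_s]
    xbar = sum(forces_pN) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(forces_pN, ys))
             / sum((x - xbar) ** 2 for x in forces_pN))
    return math.exp(ybar - slope * xbar), slope * kT_pN_nm
```

The intercept of the log-linear fit is what gets compared with the chemical unfolding rate extrapolated to 0 M denaturant.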
Abstract:
This paper decomposes the conventional measure of selection bias in observational studies into three components. The first two components are due to differences in the distributions of characteristics between participant and nonparticipant (comparison) group members: the first arises from differences in the supports, and the second from differences in densities over the region of common support. The third component arises from selection bias precisely defined. Using data from a recent social experiment, we find that the component due to selection bias, precisely defined, is smaller than the first two components. However, selection bias still represents a substantial fraction of the experimental impact estimate. The empirical performance of matching methods of program evaluation is also examined. We find that matching based on the propensity score eliminates some but not all of the measured selection bias, with the remaining bias still a substantial fraction of the estimated impact. We find that the support of the distribution of propensity scores for the comparison group is typically only a small portion of the support for the participant group. For values outside the common support, it is impossible to reliably estimate the effect of program participation using matching methods. If the impact of participation depends on the propensity score, as we find in our data, the failure of the common support condition severely limits matching compared with random assignment as an evaluation estimator.
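Nearest-neighbour matching on the propensity score, restricted to the region of common support discussed above, can be sketched as follows; this is a minimal illustration (the paper's matching estimators are more elaborate), with units outside the common support dropped because their effects cannot be reliably estimated:

```python
def match_on_propensity(treated, comparison):
    """Nearest-neighbour matching on propensity scores within the region
    of common support. `treated` and `comparison` are lists of scores;
    returns {treated_index: comparison_index} for treated units inside
    the common support, dropping the rest."""
    lo = max(min(treated), min(comparison))
    hi = min(max(treated), max(comparison))
    matches = {}
    for i, p in enumerate(treated):
        if lo <= p <= hi:
            matches[i] = min(range(len(comparison)),
                             key=lambda j: abs(comparison[j] - p))
    return matches
```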
Abstract:
We describe a genome-wide characterization of mRNA transcript levels in yeast grown on the fatty acid oleate, determined using Serial Analysis of Gene Expression (SAGE). Comparison of this SAGE library with that reported for glucose-grown cells revealed the dramatic adaptive response of yeast to a change in carbon source. A major fraction (>20%) of the 15,000 mRNA molecules in a yeast cell comprised differentially expressed transcripts, which were derived from only 2% of the total number of ∼6300 yeast genes. Most of the mRNAs that were differentially expressed code for enzymes or for other proteins participating in metabolism (e.g., metabolite transporters). In oleate-grown cells, this was exemplified by the huge increase of mRNAs encoding the peroxisomal β-oxidation enzymes required for degradation of fatty acids. The data provide evidence for the existence of redox shuttles across organellar membranes that involve peroxisomal, cytoplasmic, and mitochondrial enzymes. We also analyzed the mRNA profile of a mutant strain with deletions of the PIP2 and OAF1 genes, encoding transcription factors required for induction of genes encoding peroxisomal proteins. Induction of genes under the immediate control of these factors was abolished; other genes were up-regulated, indicating an adaptive response to the changed metabolism imposed by the genetic impairment. We describe a statistical method for analysis of data obtained by SAGE.
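A generic way to test a tag for differential abundance between two SAGE libraries is a two-proportion z-statistic on its counts. The paper describes its own statistical method, so this is only an illustrative stand-in:

```python
import math

def sage_zscore(count_a, total_a, count_b, total_b):
    """Two-proportion z-statistic for one tag's abundance in two SAGE
    libraries (counts of the tag vs. total tags sequenced per library).
    Positive values mean the tag is relatively more abundant in library A."""
    p1 = count_a / total_a
    p2 = count_b / total_b
    pooled = (count_a + count_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1.0 - pooled) * (1.0 / total_a + 1.0 / total_b))
    return (p1 - p2) / se
```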
Abstract:
A strategy of "sequence scanning" is proposed for rapid acquisition of sequence from clones such as bacteriophage P1 clones, cosmids, or yeast artificial chromosomes. The approach makes use of a special vector, called LambdaScan, that reliably yields subclones with inserts in the size range 8-12 kb. A number of subclones, typically 96 or 192, are chosen at random, and the ends of the inserts are sequenced using vector-specific primers. Then long-range spectrum PCR is used to order and orient the clones. This combination of shotgun and directed sequencing results in a high-resolution physical map suitable for the identification of coding regions or for comparison of sequence organization among genomes. Computer simulations indicate that, for a target clone of 100 kb, the scanning of 192 subclones with sequencing reads as short as 350 bp results in an approximate ratio of 1:2:1 of regions of double-stranded sequence, single-stranded sequence, and gaps. Longer sequencing reads tip the ratio strongly toward increased double-stranded sequence.
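The cited computer simulations can be reproduced in spirit with a small Monte Carlo: draw random subclones of 8-12 kb, mark a 350-bp read at each end (one read per strand), and tally double-stranded, single-stranded, and gap positions. All modelling choices here (uniform insert placement, fixed read length, the seed) are assumptions for illustration:

```python
import random

def scan_simulation(target=100_000, subclones=192, read_len=350,
                    insert_range=(8_000, 12_000), seed=1):
    """Monte Carlo sketch of sequence-scanning coverage: returns the
    number of target positions covered on both strands, on exactly one
    strand, and on neither (gaps)."""
    rng = random.Random(seed)
    top = bytearray(target)     # covered by left-end (top-strand) reads
    bottom = bytearray(target)  # covered by right-end (bottom-strand) reads
    for _ in range(subclones):
        ins_len = rng.randint(*insert_range)
        start = rng.randint(0, target - ins_len)
        for i in range(start, start + read_len):
            top[i] = 1
        for i in range(start + ins_len - read_len, start + ins_len):
            bottom[i] = 1
    double = sum(1 for t, b in zip(top, bottom) if t and b)
    single = sum(1 for t, b in zip(top, bottom) if t != b)
    gap = target - double - single
    return double, single, gap
```

With 192 subclones on a 100-kb target, the double:single:gap tallies come out roughly in the 1:2:1 ratio reported above.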
Abstract:
Objective: To document the course of psychological symptomology, mental health treatment, and unmet psychological needs using caregiver reports in the first 18 months following pediatric brain injury (BI). Method: Participants included 28 children (aged 1-18 years) who were hospitalized at a children's hospital's rehabilitation unit. Caregiver reports of children's psychological symptoms, receipt of mental health treatment, and unmet psychological needs were assessed at 1, 6, 12, and 18 months post-BI. Results: Caregivers reported a general increase in psychological symptoms and receipt of mental health treatment over the 18 months following BI; however, there was a substantial gap between the high rate of reported symptoms and the low rate of reported treatment. Across all four follow-up time points there were substantial unmet psychological needs (at least 60% of the sample). Conclusions: Findings suggest that there are substantial unmet psychological needs among children during the first 18 months after BI. Barriers to mental health treatment for this population need to be addressed.
Abstract:
A parallel algorithm for image noise removal is proposed. The algorithm is based on the peer group concept and uses a fuzzy metric. An optimization study on the use of the CUDA platform to remove impulsive noise with this algorithm is presented. Moreover, an implementation of the algorithm on multi-core platforms using OpenMP is presented. Performance is evaluated in terms of execution time, and the implementations parallelised on multi-core CPUs, on GPUs, and on the combination of both are compared. A performance analysis with large images is conducted in order to identify how many pixels to allocate to the CPU and to the GPU. The observed times show that both devices should be given work, with most of it allocated to the GPU. Results show that parallel implementations of denoising filters on GPUs and multi-core CPUs are very advisable, and they open the door to using such algorithms for real-time processing.
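The peer group idea with a fuzzy metric can be sketched for a grayscale image: a pixel is kept if enough of its 8 neighbours are "peers" under the fuzzy similarity M(a, b) = (min(a, b) + K) / (max(a, b) + K); otherwise it is treated as impulsive noise and replaced by the neighbourhood median. This is a sequential simplification: the paper's contribution is the CUDA/OpenMP parallelisation, and its fuzzy metric and parameters may differ from the ones assumed here:

```python
import numpy as np

def peer_group_filter(img, peers=3, t=0.9, K=1024):
    """Grayscale peer-group sketch: replace a pixel by the median of its
    8-neighbourhood when fewer than `peers` neighbours are similar to it
    under the fuzzy metric M(a,b) = (min(a,b)+K)/(max(a,b)+K).
    Border pixels are left untouched."""
    out = img.astype(float)
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y - 1:y + 2, x - 1:x + 2].astype(float)
            c = win[1, 1]
            neigh = np.delete(win.ravel(), 4)  # the 8 neighbours
            sims = (np.minimum(neigh, c) + K) / (np.maximum(neigh, c) + K)
            if np.sum(sims >= t) < peers:
                out[y, x] = np.median(neigh)
    return out
```

Each output pixel depends only on the input window, so the double loop is embarrassingly parallel, which is what makes GPU and multi-core implementations attractive.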