974 results for Point method
Abstract:
An automated method for extracting brain volumes from three commonly acquired three-dimensional (3D) MR images (proton density, T1-weighted, and T2-weighted) of the human head is described. The procedure is divided into four levels: preprocessing, segmentation, scalp removal, and postprocessing. A user-provided reference point is the sole operator-dependent input required. The method's parameters were first optimized, then fixed and applied to 30 repeat data sets from 15 normal older adult subjects to investigate its reproducibility. Percent differences between total brain volumes (TBVs) for the subjects' repeated data sets ranged from 0.5% to 2.2%. We conclude that the method is both robust and reproducible and has the potential for wide application.
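The abstract does not give implementation details, but the pipeline it outlines (smooth, segment, strip the scalp, clean up, then count voxels reachable from a single seed point) can be sketched as follows. This is a minimal illustration with assumed thresholds and generic `scipy.ndimage` primitives, not the authors' actual procedure.

```python
import numpy as np
from scipy import ndimage

def brain_volume(volume, seed, voxel_mm3, threshold=0.5):
    """Sketch of a seeded brain-extraction pipeline.

    volume    : 3D array of MR intensities (normalized to [0, 1])
    seed      : (z, y, x) user-provided reference point inside the brain
    voxel_mm3 : volume of one voxel in cubic millimetres
    threshold : assumed intensity cutoff separating tissue from background
    """
    # Preprocessing: mild smoothing to suppress noise.
    smoothed = ndimage.gaussian_filter(volume, sigma=1.0)

    # Segmentation: threshold into candidate tissue voxels.
    mask = smoothed > threshold

    # "Scalp removal": erode to break thin bridges, then keep only the
    # connected component containing the seed (assumed to survive erosion).
    eroded = ndimage.binary_erosion(mask, iterations=2)
    labels, _ = ndimage.label(eroded)
    brain = labels == labels[tuple(seed)]

    # Postprocessing: restore the eroded boundary and fill interior holes.
    brain = ndimage.binary_dilation(brain, iterations=2) & mask
    brain = ndimage.binary_fill_holes(brain)

    return brain.sum() * voxel_mm3  # total brain volume (TBV), mm^3
```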
Abstract:
To translate and transfer solution data between two totally different meshes (i.e. mesh 1 and mesh 2), a consistent point-searching algorithm for solution interpolation in unstructured meshes consisting of 4-node bilinear quadrilateral elements is presented in this paper. The proposed algorithm has the following significant advantages: (1) The use of a point-searching strategy allows a point in one mesh to be accurately related to an element (containing this point) in another mesh. Thus, to translate/transfer the solution of any particular point from mesh 2 to mesh 1, only one element in mesh 2 needs to be inversely mapped. This minimizes the number of elements to which the inverse mapping is applied, making the present algorithm very effective and efficient. (2) Analytical solutions for the local coordinates of any point in a four-node quadrilateral element, derived in a rigorous mathematical manner in this paper, make it possible to carry out the inverse mapping very effectively and efficiently. (3) The use of consistent interpolation makes the interpolated solution compatible with the original solution and therefore guarantees extremely high accuracy of the interpolated solution. After the mathematical formulations of the algorithm are presented, the algorithm is tested and validated on a challenging problem. The results from the test problem demonstrate the generality, accuracy, effectiveness, efficiency and robustness of the proposed consistent point-searching algorithm. Copyright (C) 1999 John Wiley & Sons, Ltd.
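The key step in advantage (2), inverting the bilinear map of a 4-node quadrilateral analytically, reduces to a quadratic equation in one local coordinate. The sketch below assumes counterclockwise node ordering with node 1 mapped to local (-1, -1); the paper's exact derivation may differ in detail.

```python
import numpy as np

def local_coords(quad, p, tol=1e-9):
    """Analytically invert the bilinear map of a 4-node quadrilateral.

    quad : (4, 2) array of node coordinates, counterclockwise,
           node 1 at local (-1, -1), node 2 at (1, -1), etc.
    p    : (2,) physical point assumed to lie inside the element
    Returns the local coordinates (xi, eta) in [-1, 1]^2.
    """
    x, y = quad[:, 0], quad[:, 1]
    # x(xi, eta) = a0 + a1*xi + a2*eta + a3*xi*eta (likewise y with b's).
    a0, a1 = (x[0]+x[1]+x[2]+x[3])/4, (-x[0]+x[1]+x[2]-x[3])/4
    a2, a3 = (-x[0]-x[1]+x[2]+x[3])/4, (x[0]-x[1]+x[2]-x[3])/4
    b0, b1 = (y[0]+y[1]+y[2]+y[3])/4, (-y[0]+y[1]+y[2]-y[3])/4
    b2, b3 = (-y[0]-y[1]+y[2]+y[3])/4, (y[0]-y[1]+y[2]-y[3])/4
    A, B = a0 - p[0], b0 - p[1]

    # Eliminating eta between the two bilinear equations yields a
    # quadratic in xi: qa*xi^2 + qb*xi + qc = 0.
    qa = a3*b1 - a1*b3
    qb = a3*B - b3*A + a2*b1 - a1*b2
    qc = a2*B - b2*A
    if abs(qa) < tol:                      # parallelogram: equation is linear
        roots = [-qc / qb]
    else:
        disc = np.sqrt(qb*qb - 4*qa*qc)
        roots = [(-qb + disc)/(2*qa), (-qb - disc)/(2*qa)]

    for xi in roots:
        denom = a2 + a3*xi
        eta = (-(A + a1*xi)/denom if abs(denom) > tol
               else -(B + b1*xi)/(b2 + b3*xi))
        if abs(xi) <= 1 + tol and abs(eta) <= 1 + tol:
            return xi, eta
    raise ValueError("point is not inside this element")
```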
Abstract:
Recent studies have demonstrated that spatial patterns of fMRI BOLD activity distribution over the brain may be used to classify different groups or mental states. These studies are based on the application of advanced pattern recognition approaches and multivariate statistical classifiers. Most published articles in this field focus on improving accuracy rates, and many approaches have been proposed to accomplish this task. Nevertheless, a point inherent to most machine learning methods (and still relatively unexplored in neuroimaging) is how the discriminative information can be used to characterize groups and their differences. In this work, we introduce Maximum Uncertainty Linear Discriminant Analysis (MLDA) and show how it can be applied to infer group patterns by discriminant hyperplane navigation. In addition, we show that it naturally defines a behavioral score, i.e., an index quantifying the distance of a subject's state from predefined groups. We validate and illustrate this approach using data from a motor block-design fMRI experiment with 35 subjects. (C) 2008 Elsevier Inc. All rights reserved.
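The behavioral score described here is essentially a signed distance to the discriminant hyperplane. A minimal stand-in is sketched below, using shrinkage-regularized LDA from scikit-learn in place of MLDA's maximum-uncertainty covariance estimator (an assumption, not the paper's exact method); the data are synthetic placeholders.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Toy stand-ins for per-subject feature vectors (e.g., voxelwise BOLD maps):
# two groups of 35 "subjects", 500 features each.
X = rng.normal(size=(70, 500))
X[35:] += 0.3                      # shift group 2 so the groups are separable
y = np.repeat([0, 1], 35)

# Shrinkage LDA: a regularized within-class covariance, in the spirit of
# (but not identical to) MLDA's maximum-uncertainty regularization.
lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, y)

# Behavioral score: signed distance of each subject's state from the
# discriminant hyperplane w.x + b = 0.
w, b = lda.coef_.ravel(), lda.intercept_[0]
score = (X @ w + b) / np.linalg.norm(w)
print(score[:5], score[35:40])     # negative ~ group 0, positive ~ group 1
```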
Abstract:
To obtain a high-quality EMG acquisition, the signal must be recorded as far as possible from muscle innervation zones and tendon zones, which are known to shift during dynamic contractions. This study describes a methodology, using commercial bipolar electrodes, to identify better electrode positions for surface EMG of lower limb muscles during dynamic contractions. Eight female volunteers participated in this study. Myoelectric signals of the vastus lateralis, gastrocnemius medialis, peroneus longus and tibialis anterior muscles were acquired during maximum isometric contractions using bipolar electrodes. The electrode positions for each muscle were first selected following SENIAM recommendations; other positions were then located along the muscle length, above and below the SENIAM site. The raw signal (density), the linear envelopes, the RMS value, the motor point site, the position of the innervation zone (IZ) and its shift during dynamic contractions were taken into account to select and compare electrode positions. For vastus lateralis and peroneus longus, the best sites were 66% and 25% of muscle length, respectively (similar to the SENIAM location). The tibialis anterior electrodes presented the best signal at 47.5% of muscle length (different from the SENIAM location). The best position for the gastrocnemius medialis electrodes was at 38% of muscle length; SENIAM does not specify a precise location for this muscle. The proposed method should be considered as another methodological step in every EMG study to guarantee the quality of the signal and of subsequent human movement interpretations. (C) 2009 Elsevier B.V. All rights reserved.
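Two of the signal features used here to rank electrode sites, the RMS value and the linear envelope, are standard EMG computations. A short sketch follows; the window length and filter cutoff are assumed common choices, since the abstract does not give the study's actual processing parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_features(emg, fs, window_s=0.1, envelope_cutoff_hz=6.0):
    """Moving RMS and linear envelope of a raw EMG signal.

    emg : 1D array of raw EMG samples
    fs  : sampling rate in Hz
    """
    # Moving RMS over a sliding window (window_s seconds, assumed value).
    n = max(1, int(window_s * fs))
    kernel = np.ones(n) / n
    rms = np.sqrt(np.convolve(emg**2, kernel, mode="same"))

    # Linear envelope: full-wave rectification followed by a low-pass
    # Butterworth filter (a 6 Hz cutoff is a common, assumed choice).
    b, a = butter(4, envelope_cutoff_hz / (fs / 2), btype="low")
    envelope = filtfilt(b, a, np.abs(emg))
    return rms, envelope
```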
Abstract:
It is difficult to precisely measure articular arc movement in newborns using a goniometer. This article proposes an objective method based on trigonometry for the evaluation of lower limb abduction. With the newborn aligned in the dorsal decubitus position, 2 points are marked at the level of the medial malleolus, one on the sagittal line and the other at the end of the abduction. Using the straight line between these 2 points and a line from the medial malleolus to the reference point at the anterior superior iliac spine or umbilical scar, an isosceles triangle is drawn, and half of the abduction angle is obtained by calculating its sine. Twenty healthy full-term newborns comprised the study cohort. Intersubject and intrasubject variability among the abduction angle values (mean [SD], 37 [4] degrees) was low. This method is advantageous because the measurement is precise and because the sine can be used without approximation.
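Reading the construction as an isosceles triangle with two equal sides of length r (reference point to malleolus) and base c (the chord between the two marked malleolus points), the half-angle satisfies sin(θ/2) = (c/2)/r, so the abduction angle is θ = 2·arcsin(c/(2r)). A one-line check under that reading, with hypothetical measurements:

```python
import math

def abduction_angle_deg(chord_mm, radius_mm):
    """Abduction angle from the isosceles-triangle construction.

    chord_mm  : distance between the two marked malleolus points (the base)
    radius_mm : distance from the reference point (ASIS or umbilical scar)
                to the medial malleolus (the two equal sides)
    """
    # sin(theta/2) = (chord/2) / radius, so theta = 2*asin(chord/(2*radius)).
    return math.degrees(2.0 * math.asin(chord_mm / (2.0 * radius_mm)))

# Example (assumed lengths): equal sides of 300 mm and a 190 mm chord give
# roughly the 37-degree mean reported in the abstract.
print(round(abduction_angle_deg(190, 300), 1))
```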
Abstract:
This work aimed at an evaluation of the classical iodine method for quantification of vitamin C (L-ascorbic acid) in fruit juices, as well as an investigation of the stability of this popular vitamin under different conditions of pH, temperature and light exposure, in addition to a proposal of a new quantification method. Our results point to the persistent reversibility of the blue color of the starch-triiodide complex at the end point of the classical iodine titration, and to the consequent overestimation of the true vitamin concentration in fruit juices. A new quantification method is proposed to overcome this problem. Surprising conclusions were obtained regarding the controversial stability of L-ascorbic acid toward atmospheric oxygen at low pH, even in fruit juice and at room temperature, showing that the major problem associated with aging of fruit juices is the proliferation of microorganisms rather than spontaneous oxidation of L-ascorbic acid.
Abstract:
We obtain the finite-temperature unconditional master equation of the density matrix for two coupled quantum dots (CQDs) when one dot is subjected to a measurement of its electron occupation number using a point contact (PC). To determine how the CQD system state depends on the actual current through the PC device, we use the so-called quantum trajectory method to derive the zero-temperature conditional master equation. We first treat the electron tunneling through the PC barrier as a classical stochastic point process (a quantum-jump model). Then we show explicitly that our results can be extended to the quantum-diffusive limit when the average electron tunneling rate is very large compared to the extra change of the tunneling rate due to the presence of the electron in the dot closer to the PC. We find that in both the quantum-jump and quantum-diffusive cases, the conditional dynamics of the CQD system can be described by stochastic Schrödinger equations for its conditioned state vector if and only if the information carried away from the CQD system by the PC reservoirs can be recovered by perfect detection of the measurements.
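As a generic illustration of the quantum-jump unraveling mentioned here (a textbook Monte Carlo wavefunction scheme, not the paper's specific CQD-PC model), a single stochastic trajectory for a two-level system with one jump operator L can be simulated by evolving under the effective non-Hermitian Hamiltonian H_eff = H - (i/2) L†L between jumps, and applying L with probability dt·⟨ψ|L†L|ψ⟩ at each step:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-level toy model (assumed parameters, not the paper's CQD-PC system).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = 0.5 * sx                                   # coherent tunneling term
L = np.sqrt(0.2) * np.array([[0, 0], [1, 0]], dtype=complex)  # jump operator
H_eff = H - 0.5j * (L.conj().T @ L)            # effective non-Hermitian H

dt, steps = 0.01, 5000
psi = np.array([1, 0], dtype=complex)          # initial conditioned state
jumps = []

for k in range(steps):
    # Jump probability in this step: dt * <psi| L^dag L |psi>.
    p_jump = dt * np.real(psi.conj() @ (L.conj().T @ L) @ psi)
    if rng.random() < p_jump:
        psi = L @ psi                          # detector registers a jump
        jumps.append(k * dt)
    else:
        psi = psi - 1j * dt * (H_eff @ psi)    # no-jump (Euler) evolution
    psi /= np.linalg.norm(psi)                 # renormalize conditioned state

print(f"{len(jumps)} jumps recorded in {steps * dt:.0f} time units")
```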
Abstract:
Hereditary nonpolyposis colorectal cancer syndrome (HNPCC) is an autosomal dominant condition accounting for 2–5% of all colorectal carcinomas as well as a small subset of endometrial, upper urinary tract and other gastrointestinal cancers. An assay to detect the underlying defect in HNPCC, inactivation of a DNA mismatch repair enzyme, would be useful in identifying HNPCC probands. Monoclonal antibodies against hMLH1 and hMSH2, two DNA mismatch repair proteins which account for most HNPCC cancers, are commercially available. This study sought to investigate the potential utility of these antibodies in determining the expression status of these proteins in paraffin-embedded formalin-fixed tissue and to identify key technical protocol components associated with successful staining. A set of 20 colorectal carcinoma cases of known hMLH1 and hMSH2 mutation and expression status underwent immunoperoxidase staining at multiple institutions, each of which used their own technical protocol. Staining for hMSH2 was successful in most laboratories while staining for hMLH1 proved problematic in multiple labs. However, a significant minority of laboratories demonstrated excellent results including high discriminatory power with both monoclonal antibodies. These laboratories appropriately identified hMLH1 or hMSH2 inactivation with high sensitivity and specificity. The key protocol point associated with successful staining was an antigen retrieval step involving heat treatment and either EDTA or citrate buffer. This study demonstrates the potential utility of immunohistochemistry in detecting HNPCC probands and identifies key technical components for successful staining.
Abstract:
A new cloud-point extraction and preconcentration method, using a cationic surfactant, Aliquat-336 (tricaprylylmethylammonium chloride), has been developed for the determination of cyanobacterial toxins, microcystins, in natural waters. Sodium sulfate was used to induce phase separation at 25 °C. The phase behavior of Aliquat-336 with respect to the concentration of Na2SO4 was studied. The cloud-point system revealed a very high phase volume ratio compared to other established systems of nonionic, anionic, and cationic surfactants. At pH 6-7, it showed an outstanding selectivity in analyte extraction for anionic species. Only MC-LR and MC-YR, which are known to be predominantly anionic, were extracted (with average recoveries of 113.9 +/- 9% and 87.1 +/- 7%, respectively). MC-RR, which is likely to be amphoteric in the above pH range, was not detectable in the extract. Coupled to HPLC/UV separation and detection, the cloud-point extraction method (with 2.5 mM Aliquat-336 and 75 mM Na2SO4 at 25 °C) offered detection limits of 150 +/- 7 and 470 +/- 72 pg/mL for MC-LR and MC-YR, respectively, in 25 mL of deionized water. Repeatability of the method was 7.6% for MC-LR and 7.3% for MC-YR. The cloud-point extraction process can be completed within 10-15 min with no cleanup steps required. Applicability of the new method to the determination of microcystins in real samples was demonstrated using natural surface waters collected from a local river and a local duck pond, spiked with realistic concentrations of microcystins. Effects of salinity and organic matter (TOC) content of the water sample on the extraction efficiency were also studied.
Abstract:
Most finite element packages use the Newmark algorithm for time integration of structural dynamics. Various algorithms have been proposed to better optimize the high-frequency dissipation of this algorithm. Hulbert and Chung proposed both implicit and explicit forms of the generalized alpha method. These algorithms optimize high-frequency dissipation effectively, and despite recent work on algorithms that possess momentum-conserving/energy-dissipative properties in a non-linear context, the generalized alpha method remains an efficient way to solve many problems, especially with adaptive timestep control. However, the implicit and explicit algorithms use incompatible parameter sets and cannot be used together in a spatial partition, whereas this can be done for the Newmark algorithm, as Hughes and Liu demonstrated, and for the HHT-alpha algorithm developed from it. The present paper shows that the explicit generalized alpha method can be rewritten so that it becomes compatible with the implicit form. All four algorithmic parameters can be matched between the explicit and implicit forms. An element interface between implicit and explicit partitions can then be used, analogous to that devised by Hughes and Liu to extend the Newmark method. The stability of the explicit/implicit algorithm is examined in a linear context and found to exceed that of the explicit partition. The element partition is significantly less dissipative of intermediate frequencies than one using the HHT-alpha method. The explicit algorithm can also be rewritten so that the discrete equation of motion evaluates forces from displacements and velocities found at the predicted mid-point of a cycle. Copyright (C) 2003 John Wiley & Sons, Ltd.
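For reference, a minimal implicit generalized-alpha step for the linear system M a + C v + K d = f(t) is sketched below, with the standard Chung-Hulbert parameterization by the spectral radius ρ∞. This is the textbook implicit scheme, not this paper's compatible explicit reformulation.

```python
import numpy as np

def generalized_alpha_step(M, C, K, f, t, h, d, v, a, rho_inf=0.8):
    """One implicit generalized-alpha step for M a + C v + K d = f(t).

    M, C, K : (n, n) mass, damping and stiffness matrices
    f       : callable returning the (n,) load vector at a given time
    t, h    : current time and step size; d, v, a : current state vectors
    """
    # Chung-Hulbert parameters from the high-frequency spectral radius.
    am = (2.0 * rho_inf - 1.0) / (rho_inf + 1.0)
    af = rho_inf / (rho_inf + 1.0)
    beta = 0.25 * (1.0 - am + af) ** 2
    gamma = 0.5 - am + af

    # Newmark predictors (state at t_{n+1} if a_{n+1} were zero).
    d_pred = d + h * v + h * h * (0.5 - beta) * a
    v_pred = v + h * (1.0 - gamma) * a

    # Balance enforced at the intermediate times t_{n+1-af} and t_{n+1-am}.
    lhs = (1.0 - am) * M + (1.0 - af) * (gamma * h * C + beta * h * h * K)
    rhs = (f(t + (1.0 - af) * h)
           - am * (M @ a)
           - C @ ((1.0 - af) * v_pred + af * v)
           - K @ ((1.0 - af) * d_pred + af * d))
    a_new = np.linalg.solve(lhs, rhs)

    return (d_pred + beta * h * h * a_new,   # d_{n+1}
            v_pred + gamma * h * a_new,      # v_{n+1}
            a_new)                           # a_{n+1}
```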
Abstract:
In the Sparse Point Representation (SPR) method, the principle is to retain the function data indicated by significant interpolatory wavelet coefficients, which are defined as interpolation errors by means of an interpolating subdivision scheme. Typically, an SPR grid is coarse in smooth regions and refined close to irregularities. Furthermore, the computation of partial derivatives of a function from its SPR content is performed in two steps. The first is a refinement procedure that extends the SPR by including new interpolated point values in a security zone. Then, for points in the refined grid, the derivatives are approximated by uniform finite differences, using a step size proportional to each point's local scale. If required neighboring stencils are not present in the grid, the corresponding missing point values are approximated from coarser scales using the interpolating subdivision scheme. Using the cubic interpolating subdivision scheme, we demonstrate that such adaptive finite differences can be formulated in terms of a collocation scheme based on the wavelet expansion associated with the SPR. For this purpose, we prove some results concerning the local behavior of such wavelet reconstruction operators, which hold for SPR grids having appropriate structures. This implies that the adaptive finite difference scheme and the one using the step size of the finest level produce the same result at SPR grid points. Consequently, in addition to the refinement strategy, our analysis indicates that some care must be taken concerning the grid structure in order to keep the truncation error under a certain accuracy limit. Illustrative results are presented for numerical solutions of 2D Maxwell's equations.
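The interpolatory wavelet coefficients described here are prediction errors of an interpolating subdivision scheme. With the cubic (four-point Deslauriers-Dubuc) scheme, the coefficient at an odd point of the finer level is the sampled value minus the prediction (-f[k-1] + 9f[k] + 9f[k+1] - f[k+2])/16 from the even neighbors. A sketch of one level of an SPR built by thresholding these coefficients, assuming periodic indexing for brevity:

```python
import numpy as np

def spr_level(f_fine, eps=1e-3):
    """One level of a Sparse Point Representation via cubic interpolatory
    wavelets (four-point Deslauriers-Dubuc prediction, periodic indexing).

    f_fine : samples on the fine dyadic grid (even length)
    eps    : threshold below which a detail coefficient is insignificant
    Returns coarse samples, detail coefficients, and the significance mask.
    """
    even = f_fine[0::2]                  # coarse-grid samples
    odd = f_fine[1::2]                   # values to be predicted
    # Cubic interpolation of each odd point from its 4 even neighbors.
    pred = (-np.roll(even, 1) + 9 * even
            + 9 * np.roll(even, -1) - np.roll(even, -2)) / 16.0
    detail = odd - pred                  # interpolatory wavelet coefficients
    keep = np.abs(detail) >= eps         # significant points to retain
    return even, detail, keep

# Example: details vanish where f is smooth (here, piecewise linear) and
# concentrate near the irregularity at x = 0.5.
x = np.linspace(0.0, 1.0, 256, endpoint=False)
f = np.abs(x - 0.5)
even, detail, keep = spr_level(f)
print(f"kept {keep.sum()} of {len(keep)} detail points")
```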
Abstract:
This paper presents a contrastive approach to three different ways of building concepts, after showing the similar syntactic possibilities that coexist in terms. From the semantic point of view, however, we can see that each language family has a different distribution of meaning. The most important point we try to show is that the differences found in the psychological process of communicating concepts should guide the translator and the terminologist in target text production and in the terminology planning process. Differences between languages in the information transmission process are due to the different roles played by the different types of knowledge. We distinguish here analytic-descriptive knowledge and analogical knowledge, among others. We also state that none of them is the best when determining the correctness of a term; rather, there have to be adequacy criteria in the selection process. This success in concept building or term building is important when looking at the linguistic map of the information society.
Abstract:
3D laser scanning is becoming a standard technology for generating building models of a facility's as-is condition. Since most structures are built from planar surfaces, recognizing those surfaces paves the way for automated generation of building models. This paper introduces a new logarithmically proportional objective function that can be used in both heuristic and metaheuristic (MH) algorithms to discover planar surfaces in a point cloud without exploiting any prior knowledge about those surfaces. It can also adapt itself to the structural density of a scanned construction. In this paper, a metaheuristic method, the genetic algorithm (GA), is used to test the introduced objective function on a synthetic point cloud. The results show that the proposed method is capable of finding all plane configurations of planar surfaces (with a wide variety of sizes) in the point cloud with a minor distance to the actual configurations. © 2014 IEEE.
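The abstract does not spell out the objective function, so the sketch below only illustrates the general setup: a GA individual encodes a candidate plane (unit normal n and offset d), and its fitness is scored from the point-to-plane distances |n·p - d|. The logarithmic weighting used here is an assumed stand-in for the paper's logarithmically proportional objective.

```python
import numpy as np

def plane_fitness(points, normal, offset, scale=0.05):
    """Fitness of a candidate plane n.p = d against a point cloud.

    points : (N, 3) array of scanned points
    normal : (3,) plane normal (normalized internally)
    offset : scalar d in the plane equation n.p = d
    scale  : assumed distance scale controlling the scoring falloff
    """
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    dist = np.abs(points @ n - offset)        # point-to-plane distances
    # Assumed log-shaped reward: near-plane points contribute most, and the
    # contribution decays smoothly instead of using a hard inlier cutoff.
    return np.sum(np.log1p(scale / (scale + dist)))

# Example: points sampled near the plane z = 1 score far higher than an
# arbitrary plane against the same cloud.
rng = np.random.default_rng(2)
plane_pts = np.column_stack([rng.uniform(0, 5, 500),
                             rng.uniform(0, 5, 500),
                             1.0 + rng.normal(0, 0.01, 500)])
noise_pts = rng.uniform(0, 5, (500, 3))
cloud = np.vstack([plane_pts, noise_pts])
print(plane_fitness(cloud, [0, 0, 1], 1.0))   # high: matches the plane
print(plane_fitness(cloud, [0, 1, 0], 2.5))   # low: arbitrary plane
```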
Abstract:
Purpose - The purpose of this paper is to discuss the linear solution of equality constrained problems by using the frontal solution method without explicit assembling. Design/methodology/approach - A rewritten frontal solution method with a priori pivot and front sequences; OpenMP parallelization, nearly linear (in elimination and substitution) up to 40 threads; constraints enforced at the local assembling stage. Findings - When compared with both standard sparse solvers and classical frontal implementations, memory requirements and code size are significantly reduced. Research limitations/implications - Large, non-linear problems with constraints typically make use of the Newton method with Lagrange multipliers. In the context of the solution of problems with a large number of constraints, the matrix transformation methods (MTM) are often more cost-effective. The paper presents a complete solution, with topological ordering, for this problem. Practical implications - A complete software package in Fortran 2003 is described. Examples of clique-based problems are shown with large systems solved in core. Social implications - More realistic non-linear problems can be solved with this frontal code at the core of the Newton method. Originality/value - Use of topological ordering of constraints; a priori pivot and front sequences; no need for symbolic assembling; constraints treated at the core of the frontal solver; use of OpenMP in the main frontal loop, now quantified; availability of software.
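The equality constrained linear problems mentioned here are commonly posed in saddle-point (Lagrange multiplier) form. A dense baseline sketch of that formulation follows; it solves the KKT system [[K, Aᵀ], [A, 0]] [u; λ] = [f; b] directly, which is exactly what frontal or matrix transformation solvers accelerate without explicit assembly.

```python
import numpy as np

def solve_equality_constrained(K, f, A, b):
    """Solve K u = f subject to A u = b via Lagrange multipliers.

    Dense baseline for illustration only; a frontal solver would factorize
    the same saddle-point system without assembling it explicitly.
    """
    n, m = K.shape[0], A.shape[0]
    kkt = np.block([[K, A.T],
                    [A, np.zeros((m, m))]])
    rhs = np.concatenate([f, b])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n], sol[n:]          # primal solution u, multipliers lambda

# Example: a 3-DOF spring chain with the constraint u0 + u2 = 1.
K = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
f = np.array([0.0, 1.0, 0.0])
A = np.array([[1.0, 0.0, 1.0]])
b = np.array([1.0])
u, lam = solve_equality_constrained(K, f, A, b)
print(u, A @ u)                      # A @ u == [1.0]: constraint satisfied
```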
Abstract:
Hyperspectral imaging has become one of the main topics in remote sensing applications. Hyperspectral images comprise hundreds of spectral bands at different (almost contiguous) wavelength channels over the same area, generating large data volumes of several GB per flight. This high spectral resolution can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems in hyperspectral analysis is the presence of mixed pixels, which arise when the spatial resolution of the sensor is not able to separate spectrally distinct materials. Spectral unmixing is therefore one of the most important tasks for hyperspectral data exploitation. However, unmixing algorithms can be computationally very expensive and power-consuming, which compromises their use in applications under on-board constraints. In recent years, graphics processing units (GPUs) have evolved into highly parallel and programmable systems. Several hyperspectral imaging algorithms have been shown to benefit from this hardware, taking advantage of the extremely high floating-point processing performance, compact size, huge memory bandwidth, and relatively low cost of these units, which make them appealing for on-board data processing. In this paper, we propose a parallel implementation of an augmented Lagrangian based method for unsupervised hyperspectral linear unmixing on GPUs using CUDA. The method, called simplex identification via split augmented Lagrangian (SISAL), identifies the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The efficient implementation of the SISAL method presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses.
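SISAL itself estimates the endmembers; once an endmember matrix E is available, the linear mixing model y ≈ E a (with nonnegative abundances a) can be inverted per pixel. A small CPU-side sketch of that unmixing step is shown below, using nonnegative least squares as a stand-in for the paper's GPU pipeline; the data are synthetic.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixels, endmembers):
    """Per-pixel abundances under the linear mixing model y ~ E @ a.

    pixels     : (num_pixels, num_bands) hyperspectral data
    endmembers : (num_bands, num_endmembers) matrix E (e.g., from SISAL)
    Returns (num_pixels, num_endmembers) nonnegative abundances.
    """
    return np.array([nnls(endmembers, y)[0] for y in pixels])

# Example with synthetic data: 3 endmembers over 50 bands.
rng = np.random.default_rng(3)
E = rng.uniform(0.0, 1.0, (50, 3))
true_a = rng.dirichlet(np.ones(3), size=100)          # mixed pixels
Y = true_a @ E.T + rng.normal(0.0, 0.001, (100, 50))  # noisy observations
est_a = unmix(Y, E)
print(np.abs(est_a - true_a).max())                   # small recovery error
```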