905 results for Negative dimensional integration method (NDIM)
Abstract:
Integration of inputs by cortical neurons provides the basis for the complex information processing performed in the cerebral cortex. Here, we propose a new analytic framework for understanding integration within cortical neuronal receptive fields. Based on the synaptic organization of cortex, we argue that neuronal integration is a systems-level process better studied in terms of local cortical circuitry than at the level of single neurons, and we present a method for constructing self-contained modules which capture (nonlinear) local circuit interactions. In this framework, receptive field elements naturally have a dual (rather than the traditional unitary) influence, since they drive both excitatory and inhibitory cortical neurons. This vector-based analysis, in contrast to scalar approaches, greatly simplifies integration by permitting linear summation of inputs from both "classical" and "extraclassical" receptive field regions. We illustrate this by explaining two complex visual cortical phenomena, which are incompatible with scalar notions of neuronal integration.
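The dual-influence idea can be sketched in a few lines of code (a toy illustration; the two-component vectors, weights and rectifying nonlinearity below are assumptions made for exposition, not the paper's model):

```python
import numpy as np

# Toy sketch of vector-based integration: each receptive-field element
# contributes a 2-component vector (excitatory drive, inhibitory drive)
# to the local circuit, rather than a single scalar influence.
def circuit_response(elements):
    """elements: (n, 2) array; column 0 = excitatory drive, column 1 =
    inhibitory drive contributed by each receptive-field element."""
    drive = elements.sum(axis=0)        # linear summation of input vectors
    net = drive[0] - drive[1]           # local circuit combines E and I
    return max(net, 0.0)                # illustrative rectifying nonlinearity

classical = np.array([[1.2, 0.3], [0.8, 0.2]])   # "classical" RF inputs
extraclassical = np.array([[0.1, 0.6]])          # "extraclassical" input
print(circuit_response(np.vstack([classical, extraclassical])))
```

Because the inputs are vectors, classical and extraclassical contributions sum linearly before the circuit nonlinearity is applied, which is the simplification the abstract highlights.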
Abstract:
In this paper, we develop a novel index structure to support efficient approximate k-nearest neighbor (KNN) queries in high-dimensional databases. In high-dimensional spaces, the computational cost of the distance (e.g., Euclidean distance) between two points contributes a dominant portion of the overall query response time for in-memory processing. To reduce the distance computation, we first propose a structure (BID) using BIt-Difference to answer approximate KNN queries. BID employs one bit to represent each feature of a point, and the number of bit differences is used to prune distant points. To accommodate real datasets, which are typically skewed, we enhance the BID mechanism with clustering, a cluster-adapted bitcoder and dimensional weights, named BID⁺. Extensive experiments show that our proposed method yields significant performance advantages over existing index structures on both real-life and synthetic high-dimensional datasets.
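The bit-difference pruning idea can be sketched as follows (a minimal illustration; the one-bit-per-dimension encoding via per-dimension means and the pruning threshold are assumptions, and BID⁺'s clustering, cluster-adapted bitcoder and dimensional weights are not shown):

```python
import numpy as np

def signatures(points, thresholds):
    """One bit per dimension: 1 if the coordinate exceeds that dimension's
    threshold (here the dataset mean), else 0."""
    return (points > thresholds).astype(np.uint8)

def approx_knn(query, points, k, max_bit_diff):
    thresholds = points.mean(axis=0)
    sigs = signatures(points, thresholds)
    qsig = signatures(query[None, :], thresholds)[0]
    # Pruning: drop points whose signatures differ from the query's in
    # more than max_bit_diff positions, before any exact distance is computed.
    bit_diff = (sigs != qsig).sum(axis=1)
    candidates = np.where(bit_diff <= max_bit_diff)[0]
    # Exact Euclidean distances only for the surviving candidates.
    d = np.linalg.norm(points[candidates] - query, axis=1)
    return candidates[np.argsort(d)[:k]]

rng = np.random.default_rng(0)
data = rng.normal(size=(10_000, 64))
print(approx_knn(data[0], data, k=5, max_bit_diff=20))
```

Comparing bit signatures costs a small fraction of a full Euclidean distance computation, which is where the claimed reduction in query response time comes from.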
Abstract:
Objective: to characterize palmar and plantar pallor in a Colombian rural child population as a diagnostic method for anemia, and to establish the correlation between the finding of palmar-plantar pallor and hematocrit values. Methodology: a cross-sectional study was used to evaluate 169 boys and girls, between 2 months and 12 years old, from the rural area of San Vicente del Caguán, who entered a health campaign. Parents agreed to their children's participation in the study by signing an informed consent form. Children with acute or chronic pathologies were excluded. The presence of palmar-plantar pallor was determined by observers trained in the Integrated Management of Childhood Illness (IMCI) strategy. Hematocrit was measured in all children, and a peripheral blood smear was performed. Inter-rater agreement (Kappa index) was determined through a pilot test, and a validation (sensitivity, specificity) was performed using hematocrit as the reference standard. Results: 93 of the participants were male and 77 were female. 45% of them had palmar pallor. The hematocrit showed anemia in 34.1% of the children. The validation analysis yielded a sensitivity of 67.2%, a specificity of 66.6%, a positive predictive value of 51.3% and a negative predictive value of 79.5%. Hypochromia and eosinophilia were found in most peripheral blood smears of children with anemia. Conclusions: although this tool has low sensitivity and specificity for mild/moderate anemia, it is useful for excluding anemia in children without palmar pallor.
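As a reminder of how the reported validity measures relate to one another, here is a short sketch built on a 2×2 table whose counts are reconstructed from the reported percentages (so they are approximate, not the study's raw data):

```python
# 2x2 diagnostic table (counts reconstructed from the reported percentages):
# rows = palmar-plantar pallor finding, columns = anemia by hematocrit.
tp, fp = 39, 37   # pallor present: anemic / not anemic
fn, tn = 19, 74   # pallor absent:  anemic / not anemic

sensitivity = tp / (tp + fn)   # P(pallor | anemia)        ~ 67.2%
specificity = tn / (tn + fp)   # P(no pallor | no anemia)  ~ 66.7%
ppv = tp / (tp + fp)           # P(anemia | pallor)        ~ 51.3%
npv = tn / (tn + fn)           # P(no anemia | no pallor)  ~ 79.6%
print(f"Se={sensitivity:.1%}  Sp={specificity:.1%}  "
      f"PPV={ppv:.1%}  NPV={npv:.1%}")
```

The relatively high negative predictive value is what supports the conclusion: a child without palmar pallor is unlikely to be anemic, even though the sign itself is insensitive.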
Abstract:
Introduction: High-quality emergency care is only possible if physicians have received high-quality teaching. The PHEEM (Postgraduate Hospital Educational Environment Measure) scale is a valid and reliable instrument, used internationally to measure the educational environment in postgraduate medical training. Materials and methods: A cross-sectional study used the Spanish version of the PHEEM scale to assess the educational environment of emergency medicine programs. Cronbach's alpha coefficient was calculated to determine internal consistency. Descriptive statistics were applied globally and by category and item of the PHEEM scale, and results were compared by sex, year of residency and program. Results: 94 (94%) residents completed the questionnaire. The mean PHEEM score was 93.91 ± 23.71 (58.1% of the maximum score), which is considered an educational environment more positive than negative, but with room for improvement. There was a statistically significant difference in the perception of the educational environment between residency programs (p = 0.01). The instrument is highly reliable (Cronbach's alpha = 0.952). The most frequent barrier to teaching was overcrowding, and assessment was perceived as serving the purpose of meeting standards. Discussion: The results of this study provide evidence on the internal validity of the PHEEM scale in the Colombian context. This study showed how measuring the educational environment in a medical-surgical specialty with a quantitative tool can provide information on the strengths and weaknesses of the programs.
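Cronbach's alpha, the reliability statistic reported above, is computed from a respondents-by-items score matrix; a minimal sketch (the responses below are simulated placeholders, not the study's data):

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: (n_respondents, n_items) matrix.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
# Placeholder: 94 respondents x 40 items on a 0-4 scale, matching the
# PHEEM format; these simulated responses will not reproduce alpha = 0.952.
responses = rng.integers(0, 5, size=(94, 40)).astype(float)
print(f"alpha = {cronbach_alpha(responses):.3f}")
```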
Abstract:
In image processing, segmentation algorithms constitute one of the main focuses of research. In this paper, new image segmentation algorithms based on a hard version of the information bottleneck method are presented. The objective of this method is to extract a compact representation of a variable, considered the input, with minimal loss of mutual information with respect to another variable, considered the output. First, we introduce a split-and-merge algorithm based on the definition of an information channel between a set of regions (input) of the image and the intensity histogram bins (output). From this channel, the maximization of the mutual information gain is used to optimize the image partitioning. Then, the merging process of the regions obtained in the previous phase is carried out by minimizing the loss of mutual information. From the inversion of the above channel, we also present a new histogram clustering algorithm based on the minimization of the mutual information loss, where now the input variable represents the histogram bins and the output is given by the set of regions obtained from the above split-and-merge algorithm. Finally, we introduce two new clustering algorithms which show how the information bottleneck method can be applied to the registration channel obtained when two multimodal images are correctly aligned. Different experiments on 2-D and 3-D images show the behavior of the proposed algorithms.
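The information channel at the heart of the split-and-merge step can be made concrete (a toy construction; the image, binning and two-region partition are illustrative, and the paper's split criterion and data structures are richer):

```python
import numpy as np

def mutual_information(joint):
    """I(R;B) from a joint probability table p(region, intensity bin)."""
    pr = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pr @ pb)[nz])).sum())

def channel(image, labels, n_bins=16):
    """Joint distribution between a region partition (input) and the
    intensity histogram bins (output)."""
    bins = np.minimum((image * n_bins).astype(int), n_bins - 1)
    joint = np.zeros((labels.max() + 1, n_bins))
    np.add.at(joint, (labels.ravel(), bins.ravel()), 1.0)
    return joint / joint.sum()

rng = np.random.default_rng(2)
img = rng.random((64, 64))
halves = (np.arange(64)[None, :] >= 32) * np.ones((64, 1), int)  # 2 regions
print(mutual_information(channel(img, halves)))
```

Candidate splits can then be ranked by the increase in I(R;B) they produce, and merges by the mutual information they lose, which are the quantities the algorithms above maximize and minimize, respectively.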
Abstract:
The occurrence of negative values for Fukui functions was studied through the electronegativity equalization method (EEM). Using algebraic relations between Fukui functions and various other conceptual DFT quantities on the one hand, and the hardness matrix on the other, expressions were obtained for the Fukui functions of several archetypal small molecules. Based on EEM calculations for large molecular sets, no negative Fukui functions were found.
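For orientation, the standard definitions behind such condensed Fukui function studies are (textbook forms, not equations quoted from the paper):

\[
f(\mathbf{r}) = \left(\frac{\partial \rho(\mathbf{r})}{\partial N}\right)_{v(\mathbf{r})},
\qquad
f_A^{+} = p_A(N+1) - p_A(N),
\qquad
f_A^{-} = p_A(N) - p_A(N-1),
\]

where \(p_A\) is the electron population of atom \(A\); a negative condensed \(f_A\) would mean that atom \(A\) loses electron density while the total electron number \(N\) increases (or vice versa).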
Abstract:
A select-divide-and-conquer variational method to approximate configuration interaction (CI) is presented. Given an orthonormal set made up of occupied orbitals (Hartree-Fock or similar) and suitable correlation orbitals (natural or localized orbitals), a large N-electron target space S is split into subspaces S0, S1, S2, ..., SR. S0, of dimension d0, contains all configurations K with attributes (energy contributions, etc.) above thresholds T0 = {T0^egy, T0^etc.}; the CI coefficients in S0 remain free to vary throughout. S1 accommodates configurations K with attributes above T1 ≤ T0. An eigenproblem of dimension d0 + d1 for S0 + S1 is solved first, after which the last d1 rows and columns are contracted into a single row and column, thus freezing the last d1 CI coefficients hereinafter. The process is repeated with successive Sj (j ≥ 2) chosen so that the corresponding CI matrices fit in random access memory (RAM). Davidson's eigensolver is used R times. The final energy eigenvalue (lowest or excited) always lies above the corresponding exact eigenvalue in S. Threshold values {Tj; j = 0, 1, 2, ..., R} regulate accuracy; for large-dimensional S, high accuracy requires S0 + S1 to be solved outside RAM. From there on, however, usually only a few Davidson iterations in RAM are needed for each step, so that Hamiltonian matrix-element evaluation becomes rate determining. One microhartree accuracy is achieved for an eigenproblem of order 24 × 10^6, involving 1.2 × 10^12 nonzero matrix elements and 8.4 × 10^9 Slater determinants.
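The contraction step can be illustrated with a dense toy problem (np.linalg.eigh stands in for Davidson's sparse eigensolver, and all sizes are arbitrary): after solving the S0 + S1 eigenproblem, the d1 trailing coefficients are folded into one normalised direction, and the contracted matrix still reproduces the S0 + S1 eigenvalue exactly; accuracy is only given up later, when further subspaces are solved against the now-frozen direction.

```python
import numpy as np

def contract(H, d_keep, frozen_coeffs):
    """Fold the trailing block of H into a single row/column built from
    the (normalised) frozen CI coefficients."""
    v = frozen_coeffs / np.linalg.norm(frozen_coeffs)
    A = H[:d_keep, :d_keep]          # free-free block (S0)
    B = H[:d_keep, d_keep:]          # free-frozen coupling
    C = H[d_keep:, d_keep:]          # frozen-frozen block (S1)
    top = np.hstack([A, (B @ v)[:, None]])
    bottom = np.hstack([(B @ v)[None, :], [[v @ C @ v]]])
    return np.vstack([top, bottom])

rng = np.random.default_rng(3)
d0, d1 = 10, 15
M = rng.normal(size=(d0 + d1, d0 + d1))
H01 = (M + M.T) / 2                  # toy symmetric CI matrix over S0 + S1
w, V = np.linalg.eigh(H01)           # stand-in for Davidson's eigensolver
Hc = contract(H01, d0, V[d0:, 0])    # freeze the last d1 coefficients
print(np.linalg.eigvalsh(Hc)[0], "==", w[0])  # contraction loses nothing here
```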
Abstract:
The total energy of a molecule, expressed in terms of 'fuzzy atoms' as a sum of one- and two-atomic energy components, is described. The division of three-dimensional physical space into atomic regions exhibits a continuous transition from one region to another. With proper definitions, the energy components are on a chemical energy scale. Becke's integration scheme and weight function determine the realization of the method, which permits effective numerical integration.
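In compact form, the decomposition and the weight functions it relies on read (standard fuzzy-atom relations, recalled here for orientation):

\[
E = \sum_{A} E_{A} + \sum_{A<B} E_{AB},
\qquad
w_{A}(\mathbf{r}) \ge 0,
\qquad
\sum_{A} w_{A}(\mathbf{r}) = 1 \quad \forall\, \mathbf{r},
\]

so every point of space is shared continuously among the atoms, and each integral over all space splits into atomic pieces, e.g. \(\int \rho(\mathbf{r})\,d\mathbf{r} = \sum_{A} \int w_{A}(\mathbf{r})\,\rho(\mathbf{r})\,d\mathbf{r}\), with Becke's scheme supplying the particular \(w_A\).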
Abstract:
The human visual ability to perceive depth looks like a puzzle. We perceive three-dimensional spatial information quickly and efficiently by using the binocular stereopsis of our eyes and, what is more important, the knowledge of the most common objects that we acquire through living. Nowadays, modelling the behaviour of our brain remains out of reach, which is why the huge problem of 3D perception and, further, interpretation is split into a sequence of easier problems. A lot of research in robot vision is devoted to obtaining 3D information about the surrounding scene. Most of this research is based on modelling human stereopsis by using two cameras as if they were two eyes. This method is known as stereo vision; it has been widely studied in the past, is being studied at present, and a lot of work will surely be done on it in the future. This fact allows us to affirm that it is one of the most interesting topics in computer vision. The stereo vision principle is based on obtaining the three-dimensional position of an object point from the positions of its projections in both camera image planes. However, before inferring 3D information, the mathematical models of both cameras have to be known. This step is known as camera calibration and is described in detail in the thesis. Perhaps the most important problem in stereo vision is determining the pair of homologous points in the two images, known as the correspondence problem; it is also one of the most difficult problems to solve and is currently investigated by many researchers. Epipolar geometry allows us to reduce the correspondence problem, and an approach to it is described in the thesis. Nevertheless, it does not solve the problem completely, as many considerations have to be taken into account; for example, there are points without correspondence due to surface occlusion or simply due to projection outside the camera's field of view. The interest of the thesis is focused on structured light, which is considered one of the most frequently used techniques for reducing the problems related to stereo vision. Structured light is based on the relationship between a projected light pattern, its projection, and an image sensor. The deformation between the pattern projected onto the scene and the one captured by the camera permits us to obtain three-dimensional information about the illuminated scene. This technique has been widely used in applications such as 3D object reconstruction, robot navigation, quality control, and so on. Although the projection of regular patterns solves the problem of points without a match, it does not solve the problem of multiple matches, which forces us to use computationally hard algorithms to search for the correct matches. In recent years, another structured light technique has grown in importance. This technique is based on codifying the light projected onto the scene so that it can be used to obtain a unique match. Each token of light is imaged by the camera, and to solve the correspondence problem we have to read the label (decode the pattern). The advantages and disadvantages of stereo vision versus structured light, and a survey of coded structured light, are presented and discussed. The work carried out in the framework of this thesis has led to a new coded structured light pattern which solves the correspondence problem uniquely and robustly.
Uniquely, because each token of light is coded by a different word, which removes the problem of multiple matches. Robustly, because the pattern is coded using the position of each token of light with respect to both coordinate axes. Algorithms and experimental results are included in the thesis. The reader can see examples of 3D measurement of static objects, and of the more complicated measurement of moving objects; the technique can be used in both cases, as the pattern is coded in a single projection shot. It can therefore be used in several robot vision applications. Our interest is focused on the mathematical study of the camera and pattern projector models. We are also interested in how these models can be obtained by calibration, and how they can be used to obtain three-dimensional information from two corresponding points. Furthermore, we have studied structured light and coded structured light, and we have presented a new coded structured light pattern. However, in this thesis we start from the assumption that the corresponding points can be well segmented from the captured image. Computer vision constitutes a huge problem, and a lot of work is being done at all levels of human vision modelling, starting from a) image acquisition; b) image enhancement, filtering and processing; and c) image segmentation, which involves thresholding, thinning, contour detection, texture and colour analysis, and so on. The interest of this thesis starts at the next step, usually known as depth perception or 3D measurement.
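The stereo principle described above, recovering a 3D point from its two corresponding projections once both cameras are calibrated, can be sketched with the standard linear (DLT) triangulation (the camera matrices below form an illustrative rectified rig, not the thesis's calibration):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: find X with x1 ~ P1 @ X and x2 ~ P2 @ X.
    P1, P2: 3x4 projection matrices; x1, x2: pixel coordinates (u, v)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]         # null-space direction of A
    return X[:3] / X[3]                 # dehomogenise

# Illustrative calibrated pair: shared intrinsics K, second camera shifted
# 0.1 units along the x axis (a simple rectified stereo rig).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 2.0, 1.0])
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]   # projections of the same point
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))      # ~ [0.2, -0.1, 2.0]
```

Everything upstream of this step, calibrating P1 and P2 and, above all, finding the correspondence (x1, x2), is exactly what the thesis's coded structured light pattern is designed to make unique and robust.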
Abstract:
An instrument is described which carries three orthogonal geomagnetic field sensors on a standard meteorological balloon package, to sense rapid motion and position changes during ascent through the atmosphere. Because of the finite data bandwidth available over the UHF radio link, a burst sampling strategy is adopted. Bursts of 9 s of measurements at 3.6 Hz are interleaved with periods of slow data telemetry lasting 25 s. Calculation of the variability in each channel is used to determine position changes, a method robust to periods of poor radio signals. During three balloon ascents, variability was found repeatedly at similar altitudes, simultaneously in each of the three orthogonal sensors carried. This variability is attributed to atmospheric motions. It is found that the vertical sensor is least prone to stray motions, and that the use of two horizontal sensors provides no additional information over a single horizontal sensor.
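A minimal sketch of the burst variability statistic (the burst geometry is taken from the abstract; the data are placeholders): because each 9 s burst is evaluated on its own, a dropout between bursts spoils at most that interval, which is why the method tolerates periods of poor radio signal.

```python
import numpy as np

BURST_RATE_HZ, BURST_LEN_S = 3.6, 9.0
SAMPLES_PER_BURST = int(BURST_RATE_HZ * BURST_LEN_S)   # ~32 samples per burst

def burst_variability(channel):
    """Standard deviation within each burst of one magnetometer channel;
    complete bursts only, each evaluated independently."""
    n = len(channel) // SAMPLES_PER_BURST * SAMPLES_PER_BURST
    bursts = channel[:n].reshape(-1, SAMPLES_PER_BURST)
    return bursts.std(axis=1, ddof=1)

rng = np.random.default_rng(4)
quiet = rng.normal(0.0, 0.02, 5 * SAMPLES_PER_BURST)   # placeholder data
moving = rng.normal(0.0, 0.50, 2 * SAMPLES_PER_BURST)  # simulated motion
print(burst_variability(np.concatenate([quiet, moving])))
```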
Abstract:
Using a novel numerical method at unprecedented resolution, we demonstrate that structures of small to intermediate scale in rotating, stratified flows are intrinsically three-dimensional. Such flows are characterized by vortices (spinning volumes of fluid), regions of large vorticity gradients, and filamentary structures at all scales. It is found that such structures have predominantly three-dimensional dynamics below a horizontal scale L < L_R, where L_R is the so-called Rossby radius of deformation, equal to the characteristic vertical scale of the fluid H divided by the ratio of the rotational and buoyancy frequencies f/N. The breakdown of two-dimensional dynamics at these scales is attributed to the so-called "tall-column instability" [D. G. Dritschel and M. de la Torre Juárez, J. Fluid Mech. 328, 129 (1996)], which is active on columnar vortices that are tall after scaling by f/N, or, equivalently, that are narrow compared with L_R. Moreover, this instability eventually leads to a simple relationship between typical vertical and horizontal scales: for each vertical wave number (apart from the vertically averaged, barotropic component of the flow) the average horizontal wave number is equal to f/N times the vertical wave number. The practical implication is that three-dimensional modeling is essential to capture the behavior of rotating, stratified fluids. Two-dimensional models are not valid for scales below L_R. ©1999 American Institute of Physics.
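Written out, the two scale relations stated in the abstract are:

\[
L_R = \frac{H}{f/N} = \frac{N H}{f},
\qquad
\bar{k}_h = \frac{f}{N}\, k_z \quad \text{for each vertical wave number } k_z
\]

(the barotropic, vertically averaged component excepted), so a columnar vortex that is tall after the \(f/N\) rescaling is, equivalently, narrow compared with \(L_R\), and it is these columns that the tall-column instability renders three-dimensional.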
Abstract:
We consider the problem of determining the pressure and velocity fields for a weakly compressible fluid flowing in a two-dimensional reservoir in an inhomogeneous, anisotropic porous medium, with vertical side walls and variable upper and lower boundaries, in the presence of vertical wells injecting or extracting fluid. Numerical solution of this problem may be expensive, particularly when the depth scale of the layer h is small compared to the horizontal length scale l, a situation which occurs frequently in applications to oil reservoir recovery. Under the assumption that ε = h/l ≪ 1, we show that the pressure field varies only in the horizontal direction away from the wells (the outer region). We construct two-term asymptotic expansions in ε in both the inner (near the wells) and outer regions and use the asymptotic matching principle to derive analytical expressions for all significant process quantities. This approach, via the method of matched asymptotic expansions, takes advantage of the small aspect ratio of the reservoir, ε, at precisely the stage where full numerical computations become stiff, and it also reveals the detailed structure of the dynamics of the flow, both in the neighborhood of wells and away from them.
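Schematically, the construction follows the usual matched-asymptotics pattern (a generic two-term form; the concrete expressions in the paper depend on the permeability field and the well configuration):

\[
p_{\text{outer}} \sim p_0 + \varepsilon\, p_1,
\qquad
p_{\text{inner}} \sim P_0 + \varepsilon\, P_1,
\qquad
\varepsilon = \frac{h}{l} \ll 1,
\]

with the matching principle requiring that the inner expansion rewritten in outer variables agree, order by order in \(\varepsilon\), with the outer expansion rewritten in inner variables; this is what ties the near-well solutions to the horizontally varying outer pressure field.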
Abstract:
We use the point-source method (PSM) to reconstruct a scattered field from its associated far-field pattern. The reconstruction scheme is described and numerical results are presented for three-dimensional acoustic and electromagnetic scattering problems. We give new proofs of the algorithms, based on the Green and Stratton-Chu formulae, which are more general than the former proofs based on the reciprocity relation. This allows us to handle the case of limited-aperture data and arbitrary incident fields. For both 3D acoustics and electromagnetics, numerical reconstructions of the field for different settings and with noisy data are shown. For shape reconstruction in acoustics, we develop an appropriate strategy to identify areas of good reconstruction quality and combine different such regions into one joint function. Then, we show how the shapes of unknown sound-soft scatterers are found as level curves of the total reconstructed field.
Abstract:
This paper aims to summarise the current performance of ozone data assimilation (DA) systems, to show where they can be improved, and to quantify their errors. It examines 11 sets of ozone analyses from 7 different DA systems. Two are numerical weather prediction (NWP) systems based on general circulation models (GCMs); the other five use chemistry transport models (CTMs). The systems examined contain either linearised or detailed ozone chemistry, or no chemistry at all. In most analyses, MIPAS (Michelson Interferometer for Passive Atmospheric Sounding) ozone data are assimilated; two assimilate SCIAMACHY (Scanning Imaging Absorption Spectrometer for Atmospheric Chartography) observations instead. Analyses are compared to independent ozone observations covering the troposphere, stratosphere and lower mesosphere during the period July to November 2003. Biases and standard deviations are largest, and show the largest divergence between systems, in the troposphere, in the upper troposphere/lower stratosphere, in the upper stratosphere and mesosphere, and in the Antarctic ozone-hole region. However, in any particular area, apart from the troposphere, at least one system can be found that agrees well with independent data. In general, none of the differences can be linked to the assimilation technique (Kalman filter, three- or four-dimensional variational methods, direct inversion) or to the type of system (CTM or NWP). Where results diverge, a main explanation is the way ozone is modelled. It is important to model transport at the tropical tropopause correctly, to avoid positive biases and excessive structure in the ozone field. In the southern hemisphere ozone hole, only the analyses which correctly model heterogeneous ozone depletion are able to reproduce the near-complete ozone destruction over the pole. In the upper stratosphere and mesosphere (above 5 hPa), some ozone photochemistry schemes caused large but easily remedied biases. The diurnal cycle of ozone in the mesosphere is not captured, except by the one system that includes a detailed treatment of mesospheric chemistry. These results indicate that when good observations are available for assimilation, the first priority for improving ozone DA systems is to improve the models. The analyses benefit strongly from the good quality of the MIPAS ozone observations. Using the analyses as a transfer standard, it is seen that MIPAS is ~5% higher than HALOE (Halogen Occultation Experiment) in the mid and upper stratosphere and mesosphere (above 30 hPa), and of order 10% higher than ozonesonde and HALOE values in the lower stratosphere (100 hPa to 30 hPa). Analyses based on SCIAMACHY total columns are almost as good as the MIPAS analyses; analyses based on SCIAMACHY limb profiles are worse in some areas, due to problems in the SCIAMACHY retrievals.
Abstract:
Differential protein expression analysis based on modification of selected amino acids with labelling reagents has become the major method of choice for quantitative proteomics. One such methodology, two-dimensional difference gel electrophoresis (2-D DIGE), uses a matched set of fluorescent N-hydroxysuccinimidyl (NHS) ester cyanine dyes to label lysine residues in different samples, which can be run simultaneously on the same gels. Here we report the use of iodoacetylated cyanine (ICy) dyes, which label cysteine thiols, for 2-D DIGE-based redox proteomics. Characterisation of ICy dye labelling in relation to its stoichiometry, sensitivity and specificity is described, as well as a comparison of ICy dye with NHS-Cy dye labelling and several protein staining methods. We have optimised conditions for labelling of nonreduced, denatured samples and report increased sensitivity for a subset of thiol-containing proteins, allowing accurate monitoring of redox-dependent thiol modifications and expression changes. Cysteine labelling was then combined with lysine labelling in a multiplex 2-D DIGE proteomic study of redox-dependent and ErbB2-dependent changes in epithelial cells exposed to oxidative stress. This study identifies differentially modified proteins involved in cellular redox regulation, protein folding, proliferative suppression, glycolysis and cytoskeletal organisation, revealing the complexity of the response to oxidative stress and the impact that overexpression of ErbB2 has on this response.