904 results for Dense Set
Abstract:
On the basis of previous work, strange attractors in real physical systems are discussed. The Louwerier attractor is used as an example to illustrate the geometric structure and dynamical properties of a strange attractor. The strange attractor of a class of two-dimensional maps is then analysed. Under certain conditions, it is proved that the closure of the unstable manifold of a hyperbolic fixed point of the map is a strange attractor in real physical systems.
Abstract:
Faults can slip either aseismically or through episodic seismic ruptures, but we still do not understand the factors that determine the partitioning between these two modes of slip. This challenge can now be addressed thanks to the dense geodetic and seismological networks that have been deployed in various areas of active tectonics. The data from such networks, together with modern remote sensing techniques, allow the spatial and temporal variability of the slip mode to be documented, providing some insight. This is the approach taken in this study, which focuses on the Longitudinal Valley Fault (LVF) in Eastern Taiwan. This fault is particularly appropriate since its very fast slip rate (about 5 cm/yr) is accommodated by both seismic and aseismic slip. Deformation of anthropogenic features shows that aseismic creep accounts for a significant fraction of fault slip near the surface, but the fault also releases energy seismically, having produced five M_w>6.8 earthquakes in 1951 and 2003. Moreover, owing to the thrust component of slip, the fault zone is exhumed, which allows investigation of its deformation mechanisms. In order to put constraints on the factors that control the mode of slip, we apply a multidisciplinary approach that combines modeling of geodetic observations, structural analysis and numerical simulation of the "seismic cycle". Analyzing a dense set of geodetic and seismological data across the Longitudinal Valley, including campaign-mode GPS, continuous GPS (cGPS), leveling, accelerometric, and InSAR data, we document the partitioning between seismic and aseismic slip on the fault. For the period 1992 to 2011, we find that about 80-90% of slip on the LVF in the 0-26 km seismogenic depth range is actually aseismic. The clay-rich Lichi Mélange is identified as the key factor promoting creep at shallow depth.
Microstructural investigations show that deformation within the fault zone must have resulted from a combination of frictional sliding at grain boundaries, cataclasis and pressure-solution creep. Numerical modeling of earthquake sequences has been performed to investigate the possibility of reproducing the results of the kinematic inversion of geodetic and seismological data on the LVF. We first investigate the different modeling strategies that have been developed to explore the role and relative importance of various factors in the manner in which slip accumulates on faults. We compare the results of quasi-dynamic simulations and fully dynamic ones, and conclude that ignoring the transient wave-mediated stress transfers would be inappropriate. We therefore carry out fully dynamic simulations and succeed in qualitatively reproducing the wide range of observations for the southern segment of the LVF. We conclude that the spatio-temporal evolution of fault slip on the Longitudinal Valley Fault over 1997-2011 is consistent, to first order, with the predictions of a simple model in which a velocity-weakening patch is embedded in a velocity-strengthening area.
Abstract:
The main purpose of a gene interaction network is to map the relationships between genes that remain out of sight when a genomic study is tackled. DNA microarrays allow the expression of thousands of genes to be measured at the same time. These data constitute the numeric seed for the induction of gene networks. In this paper, we propose a new approach to building gene networks by means of Bayesian classifiers, variable selection and bootstrap resampling. The interactions induced by the Bayesian classifiers are based both on the expression levels and on the phenotype information of the supervised variable. Feature selection and bootstrap resampling add reliability and robustness to the overall process by removing false-positive findings. The consensus among all the induced models produces a hierarchy of dependences and, thus, of variables. Biologists can define the depth level of the model hierarchy, so the set of interactions and genes involved can vary from a sparse to a dense set. Experimental results show that these networks perform well on classification tasks. The biological validation matches previous biological findings and opens new hypotheses for future studies.
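As a rough illustration of the bootstrap-consensus idea described above (not the authors' Bayesian-classifier pipeline: the sample sizes, the 80% threshold and the simple correlation-based filter are all illustrative assumptions), a gene can be kept only if it is selected in a large fraction of bootstrap replicates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy expression matrix: 60 samples x 30 genes; only the first 3 genes
# carry signal about the binary phenotype (all sizes are made up).
n_samples, n_genes, k = 60, 30, 5
y = rng.integers(0, 2, n_samples)
X = rng.standard_normal((n_samples, n_genes))
X[:, :3] += 1.5 * y[:, None]          # informative genes

def top_k_by_correlation(X, y, k):
    """Rank genes by |corr(gene, phenotype)| and return the indices of the top k."""
    yc = y - y.mean()
    Xc = X - X.mean(axis=0)
    corr = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    return np.argsort(corr)[-k:]

# Bootstrap resampling: a gene enters the consensus only if it is selected
# in a large fraction of replicates, pruning false-positive findings.
counts = np.zeros(n_genes)
n_boot = 100
for _ in range(n_boot):
    idx = rng.integers(0, n_samples, n_samples)
    counts[top_k_by_correlation(X[idx], y[idx], k)] += 1

consensus = np.flatnonzero(counts / n_boot >= 0.8)
```

Lowering the consensus threshold plays the role of the "depth level" mentioned in the abstract: it moves the retained set from sparse to dense.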
Abstract:
Six independent studies have identified linkage to chromosome 18 for developmental dyslexia or general reading ability. Until now, no candidate genes have been identified to explain this linkage. Here, we set out to identify the gene(s) conferring susceptibility by a two-stage strategy of linkage and association analysis. Methodology/Principal Findings: Linkage analysis: 264 UK families and 155 US families, each containing at least one child diagnosed with dyslexia, were genotyped with a dense set of microsatellite markers on chromosome 18. Association analysis: using a discovery sample of 187 UK families, nearly 3000 SNPs were genotyped across the chromosome 18 dyslexia susceptibility candidate region. Following association analysis, the top-ranking SNPs were then genotyped in the remaining samples. The linkage analysis revealed a broad signal that spans approximately 40 Mb from 18p11.2 to 18q12.2. Following the association analysis and subsequent replication attempts, we observed consistent association with the same SNPs in three genes: melanocortin 5 receptor (MC5R), dymeclin (DYM) and neural precursor cell expressed, developmentally down-regulated 4-like (NEDD4L). Conclusions: Along with already published biological evidence, MC5R, DYM and NEDD4L make attractive candidates for dyslexia susceptibility genes. However, further replication and functional studies are still required.
Abstract:
We show that a set of fundamental solutions to the parabolic heat equation, each element corresponding to a point source located on a given surface, with the source points dense on this surface, constitutes a linearly independent and dense set with respect to the standard inner product of square-integrable functions, both on lateral and time boundaries. This result leads naturally to a method of numerically approximating solutions to the parabolic heat equation, denoted a method of fundamental solutions (MFS). A discussion of the convergence of such an approximation is included.
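As a rough one-dimensional sketch of the MFS idea (the source layout, collocation grid and test data below are illustrative assumptions, not taken from the paper), the solution is approximated by a linear combination of heat kernels whose sources lie outside the space-time domain, with coefficients fitted against boundary and initial data by least squares:

```python
import numpy as np

def heat_kernel(x, t):
    """Fundamental solution of the 1D heat equation u_t = u_xx (zero for t <= 0)."""
    xb, tb = np.broadcast_arrays(np.asarray(x, float), np.asarray(t, float))
    out = np.zeros(xb.shape)
    pos = tb > 0
    out[pos] = np.exp(-xb[pos]**2 / (4 * tb[pos])) / np.sqrt(4 * np.pi * tb[pos])
    return out

# Source points: outside the space-time domain [0,1] x (0,1]
src_x = np.concatenate([np.full(8, -0.5), np.full(8, 1.5)])
src_t = np.tile(np.linspace(-0.4, -0.05, 8), 2)

# Collocation points on the two lateral boundaries and the initial line
tc = np.linspace(0.05, 1.0, 20)
col_x = np.concatenate([np.zeros(20), np.ones(20), np.linspace(0, 1, 20)])
col_t = np.concatenate([tc, tc, np.zeros(20)])

# Design matrix: A[i, j] = Phi(x_i - xs_j, t_i - ts_j)
A = heat_kernel(col_x[:, None] - src_x[None, :], col_t[:, None] - src_t[None, :])

# Synthetic boundary data built from two of the sources, so it lies in the span
true_c = np.zeros(16); true_c[3] = 1.0; true_c[11] = -0.5
b = A @ true_c

coef, *_ = np.linalg.lstsq(A, b, rcond=None)
residual = np.linalg.norm(A @ coef - b)
```

For data that is not exactly in the span, the residual measures the quality of the boundary fit, which is where the density result proved in the paper matters.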
Abstract:
Let U be an open subset of a separable Banach space. Let F be the collection of all holomorphic mappings f from the open unit disc D ⊂ C into U such that f(D) is dense in U. We prove the lineability and density of F in appropriate spaces for different choices of U.
Abstract:
∗ The first and third authors were partially supported by the National Fund for Scientific Research at the Bulgarian Ministry of Science and Education under grant MM-701/97.
Abstract:
The relationship between multiple cameras viewing the same scene may be discovered automatically by finding corresponding points in the two views and then solving for the camera geometry. In networks with sparsely placed or low-resolution cameras, or in scenes with few distinguishable features, it may be difficult to find a sufficient number of reliable correspondences from which to compute geometry. This paper presents a method for extracting a larger number of correspondences from an initial set of putative correspondences, without any knowledge of the scene or the camera geometry. The method may be used to increase the number of correspondences and make geometry computations possible in cases where existing methods produce insufficient correspondences.
Abstract:
Feature-track matrix factorization based methods have been attractive solutions to the structure-from-motion (SfM) problem. The group motion of the feature points is analyzed to obtain the 3D information. It is well known that factorization formulations give rise to rank-deficient systems of equations. Even when enough constraints exist, the extracted models are sparse owing to the unavailability of pixel-level tracks. Pixel-level tracking of 3D surfaces is a difficult problem, particularly when the surface has very little texture, as in a human face. Only sparsely located feature points can be tracked, and tracking errors are inevitable along rotating low-texture surfaces. However, the 3D models of an object class lie in a subspace of the set of all possible 3D models. We propose a novel solution to the structure-from-motion problem which utilizes high-resolution 3D data obtained from a range scanner to compute a basis for this desired subspace. Adding subspace constraints during factorization also facilitates removal of tracking noise, which causes distortions outside the subspace. We demonstrate the effectiveness of our formulation by extracting the dense 3D structure of a human face and comparing it with a well-known structure-from-motion algorithm due to Brand.
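The subspace-constraint idea can be illustrated with a toy projection (illustrative only: in the paper the basis comes from range-scanner face models and the constraint is applied inside the factorization, not as a one-shot projection):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: the columns of B span the subspace of plausible 3D
# models of the object class; x_noisy is a reconstructed shape corrupted
# by tracking noise that pushes it outside the subspace.
dim, rank = 50, 5
B, _ = np.linalg.qr(rng.standard_normal((dim, rank)))   # orthonormal basis
x_clean = B @ rng.standard_normal(rank)
x_noisy = x_clean + 0.1 * rng.standard_normal(dim)

# Subspace constraint: orthogonal projection removes the noise component
# lying outside the subspace, leaving only the in-subspace part.
x_proj = B @ (B.T @ x_noisy)

err_noisy = np.linalg.norm(x_noisy - x_clean)
err_proj = np.linalg.norm(x_proj - x_clean)
```

Because most of the noise energy lives in the large orthogonal complement, the projected estimate is strictly closer to the clean shape.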
Abstract:
Simplified equations are derived for a granular flow in the 'dense' limit, where the volume fraction is close to that for dynamical arrest, and the 'shallow' limit, where the stream-wise length for flow development (L) is large compared with the cross-stream height (h). The mass and diameter of the particles are set equal to 1 in the analysis without loss of generality. In the dense limit, the equations are simplified by taking advantage of the power-law divergence of the pair distribution function, χ ∝ (φ_ad − φ)^(−α), and the faster divergence of the derivative ρ(dχ/dρ) ∼ (dχ/dφ), where ρ and φ are the density and volume fraction, and φ_ad is the volume fraction for arrested dynamics. When the height h is much larger than the conduction length, the energy equation reduces to an algebraic balance between the rates of production and dissipation of energy, and the stress is proportional to the square of the strain rate (Bagnold law). In the shallow limit, the stress reduces to a simplified Bagnold stress, in which all components of the stress are proportional to (∂u_x/∂y)², the square of the cross-stream (y) derivative of the stream-wise (x) velocity. In the simplified equations for dense shallow flows, the inertial terms in the y momentum equation are neglected in the shallow limit because they are O(h/L) smaller than the divergence of the stress. The resulting model contains two equations: a mass conservation equation, which reduces to a solenoidal condition on the velocity in the incompressible limit, and a stream-wise momentum equation containing just one parameter B, a combination of the Bagnold coefficients and their derivatives with respect to volume fraction. The leading-order dense shallow-flow equations, as well as the first correction due to density variations, are analysed for two representative flows.
The first is the development from a plug flow to a fully developed Bagnold profile for flow down an inclined plane. The analysis shows that the flow development length is (ρ̄h³/B), where ρ̄ is the mean density, and this length is numerically estimated from previous simulation results. The second example is the development of the boundary layer at the base of the flow when a plug flow (with a slip condition at the base) encounters a rough base, in the limit where the momentum boundary-layer thickness is small compared with the flow height. Analytical solutions can be found only when the stream-wise velocity far from the surface varies as x^F, where x is the stream-wise distance from the start of the rough base and F is an exponent. The boundary-layer thickness increases as (l²x)^(1/3) for all values of F, where the length scale l = √(2B/ρ̄). The analysis reveals important differences between granular flows and the flows of Newtonian fluids. The Reynolds number (the ratio of inertial and viscous terms) turns out to depend only on the layer height and the Bagnold coefficients, and is independent of the flow velocity, because both the inertial terms in the conservation equations and the divergence of the stress depend on the square of the velocity or velocity gradients. The compressibility number (the ratio of the variation in volume fraction to the mean volume fraction) is independent of the flow velocity and layer height, and depends only on the volume fraction and the Bagnold coefficients.
Abstract:
The density distribution of inhomogeneous dense deuterium-tritium plasmas in laser fusion is revealed by the energy loss of fast protons passing through the plasma. In our simulation of plasma density diagnostics, the fast protons used for the diagnostics may be generated in the laser-plasma interaction. Dividing a two-dimensional area into grids and knowing the initial and final energies of the protons, we obtain a large, linear, ill-posed equation set for the densities of all grids, which is solved with the Tikhonov regularization method. We find that the accuracy of the scheme with four proton sources is better than those of schemes with fewer than four proton sources. We have also performed the density reconstruction for four proton sources with and without assuming a circularly symmetric density distribution, and find that the accuracy is better for the reconstruction assuming circular symmetry. The error is about 9% when no noise is added to the final energy for the reconstruction with four proton sources assuming circular symmetry. The accuracies for different random noises in the final proton energies with four proton sources are also calculated.
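The regularized inversion described above can be sketched as follows (a toy sketch, not the paper's simulation: the forward matrix, noise level and regularization weight are illustrative assumptions). Tikhonov regularization replaces the ill-posed system Ax = b with the minimization of ||Ax − b||² + λ²||x||², which is itself an ordinary least-squares problem on an augmented system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward model: each row encodes one proton's path lengths
# through the grid cells; b holds the measured energy losses.
n_rays, n_cells = 120, 40
A = rng.random((n_rays, n_cells))
x_true = np.abs(np.sin(np.linspace(0, 3, n_cells)))     # "density" profile
b = A @ x_true + 0.01 * rng.standard_normal(n_rays)     # noisy measurements

def tikhonov(A, b, lam):
    """Solve min ||Ax - b||^2 + lam^2 ||x||^2 via the augmented least-squares system."""
    n = A.shape[1]
    A_aug = np.vstack([A, lam * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

x_rec = tikhonov(A, b, lam=0.1)
rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```

The weight λ trades fidelity to the measurements against suppression of noise-amplified components; choosing it is the delicate step in any real reconstruction.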
Abstract:
The scalability of CMOS technology has driven computation into a diverse range of applications across the power-consumption, performance and size spectra. Communication is a necessary adjunct to computation, and whether it pushes data from node to node in a high-performance computing cluster or from the receiver of a wireless link to a neural stimulator in a biomedical implant, interconnect can take up a significant portion of the overall system power budget. Although a single interconnect methodology cannot address such a broad range of systems efficiently, a number of key design concepts enable good interconnect design in the age of highly scaled CMOS: an emphasis on highly digital approaches to solving ‘analog’ problems, hardware sharing between links as well as between different functions (such as equalization and synchronization) in the same link, and adaptive hardware that changes its operating parameters to mitigate not only variation in the fabrication of the link, but also link conditions that change over time. These concepts are demonstrated through two design examples at the extremes of the power and performance spectra.
A novel all-digital clock and data recovery technique for high-performance, high-density interconnect has been developed. Two independently adjustable clock phases are generated from a delay line calibrated to 2 UI. One clock phase is placed in the middle of the eye to recover the data, while the other is swept across the delay line. The samples produced by the two clocks are compared to generate eye information, which is used to determine the best phase for data recovery. The functions of the two clocks are swapped after the data phase is updated; this ping-pong action allows an infinite delay range without the use of a PLL or DLL. The scheme's generalized sampling and retiming architecture is used in a sharing technique that saves power and area in high-density interconnect. The eye information generated is also useful for tuning an adaptive equalizer, circumventing the need for dedicated adaptation hardware.
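The eye-centering step described above can be sketched in software (a toy illustration, not the thesis circuit; the tap count and per-tap error counts are made up). The scanning clock sweeps the delay-line taps, and the data phase is placed at the centre of the widest error-free run, treating the unit interval as circular:

```python
# Illustrative sketch: pick the data-recovery phase from a per-tap eye scan.

def best_phase(errors):
    """errors[i] = bit errors observed at delay-line tap i; returns the tap
    index at the centre of the longest zero-error run (circular)."""
    n = len(errors)
    doubled = errors + errors          # unwrap the circular sweep
    best_len, best_start, run, start = 0, 0, 0, 0
    for i, e in enumerate(doubled):
        if e == 0:
            if run == 0:
                start = i
            run += 1
            # only count runs that begin in the first copy
            if run > best_len and start < n:
                best_len, best_start = run, start
        else:
            run = 0
    best_len = min(best_len, n)
    return (best_start + best_len // 2) % n

taps = [5, 0, 0, 0, 0, 0, 3, 7, 9, 4]   # hypothetical eye scan over 10 taps
centre = best_phase(taps)                # tap 3, centre of the open eye
```

In hardware this decision is re-evaluated continuously, so the recovered phase tracks drift in the link.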
On the other side of the performance/power spectra, a capacitive proximity interconnect has been developed to support 3D integration of biomedical implants. In order to integrate more functionality while staying within size limits, implant electronics can be embedded onto a foldable parylene (‘origami’) substrate. Many of the ICs in an origami implant will be placed face-to-face with each other, so wireless proximity interconnect can be used to increase communication density while decreasing implant size, as well as facilitate a modular approach to implant design, where pre-fabricated parylene-and-IC modules are assembled together on-demand to make custom implants. Such an interconnect needs to be able to sense and adapt to changes in alignment. The proposed array uses a TDC-like structure to realize both communication and alignment sensing within the same set of plates, increasing communication density and eliminating the need to infer link quality from a separate alignment block. In order to distinguish the communication plates from the nearby ground plane, a stimulus is applied to the transmitter plate, which is rectified at the receiver to bias a delay generation block. This delay is in turn converted into a digital word using a TDC, providing alignment information.
Abstract:
Commercial far-range (>10 m) infrastructure spatial data collection methods are not completely automated. They require a significant amount of manual post-processing work, and in some cases the equipment costs are substantial. This paper presents a method that is the first step of a stereo videogrammetric framework and holds promise for addressing these issues. Under this method, video streams are initially collected from a calibrated set of two video cameras. For each pair of simultaneous video frames, visual feature points are detected and their spatial coordinates are then computed. The result, in the form of a sparse 3D point cloud, is the basis for the next steps in the framework (i.e., camera motion estimation and dense 3D reconstruction). A set of data collected from an ongoing infrastructure project is used to show the merits of the method. A comparison with existing tools is also shown, to indicate the performance differences of the proposed method in the level of automation and the accuracy of results.
Abstract:
We present a method for producing dense Active Appearance Models (AAMs), suitable for video-realistic synthesis. To this end we estimate a joint alignment of all training images using a set of pairwise registrations and ensure that these pairwise registrations are only calculated between similar images. This is achieved by defining a graph on the image set whose edge weights correspond to registration errors and computing a bounded diameter minimum spanning tree (BDMST). Dense optical flow is used to compute pairwise registration and we introduce a flow refinement method to align small scale texture. Once registration between training images has been established we propose a method to add vertices to the AAM in a way that minimises error between the observed flow fields and a flow field interpolated between the AAM mesh points. We demonstrate a significant improvement in model compactness using the proposed method and show it dealing with cases that are problematic for current state-of-the-art approaches.
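The spanning-tree step can be illustrated in miniature (a sketch only: the paper computes a bounded-diameter minimum spanning tree, a harder constrained problem; shown below is plain Kruskal MST on a hypothetical registration-error graph, the unconstrained core of that step):

```python
# Build a graph whose edge weights are pairwise registration errors and
# extract a minimum spanning tree with Kruskal's algorithm.

def mst_kruskal(n, edges):
    """edges: list of (weight, u, v); returns the list of tree edges (u, v)."""
    parent = list(range(n))

    def find(a):
        # union-find root lookup with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    tree = []
    for w, u, v in sorted(edges):       # cheapest registrations first
        ru, rv = find(u), find(v)
        if ru != rv:                    # keep the edge only if it joins components
            parent[ru] = rv
            tree.append((u, v))
    return tree

# Hypothetical registration errors between 5 training images
edges = [(0.2, 0, 1), (0.9, 0, 2), (0.3, 1, 2), (0.4, 2, 3), (0.8, 1, 3), (0.5, 3, 4)]
tree = mst_kruskal(5, edges)
```

The diameter bound in the paper additionally caps the length of registration chains, limiting the accumulation of alignment error along tree paths.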
Abstract:
The linear and nonlinear properties of low-frequency electrostatic excitations of charged dust particles (or defects) in a dense, collisionless, unmagnetized Thomas-Fermi plasma are investigated. A fully ionized three-component model plasma consisting of electrons, ions, and negatively charged massive dust grains is considered. Electrons and ions are assumed to be in a degenerate quantum state, obeying the Thomas-Fermi density distribution, whereas the inertial dust component is described by a set of classical fluid equations. Considering large-amplitude stationary-profile travelling waves in a moving reference frame, the fluid evolution equations are reduced to a pseudo-energy-balance equation involving a Sagdeev-type potential function. The analysis describes the dynamics of supersonic dust-acoustic solitary waves in Thomas-Fermi plasmas, and provides exact predictions for their dynamical characteristics, whose dependence on relevant parameters (namely, the ion-to-electron Fermi temperature ratio and the dust concentration) is investigated. An alternative route is also adopted, by assuming weakly varying small-amplitude disturbances off equilibrium and then using a multiscale perturbation technique to derive a Korteweg–de Vries equation for the electrostatic potential, finally solving for electric-potential pulses (electrostatic solitons). A critical comparison between the two methods reveals that they agree exactly in the small-amplitude, weakly superacoustic limit. The dust concentration (Havnes) parameter h = Z_d0 n_d0/n_e0 affects the propagation characteristics by modifying the phase speed, as do the electron/ion Fermi temperatures. Our results aim at elucidating the characteristics of electrostatic excitations in dust-contaminated dense plasmas, e.g., in metallic electronic devices, and arguably also in supernova environments, where charged dust defects may occur in the quantum plasma regime.
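The two analytical routes mentioned above can be sketched in generic form (a sketch only: the pseudo-potential V and the coefficients A and B depend on the plasma parameters and are not given in the abstract):

```latex
% Sagdeev route: the fluid equations reduce to a pseudo-energy balance
% for the electrostatic potential \varphi in the co-moving coordinate \xi.
\frac{1}{2}\left(\frac{d\varphi}{d\xi}\right)^{2} + V(\varphi) = 0

% Perturbative route: a multiscale expansion yields a Korteweg--de Vries
% equation for the leading-order potential perturbation \varphi_{1},
\partial_{\tau}\varphi_{1} + A\,\varphi_{1}\,\partial_{\xi}\varphi_{1}
  + B\,\partial_{\xi}^{3}\varphi_{1} = 0 ,

% whose solitary-pulse solution has the familiar sech-squared profile
% with amplitude \varphi_{m}, speed u and width W.
\varphi_{1} = \varphi_{m}\,\operatorname{sech}^{2}\!\left(\frac{\xi - u\tau}{W}\right)
```

The stated agreement of the two methods in the small-amplitude, weakly superacoustic limit corresponds to expanding V(φ) about its equilibrium and recovering the sech² pulse.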