977 results for Full spatial domain computation


Relevance:

30.00%

Publisher:

Abstract:

Numerical modeling of the eddy currents induced in the human body by the pulsed field gradients in MRI presents a difficult computational problem. It requires an efficient and accurate computational method for high spatial resolution analyses with a relatively low input frequency. In this article, a new technique is described which allows the finite difference time domain (FDTD) method to be efficiently applied over a very large frequency range, including low frequencies. This is not the case in conventional FDTD-based methods. A method of implementing streamline gradients in FDTD is presented, as well as comparative analyses which show that correct source injection in the FDTD simulation plays a crucial role in obtaining accurate solutions. In particular, making use of the derivative of the input source waveform is shown to provide distinct benefits in accuracy over direct source injection. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent, and the source injection method has been verified against examples with analytical solutions. Results are presented showing the spatial distribution of gradient-induced electric fields and eddy currents in a complete body model.
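The key point of the abstract is the source-injection strategy: feeding the FDTD grid with the time derivative of the desired waveform rather than the waveform itself. The sketch below is only a minimal 1-D illustration of that idea, not the authors' body-model code; the grid size, Courant number, and Gaussian pulse parameters are arbitrary choices.

```python
import numpy as np

# Minimal 1-D FDTD sketch (normalized units, Courant number = 0.5).
# Compares direct injection of a low-frequency pulse with injection of
# its time derivative; all sizes and pulse parameters are illustrative.

nz, nt, src = 400, 1000, 100          # grid cells, time steps, source cell
courant = 0.5

def gaussian(t, t0=150.0, spread=40.0):
    return np.exp(-((t - t0) / spread) ** 2)

def d_gaussian(t, t0=150.0, spread=40.0):
    # analytic time derivative of the same pulse
    return -2.0 * (t - t0) / spread**2 * gaussian(t, t0, spread)

def run(source_fn):
    ez = np.zeros(nz)
    hy = np.zeros(nz)
    for n in range(nt):
        hy[:-1] += courant * (ez[1:] - ez[:-1])   # update H from E
        ez[1:]  += courant * (hy[1:] - hy[:-1])   # update E from H
        ez[src] += source_fn(float(n))            # soft source injection
    return ez

ez_direct     = run(gaussian)      # direct injection of the waveform
ez_derivative = run(d_gaussian)    # injection of its derivative, the idea
                                   # highlighted in the abstract above
print(ez_direct[:5], ez_derivative[:5])
```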

Relevance:

30.00%

Publisher:

Abstract:

A molecular approach was used to investigate a recently described candidate division of the domain Bacteria, TM7, currently known only from environmental 16S ribosomal DNA sequence data. A number of TM7-specific primers and probes were designed and evaluated. Fluorescence in situ hybridization (FISH) of a laboratory-scale bioreactor using two independent TM7-specific probes revealed a conspicuous sheathed-filament morphotype, fortuitously enriched in the reactor. Morphologically, the filament matched the description of the Eikelboom morphotype 0041-0675 widely associated with bulking problems in activated-sludge wastewater treatment systems. Transmission electron microscopy of the bioreactor sludge demonstrated that the sheathed-filament morphotype had a typical gram-positive cell envelope ultrastructure. Therefore, TM7 is only the third bacterial lineage recognized to have gram-positive representatives. TM7-specific FISH analysis of two full-scale wastewater treatment plant sludges, including the one used to seed the laboratory-scale reactor, indicated the presence of a number of morphotypes, including sheathed filaments. TM7-specific PCR clone libraries prepared from the two full-scale sludges yielded 23 novel TM7 sequences. Three subdivisions could be defined based on these data and publicly available sequences. Environmental sequence data and TM7-specific FISH analysis indicate that members of the TM7 division are present in a variety of terrestrial, aquatic, and clinical habitats. A highly atypical base substitution (Escherichia coli position 912; C to U) for bacterial 16S rRNAs was present in almost all TM7 sequences, suggesting that TM7 bacteria, like Archaea, may be streptomycin resistant at the ribosome level.

Relevance:

30.00%

Publisher:

Abstract:

The GRIP domain is a targeting sequence found in a family of coiled-coil peripheral Golgi proteins. Previously we demonstrated that the GRIP domain of p230/golgin245 is specifically recruited to tubulovesicular structures of the trans-Golgi network (TGN). Here we have characterized two novel Golgi proteins with functional GRIP domains, designated GCC88 and GCC185. GCC88 cDNA encodes a protein of 88 kDa, and GCC185 cDNA encodes a protein of 185 kDa. Both molecules are brefeldin A-sensitive peripheral membrane proteins and are predicted to have extensive coiled-coil regions with the GRIP domain at the C terminus. By immunofluorescence and immunoelectron microscopy, GCC88 and GCC185, and the GRIP protein golgin97, are all localized to the TGN of HeLa cells. Overexpression of full-length GCC88 leads to the formation of large electron-dense structures that extend from the trans-Golgi. These de novo structures contain GCC88 and co-stain for the TGN markers syntaxin 6 and TGN38 but not for alpha2,6-sialyltransferase, beta-COP, or the cis-Golgi marker GM130. The formation of these abnormal structures requires the N-terminal domain of GCC88. TGN38, which recycles between the TGN and the plasma membrane, was transported into and out of the GCC88-decorated structures. These data introduce two new GRIP domain proteins and implicate a role for GCC88 in the organization of a specific TGN subcompartment involved with membrane transport.

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: To estimate the spatial intensity of urban violence events using wavelet-based methods and emergency room data. METHODS: Information on victims attended at the emergency room of a public hospital in the city of São Paulo, Southeastern Brazil, from January 1, 2002 to January 11, 2003 was obtained from hospital records. The spatial distribution of 3,540 events was recorded, and a uniform random procedure was used to allocate records with incomplete addresses. Point process and wavelet analysis techniques were used to estimate the spatial intensity, defined as the expected number of events per unit area. RESULTS: Of all georeferenced points, 59% were accidents and 40% were assaults. The events show a non-homogeneous spatial distribution, with high concentration in two districts and along three large avenues in the southern area of the city of São Paulo. CONCLUSIONS: Hospital records, combined with methodological tools to estimate the intensity of events, are useful for studying urban violence. Wavelet analysis is useful for computing the expected number of events and their respective confidence bands for any sub-region and, consequently, for specifying risk estimates that could be used in decision-making processes for public policies.
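As a rough illustration of the estimation step described above, the sketch below bins georeferenced events on a regular grid and smooths the counts with a wavelet decomposition to obtain an intensity surface (expected events per unit area). It is a generic wavelet-smoothing sketch, not the estimator used in the study: the synthetic coordinates, the Haar basis, the decomposition level, and the threshold rule are all illustrative assumptions, and it relies on the PyWavelets package.

```python
import numpy as np
import pywt  # PyWavelets

# Toy event coordinates (km); in the study these would be the
# georeferenced emergency-room records.
rng = np.random.default_rng(0)
xy = rng.normal(loc=[10.0, 10.0], scale=2.0, size=(3540, 2))

# Bin events on a regular grid; counts / cell area ~ raw intensity.
nbins, extent = 64, (0.0, 20.0)
counts, xedges, yedges = np.histogram2d(
    xy[:, 0], xy[:, 1], bins=nbins, range=[extent, extent])
cell_area = ((extent[1] - extent[0]) / nbins) ** 2

# Wavelet smoothing: decompose, soft-threshold the detail coefficients,
# reconstruct. The Haar basis, level, and threshold are arbitrary here.
coeffs = pywt.wavedec2(counts, "haar", level=3)
thr = np.sqrt(2.0 * np.log(counts.size))          # illustrative threshold
smoothed = [coeffs[0]] + [
    tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
    for detail in coeffs[1:]
]
intensity = pywt.waverec2(smoothed, "haar") / cell_area  # events per km^2
print(intensity.max(), intensity.mean())
```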

Relevance:

30.00%

Publisher:

Abstract:

Consider a wireless sensor network (WSN) where a broadcast from a sensor node does not reach all sensor nodes in the network; such networks are often called multihop networks. Sensor nodes take individual sensor readings; however, in many cases it is relevant to compute aggregated quantities of these readings. In fact, the minimum and maximum of all sensor readings at an instant are often interesting because they indicate abnormal behavior: for example, a very high maximum temperature may indicate that a fire has broken out. In this context, we propose an algorithm for computing the min or max of sensor readings in a multihop network. This algorithm has the particularly interesting property that its time complexity does not depend on the number of sensor nodes; only the network diameter and the range of the value domain of the sensor readings matter.
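The sketch below is an idealized, collision-free, round-based simulation of the max aggregation described above: every node repeatedly keeps the maximum of its own estimate and its neighbours' estimates, so after a number of rounds equal to the network diameter every node holds the global maximum, regardless of how many nodes the network has. The actual algorithm also has to schedule access to the shared radio channel, which is where the size of the value domain enters; that part is not modelled here, and the topology and readings are made up.

```python
from collections import defaultdict

def max_aggregate(edges, readings, diameter):
    # Build an undirected adjacency structure.
    neighbors = defaultdict(set)
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    estimate = dict(readings)                 # node -> current max estimate
    for _ in range(diameter):                 # rounds = network diameter
        estimate = {u: max(estimate[v] for v in neighbors[u] | {u})
                    for u in estimate}        # keep the max heard this round
    return estimate

# Line topology a-b-c-d (diameter 3), arbitrary temperature readings.
edges = [("a", "b"), ("b", "c"), ("c", "d")]
readings = {"a": 21.0, "b": 23.5, "c": 74.0, "d": 22.1}
print(max_aggregate(edges, readings, diameter=3))
# every node ends with 74.0 -> possible fire alarm
```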

Relevance:

30.00%

Publisher:

Abstract:

Dissertation presented to obtain the degree of Doctor in Mathematics, in the specialty of Differential Equations, at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.

Relevance:

30.00%

Publisher:

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
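As a concrete illustration of the linear mixing model and of the constrained least-squares unmixing mentioned above, the sketch below simulates one mixed pixel, y = M a + n with a >= 0 and sum(a) = 1, and recovers the abundances with a generic constrained solver. The synthetic signatures, the noise level, and the choice of SLSQP are illustrative assumptions, not the formulation of reference [20].

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Linear mixing model: y = M a + n, with a >= 0 and sum(a) = 1.
bands, p = 50, 3                          # spectral bands, endmembers
M = np.abs(rng.normal(size=(bands, p)))   # synthetic endmember signatures
a_true = np.array([0.6, 0.3, 0.1])        # true abundance fractions
y = M @ a_true + 0.01 * rng.normal(size=bands)   # noisy mixed pixel

# Fully constrained least squares: nonnegativity + sum-to-one.
def objective(a):
    r = y - M @ a
    return r @ r

cons = ({"type": "eq", "fun": lambda a: np.sum(a) - 1.0},)
bounds = [(0.0, 1.0)] * p
a0 = np.full(p, 1.0 / p)
res = minimize(objective, a0, method="SLSQP", bounds=bounds, constraints=cons)
print("estimated abundances:", res.x)     # close to [0.6, 0.3, 0.1]
```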
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique for unmixing independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
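To make the abundance constraints concrete, the sketch below simulates pixels under the kind of generative model discussed in this chapter: abundance fractions drawn from a Dirichlet distribution (so positivity and full additivity hold by construction), mixed with endmember signatures, modulated by a crude topography factor, and corrupted by noise. All parameters are illustrative assumptions; this is only the forward model, not the chapter's mixture-of-Dirichlet inference algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Abundance fractions drawn from a Dirichlet distribution: each row is
# nonnegative and sums to one (positivity + full additivity), which is
# exactly the constraint the Dirichlet-based model enforces.
n_pixels, p, bands = 1000, 3, 50
alpha = np.array([2.0, 5.0, 3.0])              # illustrative concentration
A = rng.dirichlet(alpha, size=n_pixels)        # (n_pixels, p) abundances

M = np.abs(rng.normal(size=(bands, p)))        # synthetic endmember signatures
gamma = 1.0 + 0.1 * rng.normal(size=(n_pixels, 1))   # crude topography modulation
Y = gamma * (A @ M.T) + 0.005 * rng.normal(size=(n_pixels, bands))  # noisy pixels

print(A.sum(axis=1)[:5])   # each ~1.0 by construction
print(Y.shape)             # (1000, 50) simulated hyperspectral pixels
```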

Relevance:

30.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance:

30.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance:

30.00%

Publisher:

Abstract:

In-Band Full-DupleX (IB-FDX) is defined as the ability of nodes to transmit and receive signals simultaneously on the same channel. Conventional digital wireless networks do not implement it, since a node's own transmission signal causes interference to the signal it is trying to receive. However, recent studies attempt to overcome this obstacle, since doing so can potentially double the spectral efficiency of current wireless networks. Different mechanisms exist today that are able to reduce a significant part of the Self-Interference (SI), although specially tuned Medium Access Control (MAC) protocols are required to optimize its use. One of IB-FDX's biggest problems is that the nodes' interference range is extended, meaning that the area unusable for other transmissions and receptions is larger. This dissertation proposes using MultiPacket Reception (MPR) to address this issue and adapts an already existing Single-Carrier with Frequency-Domain Equalization (SC-FDE) receiver to IB-FDX. The performance analysis suggests that MPR and IB-FDX have a strong synergy and are able to achieve higher data rates when used together. Analytical models were used to identify the optimal transmission patterns and transmission power, which maximize the channel capacity with minimal energy consumption. These results were used to define a new MAC protocol, named Full-duplex Multipacket reception Medium Access Control (FM-MAC). FM-MAC was designed for a single-hop cellular infrastructure, where the Access Point (AP) and the terminals implement both IB-FDX and MPR. It divides the coverage range of the AP into a closer Full-DupleX (FDX) zone and a farther Half-DupleX (HDX) zone, and adds a tunable fairness mechanism to avoid terminal starvation. Simulation results show that this protocol provides efficient support for both HDX and FDX terminals, maximizing its capacity when more FDX terminals are used.
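As a toy illustration of the FDX/HDX coverage split described above, the sketch below assigns terminals to a full-duplex or half-duplex zone based on their distance to the AP. The 40 m threshold and the terminal positions are hypothetical, and none of FM-MAC's scheduling, MPR, or fairness mechanisms are modelled.

```python
import math

# Toy coverage split: terminals close enough to the AP operate in the
# full-duplex (FDX) zone, the rest in half-duplex (HDX). The threshold
# and positions below are hypothetical.

AP = (0.0, 0.0)
FDX_RADIUS = 40.0          # hypothetical FDX-zone radius in metres

terminals = {"t1": (10.0, 5.0), "t2": (35.0, 20.0), "t3": (80.0, 10.0)}

def zone(pos):
    d = math.dist(AP, pos)
    return "FDX" if d <= FDX_RADIUS else "HDX"

for name, pos in terminals.items():
    print(name, zone(pos))
# t1 -> FDX; t2 -> HDX (sqrt(35^2 + 20^2) is about 40.3 m); t3 -> HDX
```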

Relevance:

30.00%

Publisher:

Abstract:

Doctoral Program in Computer Science