34 results for tag "data structure"

in the Cambridge University Engineering Department Publications Database


Relevance:

100.00%

Publisher:

Relevance:

90.00%

Publisher:

Abstract:

We describe an RFID tag reading system for reading one or more RFID tags, the system comprising an RF transmitter and an RF receiver, a plurality of transmit/receive antennas coupled to said RF transmitter and to said RF receiver to provide spatial transmit/receive signal diversity, and a tag signal decoder coupled to at least said RF receiver, wherein said system is configured to combine received RF signals from said antennas to provide a combined received RF signal, wherein said RF receiver has said combined received RF signal as an input; wherein said antennas are spaced apart from one another sufficiently for one said antenna not to be within the near field of another said antenna; wherein said system is configured to perform a tag inventory cycle comprising a plurality of tag read rounds to read said tags, a said tag read round comprising transmission of one or more RF tag interrogation signals simultaneously from said plurality of antennas and receiving a signal from one or more of said tags, a said tag read round having a set of time slots during which a said tag is able to transmit tag data including a tag ID for reception by a said antenna; and wherein said system is configured to perform, during a said tag inventory cycle, one or both of: a change in a frequency of said tag interrogation signals transmitted simultaneously from said plurality of antennas, and a change in a relative phase of a said RF tag interrogation signal transmitted from one of said antennas with respect to another of said antennas.
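
To make the claimed inventory-cycle logic concrete, here is a minimal simulation sketch: slotted tag reads in which the carrier frequency and the relative inter-antenna phase change between rounds. This is a toy model, not the patented implementation; the channel set, phase steps, slot count, and the device of modelling unreadable field configurations as per-tag "nulls" are all illustrative assumptions.

    # Minimal sketch (assumptions as noted above), not the patented design.
    import random

    FREQS = [865.7e6, 866.3e6, 866.9e6]   # assumed channel set (Hz)
    PHASES = [0.0, 0.5, 1.0, 1.5]         # assumed relative phases (pi rad)

    def read_round(tags, freq, phase, n_slots=16):
        """One read round: all antennas transmit simultaneously; each tag
        that is energised at this (freq, phase) picks a random reply slot."""
        replies = {}
        for tag_id, nulls in tags.items():
            if (freq, phase) in nulls:    # tag sits in a spatial null
                continue
            replies.setdefault(random.randrange(n_slots), []).append(tag_id)
        # only collision-free slots yield a decodable tag ID
        return {ids[0] for ids in replies.values() if len(ids) == 1}

    def inventory_cycle(tags, n_rounds=12):
        """Vary frequency and relative phase across rounds so tags that are
        unreadable in one field configuration are caught in another."""
        seen = set()
        for i in range(n_rounds):
            freq, phase = FREQS[i % len(FREQS)], PHASES[i % len(PHASES)]
            pending = {t: n for t, n in tags.items() if t not in seen}
            seen |= read_round(pending, freq, phase)
        return seen

    # toy usage: tag "B" cannot be read at the first frequency/phase pair
    tags = {"A": set(), "B": {(FREQS[0], PHASES[0])}, "C": set()}
    print(sorted(inventory_cycle(tags)))   # expect ['A', 'B', 'C']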

Relevance:

30.00%

Publisher:

Abstract:

The detailed understanding of the electronic properties of carbon-based materials requires the determination of their electronic structure, and more precisely the calculation of their joint density of states (JDOS) and dielectric constant. Low electron energy loss spectroscopy (EELS) provides a continuous spectrum which represents all the excitations of the electrons within the material, with energies ranging between zero and about 100 eV. EELS is therefore potentially more powerful than conventional optical spectroscopy, which has an intrinsic upper information limit of about 6 eV due to absorption of light by the optical components of the system or the ambient. However, when analysing EELS data, the extraction of the single-scattered data needed for Kramers-Kronig calculations depends on the deconvolution of the zero-loss peak from the raw data. This procedure is particularly critical when attempting to study the near-bandgap region of materials with a bandgap below 1.5 eV. In this paper, we have calculated the electronic properties of three widely studied carbon materials, namely amorphous carbon (a-C), tetrahedral amorphous carbon (ta-C) and C60 fullerite crystal. The JDOS curve starts from zero for energy values below the bandgap and then rises at a rate that depends on whether the material has a direct or an indirect bandgap. Extrapolating a fit to the data immediately above the bandgap, in the stronger energy-loss region, was used to obtain an accurate value for the bandgap energy and to determine whether the bandgap is direct or indirect in character. Particular problems relating to the extraction of the single-scattered data for these materials are also addressed. The ta-C and C60 fullerite materials are found to be direct-bandgap-like semiconductors with bandgaps of 2.63 and 1.59 eV, respectively. The electronic structure of a-C, on the other hand, was unobtainable because its bandgap is so small that most of the information is contained in the first 1.2 eV of the spectrum, a region removed during the zero-loss deconvolution.
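
A sketch of the bandgap-determination step described here: fit the rising edge of the JDOS with A(E - Eg)^n and compare a direct-gap edge exponent against an indirect-gap one. The abstract does not give the exact fitting form, so the model, the exponents (n ~ 1/2 direct, n ~ 2 indirect) and the synthetic data below are assumptions made purely for illustration.

    # Hedged sketch of edge fitting; not the paper's actual procedure.
    import numpy as np
    from scipy.optimize import curve_fit

    def jdos_edge(E, A, Eg, n):
        return A * np.clip(E - Eg, 0.0, None) ** n

    def fit_gap(E, jdos, n):
        """Fit amplitude and gap energy for a fixed edge exponent n."""
        popt, _ = curve_fit(lambda E, A, Eg: jdos_edge(E, A, Eg, n),
                            E, jdos, p0=[1.0, E[0]])
        resid = jdos - jdos_edge(E, *popt, n)
        return popt[1], float(np.sum(resid ** 2))

    # synthetic edge resembling a direct gap at 2.63 eV (the ta-C value)
    E = np.linspace(2.0, 4.0, 200)
    jdos = jdos_edge(E, 1.0, 2.63, 0.5) + np.random.normal(0, 0.01, E.size)

    for n, label in [(0.5, "direct"), (2.0, "indirect")]:
        Eg, sse = fit_gap(E, jdos, n)
        print(f"{label}: Eg = {Eg:.2f} eV, SSE = {sse:.4f}")

The exponent giving the lower residual indicates the likelier edge character, mirroring the direct/indirect determination described in the abstract.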

Relevance:

30.00%

Publisher:

Abstract:

Learning is often understood as an organism's gradual acquisition of the association between a given sensory stimulus and the correct motor response. Mathematically, this corresponds to regressing a mapping between the set of observations and the set of actions. Recently, however, it has been shown both in cognitive and motor neuroscience that humans are not only able to learn particular stimulus-response mappings, but are also able to extract abstract structural invariants that facilitate generalization to novel tasks. Here we show how such structure learning can enhance facilitation in a sensorimotor association task performed by human subjects. Using regression and reinforcement learning models, we show that the observed facilitation cannot be explained by these basic models of learning stimulus-response associations. We show, however, that the observed data can be explained by a hierarchical Bayesian model that performs structure learning. In line with previous results from cognitive tasks, this suggests that hierarchical Bayesian inference might provide a common framework to explain both the learning of specific stimulus-response associations and the learning of abstract structures that are shared by different task environments.
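
A minimal sketch of the kind of facilitation at issue, under a toy assumption that each task is a linear stimulus-response map y = w*x with per-task weights drawn from a shared structural distribution: a learner that extracts this shared distribution from past tasks (empirical Bayes) starts a novel task with a far tighter prior than a flat learner. This is illustrative only, not the paper's model.

    # Toy hierarchical-prior sketch; all parameter values are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    mu_true, tau_true, noise = 2.0, 0.3, 0.5

    # observe a few (x, y) pairs from several past tasks
    past_w = rng.normal(mu_true, tau_true, size=8)
    w_hats = []
    for w in past_w:
        x = rng.normal(size=20)
        y = w * x + rng.normal(0, noise, size=20)
        w_hats.append(np.sum(x * y) / np.sum(x * x))   # per-task estimate

    # "structure learning": empirical estimate of the shared prior
    mu_hat, tau_hat = np.mean(w_hats), np.std(w_hats)

    # novel task: a single observation y1 = w*x1 + eps
    w_new = rng.normal(mu_true, tau_true)
    x1 = 1.0
    y1 = w_new * x1 + rng.normal(0, noise)

    def posterior_mean(prior_mu, prior_var):
        # conjugate Gaussian update for w given the one observation
        prec = 1 / prior_var + x1 ** 2 / noise ** 2
        return (prior_mu / prior_var + x1 * y1 / noise ** 2) / prec

    print("true w:", round(w_new, 3))
    print("structured prior:", round(posterior_mean(mu_hat, tau_hat**2), 3))
    print("flat prior:", round(posterior_mean(0.0, 100.0), 3))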

Relevance:

30.00%

Publisher:

Abstract:

MOTIVATION: We present a method for directly inferring transcriptional modules (TMs) by integrating gene expression and transcription factor binding (ChIP-chip) data. Our model extends a hierarchical Dirichlet process mixture model to allow data fusion on a gene-by-gene basis. This encodes the intuition that co-expression and co-regulation are not necessarily equivalent and hence we do not expect all genes to group similarly in both datasets. In particular, it allows us to identify the subset of genes that share the same structure of transcriptional modules in both datasets. RESULTS: We find that by working on a gene-by-gene basis, our model is able to extract clusters with greater functional coherence than existing methods. By combining gene expression and transcription factor binding (ChIP-chip) data in this way, we are better able to determine the groups of genes that are most likely to represent underlying TMs. AVAILABILITY: If interested in the code for the work presented in this article, please contact the authors. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
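
The gene-by-gene fusion idea can be sketched generatively: each gene carries a latent indicator for whether its expression profile and its ChIP-chip profile share a single module assignment. The paper uses a hierarchical Dirichlet process mixture for the clustering itself; the finite mixture and fusion probability below are stand-in assumptions, shown only to make concrete the intuition that not all genes group similarly in both datasets.

    # Generative stand-in for the fusion model; not the paper's HDP code.
    import numpy as np

    rng = np.random.default_rng(1)
    n_genes, K = 100, 5
    fusion_prob = 0.7                  # assumed prior P(gene is "fused")

    fused = rng.random(n_genes) < fusion_prob
    z_expr = rng.integers(K, size=n_genes)            # expression clusters
    z_chip = np.where(fused, z_expr,                  # fused genes share z
                      rng.integers(K, size=n_genes))  # others cluster freely

    # genes whose assignments coincide in both datasets are the candidates
    # for transcriptional modules supported by expression AND binding data
    candidates = np.flatnonzero(z_expr == z_chip)
    print(f"{candidates.size}/{n_genes} genes agree across datasets; "
          f"{int(fused.sum())} were generated as truly fused")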

Relevance:

30.00%

Publisher:

Abstract:

Deep belief networks are a powerful way to model complex probability distributions. However, learning the structure of a belief network, particularly one with hidden units, is difficult. The Indian buffet process has been used as a nonparametric Bayesian prior on the directed structure of a belief network with a single infinitely wide hidden layer. In this paper, we introduce the cascading Indian buffet process (CIBP), which provides a nonparametric prior on the structure of a layered, directed belief network that is unbounded in both depth and width, yet allows tractable inference. We use the CIBP prior with the nonlinear Gaussian belief network so each unit can additionally vary its behavior between discrete and continuous representations. We provide Markov chain Monte Carlo algorithms for inference in these belief networks and explore the structures learned on several image data sets.
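
A minimal sketch of drawing a network structure from such a prior, assuming the standard Indian buffet process sampling scheme: units in layer m act as "customers" choosing parent units ("dishes") in layer m+1, and the cascade stops when a layer acquires no parents. This draws a prior sample to show the unbounded depth and width; it is not an inference algorithm.

    # Prior sample from a cascading IBP; hyperparameters are assumptions.
    import numpy as np

    rng = np.random.default_rng(2)

    def ibp_layer(n_customers, alpha):
        """Sample an IBP binary matrix: rows = child units, cols = parents."""
        dishes = []                          # dishes[k] = count of takers
        rows = []
        for i in range(1, n_customers + 1):
            row = [rng.random() < m / i for m in dishes]
            for k, taken in enumerate(row):
                dishes[k] += taken
            n_new = rng.poisson(alpha / i)   # brand-new dishes for customer i
            row += [True] * n_new
            dishes += [1] * n_new
            rows.append(row)
        Z = np.zeros((n_customers, len(dishes)), dtype=bool)
        for i, row in enumerate(rows):
            Z[i, :len(row)] = row
        return Z

    def sample_cibp(n_visible=10, alpha=1.5, max_depth=20):
        layers, n = [], n_visible
        while n > 0 and len(layers) < max_depth:
            Z = ibp_layer(n, alpha)
            layers.append(Z)
            n = Z.shape[1]                   # parents become next layer's units
        return layers

    for d, Z in enumerate(sample_cibp()):
        print(f"layer {d}: {Z.shape[0]} units -> {Z.shape[1]} parents")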

Relevance:

30.00%

Publisher:

Abstract:

Many data are naturally modeled by an unobserved hierarchical structure. In this paper we propose a flexible nonparametric prior over unknown data hierarchies. The approach uses nested stick-breaking processes to allow for trees of unbounded width and depth, where data can live at any node and are infinitely exchangeable. One can view our model as providing infinite mixtures where the components have a dependency structure corresponding to an evolutionary diffusion down a tree. By using a stick-breaking approach, we can apply Markov chain Monte Carlo methods based on slice sampling to perform Bayesian inference and simulate from the posterior distribution on trees. We apply our method to hierarchical clustering of images and topic modeling of text data.
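
A hedged sketch of drawing one datum's node assignment from a tree of unbounded width and depth via nested stick-breaking, in the spirit of the prior described here (the exact hyperparameter choices, such as depth-dependent concentrations, are omitted as assumptions). Each node keeps a stop probability and a stick-breaking weight sequence over its children; data may live at any node.

    # Nested stick-breaking over a lazily grown tree; a sketch, not the
    # paper's implementation.
    import random

    ALPHA, GAMMA = 1.0, 1.0   # assumed stop / branching concentrations

    class Node:
        def __init__(self):
            self.nu = random.betavariate(1, ALPHA)   # P(datum stops here)
            self.psis = []                           # child stick weights
            self.children = []

        def descend(self, path=()):
            if random.random() < self.nu:
                return path                          # datum lives at this node
            j = 0
            while True:                              # lazily extend the sticks
                if j == len(self.children):
                    self.psis.append(random.betavariate(1, GAMMA))
                    self.children.append(Node())
                if random.random() < self.psis[j]:
                    return self.children[j].descend(path + (j,))
                j += 1

    root = Node()
    print([root.descend() for _ in range(5)])  # paths such as (), (0,), (1, 0)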

Relevance:

30.00%

Publisher:

Abstract:

Engineering companies face many challenges today, such as increased competition, higher expectations from consumers and decreasing product lifecycle times. Product development times must therefore be reduced to meet these challenges. Concurrent engineering, reuse of engineering knowledge and the use of advanced methods and tools are among the ways of reducing product development times. Concurrent engineering is crucial in making sure that products are designed with all issues considered simultaneously. The reuse of engineering knowledge allows existing solutions to be reused, and can also help to avoid the mistakes made in previous designs. Computer-based tools are used to store information, automate tasks, distribute work, perform simulation and so forth. This research concerns the evaluation of tools that can be used to support the design process. These tools are evaluated in terms of their capture of the information generated during the design process; this information is vital for the reuse of knowledge. Present CAD systems store only information on the final definition of the product, such as geometry, materials and manufacturing processes. Product Data Management (PDM) systems can manage all this CAD information along with other product-related information. The research includes the evaluation of two PDM systems, Windchill and Metaphase, using the design of a single-handed water tap as a case study. The two PDMs were then compared with PROSUS/DDM using the same case study. PROSUS is the Process-Based Support System proposed by [Blessing 94]; the Design Data Model (DDM) is the product data model that incorporates PROSUS. The results look promising: PROSUS/DDM is able to capture most design information and structure, and to present it logically. The design process and product information are related and stored within the DDM structure. The PDMs can capture most design information, but information from the early stages of design is stored only as unstructured documentation. Some problems were found with PROSUS/DDM; a proposal is made that may make it possible to resolve these problems, but this will require further research.

Relevance:

30.00%

Publisher:

Abstract:

Kolmogorov's two-thirds law, ⟨(Δv)^2⟩ ∼ ε^(2/3) r^(2/3), and five-thirds law, E(k) ∼ ε^(2/3) k^(-5/3), are formally equivalent in the limit of vanishing viscosity, ν → 0. However, for most Reynolds numbers encountered in laboratory-scale experiments or numerical simulations, it is invariably easier to observe the five-thirds law. By creating artificial fields of isotropic turbulence composed of a random sea of Gaussian eddies whose size and energy distribution can be controlled, we show why this is the case. The energy of eddies of scale s is shown to vary as s^(2/3), in accordance with Kolmogorov's 1941 law, and we vary the range of scales, γ = s_max/s_min, in any one realisation from γ = 25 to γ = 800. This is equivalent to varying the Reynolds number in an experiment from R_λ = 60 to R_λ = 600. While there is some evidence of a five-thirds law for γ > 50 (R_λ > 100), the two-thirds law only starts to become apparent when γ approaches 200 (R_λ ∼ 240). The reason for this discrepancy is that the second-order structure function is a poor filter, mixing information about energy and enstrophy, and about scales larger and smaller than r. In particular, in the inertial range, ⟨(Δv)^2⟩ takes the form of a mixed power law, a_1 + a_2 r^2 + a_3 r^(2/3), where a_2 r^2 tracks the variation in enstrophy and a_3 r^(2/3) the variation in energy. These findings are shown to be consistent with experimental data, where the pollution of the r^(2/3) law by the enstrophy contribution, a_2 r^2, is clearly evident. We show that higher-order structure functions (of even order) suffer from a similar deficiency.
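
The mixing argument lends itself to a quick numerical check: build a synthetic 1-D signal with an imposed k^(-5/3) spectrum over a finite band, compute the second-order structure function S2(r) = ⟨(v(x+r) - v(x))^2⟩, and fit the mixed law a_1 + a_2 r^2 + a_3 r^(2/3). The field size, spectral band and fitting range below are illustrative assumptions, not the paper's settings (which use fields of Gaussian eddies rather than random-phase Fourier modes).

    # Numerical sketch of the mixed power-law fit; parameters are assumed.
    import numpy as np

    rng = np.random.default_rng(3)
    N = 2 ** 14
    k = np.fft.rfftfreq(N, d=1.0 / N)             # integer wavenumbers
    amp = np.zeros_like(k)
    band = (k >= 4) & (k <= 400)                  # assumed inertial band
    amp[band] = k[band] ** (-5.0 / 6.0)           # E(k) ~ k^(-5/3) => |v_k| ~ k^(-5/6)
    phases = rng.uniform(0, 2 * np.pi, k.size)
    v = np.fft.irfft(amp * np.exp(1j * phases), n=N)

    rs = np.unique(np.geomspace(1, N // 8, 40).astype(int))
    S2 = np.array([np.mean((np.roll(v, -r) - v) ** 2) for r in rs])

    # least-squares fit of the mixed law: columns 1, r^2, r^(2/3)
    A = np.column_stack([np.ones_like(rs, float), rs ** 2.0, rs ** (2.0 / 3.0)])
    a1, a2, a3 = np.linalg.lstsq(A, S2, rcond=None)[0]
    print(f"a1={a1:.3g}  a2 (enstrophy-like)={a2:.3g}  a3 (energy-like)={a3:.3g}")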

Relevance:

30.00%

Publisher:

Abstract:

The uncertainty associated with a rainfall-runoff and non-point source loading (NPS) model can be attributed to both the parameterization and the model structure. An interesting implication of the areal nature of NPS models is the direct relationship between model structure (i.e., sub-watershed size) and sample size for the parameterization of spatial data. The approach of this research is to find the structural limitations in scale for the use of the conceptual NPS model, then examine the scales at which suitable stochastic depictions of key parameter sets can be generated. The overlapping regions are optimal (and possibly the only suitable regions) for conducting meaningful stochastic analysis with a given NPS model. Previous work has sought to find optimal scales for deterministic analysis (where, in fact, calibration can be adjusted to compensate for sub-optimal scale selection); however, the analysis of stochastic suitability and of the uncertainty associated with both the conceptual model and the parameter set, as presented here, is novel, as is the strategy of delineating a watershed based on the uncertainty distribution. The results of this paper demonstrate a narrow range of acceptable model structure for stochastic analysis in the chosen NPS model. In the case examined, the uncertainties associated with parameterization and parameter sensitivity are shown to be outweighed in significance by those resulting from structural and conceptual decisions.

Relevance:

30.00%

Publisher:

Abstract:

The Dependency Structure Matrix (DSM) has proved to be a useful tool for system structure elicitation and analysis. However, as with any modelling approach, the insights gained from analysis are limited by the quality and correctness of input information. This paper explores how the quality of data in a DSM can be enhanced by elicitation methods which include comparison of information acquired from different perspectives and levels of abstraction. The approach is based on comparison of dependencies according to their structural importance. It is illustrated through two case studies: creation of a DSM showing the spatial connections between elements in a product, and a DSM capturing information flows in an organisation. We conclude that considering structural criteria can lead to improved data quality in DSM models, although further research is required to fully explore the benefits and limitations of our proposed approach.
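
One way to make "comparison of dependencies according to their structural importance" concrete is to score each DSM link by how much indirect connectivity it carries. The sketch below uses the loss of reachable element pairs on link removal as an assumed stand-in for the paper's actual criterion; high-impact links would be the ones worth re-eliciting from a second perspective or level of abstraction.

    # DSM link-importance sketch; the scoring rule is an assumption.
    import numpy as np

    def reachable_pairs(dsm):
        """Count ordered pairs (i, j) with a directed path i -> j."""
        n = dsm.shape[0]
        reach = dsm.astype(bool).copy()
        for k in range(n):                          # Floyd-Warshall closure
            reach |= reach[:, [k]] & reach[[k], :]
        return int(reach.sum())

    # toy 5-element DSM: dsm[i, j] = 1 means element i depends on element j
    dsm = np.array([[0, 1, 0, 0, 0],
                    [0, 0, 1, 0, 0],
                    [0, 0, 0, 1, 1],
                    [0, 0, 0, 0, 0],
                    [1, 0, 0, 0, 0]])

    base = reachable_pairs(dsm)
    scores = {}
    for i, j in zip(*np.nonzero(dsm)):
        trial = dsm.copy()
        trial[i, j] = 0
        scores[(int(i), int(j))] = base - reachable_pairs(trial)

    for link, impact in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"dependency {link}: closes {impact} indirect relations")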