878 results for Topologies on an arbitrary set
Abstract:
Rapid scan electron paramagnetic resonance (EPR) was developed in the Eaton laboratory at the University of Denver. Applications of rapid scan to wider spectra, such as immobilized nitroxides, spin-labeled proteins, and irradiated tooth and fingernail samples, were demonstrated in this dissertation. The scan width was increased from 55 G to 160 G. The signal-to-noise (S/N) improvement that rapid scan EPR provides for slowly tumbling spin-labeled protein samples will be highly advantageous for biophysical studies. With the substantial improvement in S/N from rapid scan, dose estimation for irradiated tooth enamel became more reliable than with traditional continuous wave (CW) EPR. An alternate approach to rapid scan, called field-stepped direct detection EPR, was developed to reconstruct wider EPR signals. A Mn2+-containing crystal, whose spectrum is more than 6000 G wide, was measured by field-stepped direct detection EPR. Since field-stepped direct detection extends the advantages of rapid scan to much wider scan ranges, this methodology has great potential to replace traditional CW EPR. With recent advances in digital electronics, a digital rapid scan spectrometer was built around an arbitrary waveform generator (AWG), which can excite spins and detect EPR signals with a fully digital system. A near-baseband detection method was used to acquire the in-phase and quadrature signals in one physical channel; the signal was then analyzed digitally to generate ideally orthogonal quadrature signals. A multiharmonic algorithm was developed that employs harmonics of the modulation frequencies acquired in the spectrometer's transient mode. It was applied to signals with complicated lineshapes and can simplify the selection of modulation amplitude. A digital saturation recovery system based on an AWG was built at X-band (9.6 GHz). To demonstrate the performance of the system, the spin-lattice relaxation time of a fused quartz rod was measured at room temperature with fully digital excitation and detection.
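The dissertation abstract does not give implementation details for the near-baseband detection step; the following is a minimal sketch, under our own assumptions (a single real-valued channel digitized at sample rate f_s with the EPR signal centered at a low intermediate frequency f_if), of how ideally orthogonal quadrature components can be generated digitally.

    import numpy as np

    def near_baseband_demodulate(signal, f_if, f_s):
        """Derive in-phase and quadrature components from one physical
        channel by mixing with numerically generated reference oscillators;
        both references share the same time base, so they are ideally
        orthogonal."""
        t = np.arange(len(signal)) / f_s
        i_raw = signal * np.cos(2 * np.pi * f_if * t)
        q_raw = -signal * np.sin(2 * np.pi * f_if * t)
        # Moving-average low-pass filter to reject the 2*f_if mixing product.
        n_avg = max(1, int(round(f_s / f_if)))
        kernel = np.ones(n_avg) / n_avg
        return (np.convolve(i_raw, kernel, mode="same")
                + 1j * np.convolve(q_raw, kernel, mode="same"))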
Abstract:
For each quantum superalgebra U_q[osp(m|n)] with m > 2, an infinite family of Casimir invariants is constructed. This is achieved by using an explicit form for the Lax operator. The eigenvalue of each Casimir invariant on an arbitrary irreducible highest weight module is also calculated. (c) 2005 American Institute of Physics.
Abstract:
We present an implementation of the domain-theoretic Picard method for solving initial value problems (IVPs) introduced by Edalat and Pattinson [1]. Compared to Edalat and Pattinson's implementation, our algorithm uses a more efficient arithmetic based on an arbitrary precision floating-point library. Despite the additional overestimations due to floating-point rounding, we obtain a similar bound on the convergence rate of the produced approximations. Moreover, our convergence analysis is detailed enough to allow a static optimisation in the growth of the precision used in successive Picard iterations. Such optimisation greatly improves the efficiency of the solving process. Although a similar optimisation could be performed dynamically without our analysis, a static one gives us a significant advantage: we are able to predict the time it will take the solver to obtain an approximation of a certain (arbitrarily high) quality.
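Edalat and Pattinson's domain-theoretic method produces validated enclosures; the sketch below, a simplification under our own assumptions, only illustrates the underlying Picard iteration in arbitrary-precision arithmetic with the mpmath library, approximating each integral by a left Riemann sum rather than by the interval constructions of the paper.

    from mpmath import mp, mpf

    def picard(f, t0, y0, t_end, iterations=8, steps=200, dps=50):
        """Approximate y' = f(t, y), y(t0) = y0 on [t0, t_end] by Picard
        iteration y_{n+1}(t) = y0 + integral of f(s, y_n(s)) ds, evaluated
        with a left Riemann sum at `dps` decimal digits of precision."""
        mp.dps = dps
        h = (mpf(t_end) - mpf(t0)) / steps
        ts = [mpf(t0) + k * h for k in range(steps + 1)]
        ys = [mpf(y0)] * (steps + 1)              # y_0 is the constant y0
        for _ in range(iterations):
            new_ys = [mpf(y0)]
            acc = mpf(0)
            for k in range(steps):
                acc += f(ts[k], ys[k]) * h
                new_ys.append(mpf(y0) + acc)
            ys = new_ys
        return ts, ys

    # Example: y' = y with y(0) = 1; ys[-1] approximates e at t = 1.
    ts, ys = picard(lambda t, y: y, 0, 1, 1)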
Abstract:
The accurate in silico identification of T-cell epitopes is a critical step in the development of peptide-based vaccines, reagents, and diagnostics. It has a direct impact on the success of subsequent experimental work. Epitopes arise as a consequence of complex proteolytic processing within the cell. Prior to being recognized by T cells, an epitope is presented on the cell surface as a complex with a major histocompatibility complex (MHC) protein. A prerequisite for T-cell recognition is therefore that an epitope is also a good MHC binder. Thus, T-cell epitope prediction overlaps strongly with the prediction of MHC binding. In the present study, we compare discriminant analysis and multiple linear regression as algorithmic engines for the definition of quantitative matrices for binding affinity prediction. We apply these methods to peptides which bind the well-studied human MHC allele HLA-A*0201. A matrix obtained by combining the results of the two methods proved powerfully predictive under cross-validation. The new matrix was also tested on an external set of 160 binders to HLA-A*0201; it was able to recognize 135 (84%) of them.
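The abstract does not spell out the matrix format; the sketch below assumes the common additive form, in which a peptide's predicted binding score is the sum of position-specific contributions of its residues (the coefficients shown are illustrative placeholders, not fitted values).

    def predict_affinity(peptide, matrix, default=0.0):
        """Additive quantitative-matrix prediction: sum the position-specific
        contribution of each residue; residues absent from a position's
        column contribute `default`."""
        return sum(matrix[i].get(aa, default) for i, aa in enumerate(peptide))

    # Toy 3-position matrix purely for illustration; a real HLA-A*0201 matrix
    # would have 9 positions with coefficients fitted by discriminant analysis
    # or multiple linear regression.
    toy_matrix = [
        {"L": 0.8, "A": 0.1},
        {"M": 0.5, "V": 0.3},
        {"V": 0.9, "L": 0.7},
    ]
    print(predict_affinity("LMV", toy_matrix))   # 0.8 + 0.5 + 0.9 = 2.2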
Abstract:
Visual recognition is a fundamental research topic in computer vision. This dissertation explores datasets, features, learning, and models used for visual recognition. In order to train visual models and evaluate different recognition algorithms, this dissertation develops an approach to collecting object image datasets from web pages using an analysis of the text around each image and of the image's appearance. This method exploits established online knowledge resources (Wikipedia pages for text; Flickr and Caltech data sets for images), which provide rich text and object appearance information. This dissertation describes results on two datasets. The first is Berg's collection of 10 animal categories; on this dataset, we significantly outperform previous approaches. On an additional set of 5 categories, experimental results show the effectiveness of the method.

Images are represented as features for visual recognition. This dissertation introduces a text-based image feature and demonstrates that it consistently improves performance on hard object classification problems. The feature is built using an auxiliary dataset of images annotated with tags, downloaded from the Internet. Image tags are noisy; the method obtains the text features of an unannotated image from the tags of its k-nearest neighbors in this auxiliary collection. A visual classifier presented with an object viewed under novel circumstances (say, a new viewing direction) must rely on its visual examples, whereas this text feature may change little, because the auxiliary dataset likely contains a similar picture. While the tags associated with images are noisy, they are more stable when appearance changes. The performance of this feature is tested using the PASCAL VOC 2006 and 2007 datasets. The feature performs well: it consistently improves the performance of visual object classifiers, and it is particularly effective when the training dataset is small.

As more and more training data are collected, computational cost becomes a bottleneck, especially when training sophisticated classifiers such as kernelized SVMs. This dissertation proposes a fast training algorithm called the Stochastic Intersection Kernel Machine (SIKMA). This training method will be useful for many vision problems, as it can produce a kernel classifier that is more accurate than a linear classifier and can be trained on tens of thousands of examples in two minutes. It processes training examples one by one in a sequence, so memory cost is no longer the bottleneck for large-scale datasets. This dissertation applies the approach to train classifiers for Flickr groups with large numbers of training examples per group. The resulting Flickr group prediction scores can be used to measure the similarity between two images. Experimental results on the Corel dataset and a PASCAL VOC dataset show that the learned Flickr features perform better for image matching, retrieval, and classification than conventional visual features.

Visual models are usually trained to best separate positive and negative training examples. However, when recognizing a large number of object categories, there may not be enough training examples for most objects, due to the intrinsic long-tailed distribution of objects in the real world. This dissertation proposes an approach that uses comparative object similarity.
The key insight is that, given a set of object categories which are similar and a set of categories which are dissimilar, a good object model should respond more strongly to examples from similar categories than to examples from dissimilar categories. This dissertation develops a regularized kernel machine algorithm that uses this category-dependent similarity regularization. Experiments on hundreds of categories show that the method yields significant improvements for categories with few or even no positive examples.
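The abstract names the Stochastic Intersection Kernel Machine but gives no algorithmic detail; the sketch below only illustrates the histogram intersection kernel on which such a classifier is built, together with the standard kernelized decision function (our simplification; the stochastic training procedure itself is not reproduced here).

    import numpy as np

    def intersection_kernel(x, y):
        """Histogram intersection kernel: sum of element-wise minima of two
        non-negative feature histograms (e.g. bag-of-visual-words counts)."""
        return np.minimum(x, y).sum()

    def decision_value(query, support_vectors, alphas, bias):
        """Kernelized decision function f(q) = sum_i alpha_i * K(sv_i, q) + b,
        as used by an SVM with the intersection kernel."""
        return sum(a * intersection_kernel(sv, query)
                   for a, sv in zip(alphas, support_vectors)) + bias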
Abstract:
An efficient one-step digit-set-restricted modified signed-digit (MSD) adder based on symbolic substitution is presented. In this technique, carry propagation is avoided by introducing reference digits to restrict the intermediate carry and sum digits to {-1, 0} and {0, 1}, respectively. The proposed technique requires significantly fewer minterms and reduces system complexity compared to the reported one-step MSD addition techniques. An incoherent correlator based on an optoelectronic shared content-addressable memory processor is suggested to perform the addition operation. In this technique, only one set of minterms needs to be stored, independent of the operand length. (C) 2002 Society of Photo-Optical Instrumentation Engineers.
Abstract:
It is believed that every fuzzy generalization should be formulated in such a way that it contains the ordinary set-theoretic notion as a special case. Therefore the definition of fuzzy topology along the lines of C. L. Chang [9], with an arbitrary complete and distributive lattice as the membership set, is adopted. Almost all the results proved and presented in this thesis can, in a sense, be called generalizations of corresponding results in ordinary set theory and set topology. However, the tools and the methods have, in many cases, to be new. Here an attempt is made to solve the problem of complementation in the lattice of fuzzy topologies on a set. It is proved that, in general, the lattice of fuzzy topologies is not complemented. Complements of some fuzzy topologies are determined. It is observed that (L,X) is not uniquely complemented. However, a complete analysis of the problem of complementation in the lattice of fuzzy topologies is yet to be carried out.
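For reference, a Chang-style fuzzy topology with lattice-valued membership (our notation, not necessarily the thesis's) can be stated in LaTeX as:

    % A Chang L-fuzzy topology on a set X, with L a complete distributive lattice
    \[
      \tau \subseteq L^{X} \text{ is a fuzzy topology if }\ 
      \underline{0},\,\underline{1} \in \tau,\quad
      f \wedge g \in \tau \ \ \forall\, f,g \in \tau,\quad
      \bigvee_{i \in I} f_i \in \tau \ \text{ for every } \{f_i\}_{i \in I} \subseteq \tau .
    \]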
Abstract:
To date, most applications of algebraic analysis and attacks on stream ciphers are on those based on linear feedback shift registers (LFSRs). In this paper, we extend algebraic analysis to non-LFSR based stream ciphers. Specifically, we perform an algebraic analysis on the RC4 family of stream ciphers, an example of stream ciphers based on dynamic tables, and investigate its implications for potential algebraic attacks on the cipher. This is, to our knowledge, the first paper that evaluates the security of RC4 against algebraic attacks by providing a full set of equations that describe the complex word manipulations in the system. For an arbitrary word size, we derive algebraic representations for the three main operations used in RC4, namely state extraction, word addition and state permutation. Equations relating the internal states and keystream of RC4 are then obtained from each component of the cipher based on these algebraic representations, and analysed in terms of their contributions to the security of RC4 against algebraic attacks. Interestingly, it is shown that each of the three main operations contained in the components has its own unique algebraic properties, and when their respective equations are combined, the resulting system becomes infeasible to solve. This results in a high level of security being achieved by RC4 against algebraic attacks. On the other hand, the removal of an operation from the cipher could compromise this security. Experiments on reduced versions of RC4 have been performed, which confirm the validity of our algebraic analysis and the conclusion that the full RC4 stream cipher seems to be immune to algebraic attacks at present.
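For reference, the three operations the paper models appear directly in the textbook RC4 algorithm; the sketch below is the standard key scheduling and keystream generation, parameterized by word size n as in the paper's general setting (n = 8 is the usual byte-oriented cipher).

    def rc4_keystream(key, n=8, length=16):
        """Generate `length` RC4 keystream words for word size n, i.e. a
        state table of 2**n words."""
        size = 2 ** n
        s = list(range(size))
        j = 0
        for i in range(size):                           # key scheduling
            j = (j + s[i] + key[i % len(key)]) % size   # word addition
            s[i], s[j] = s[j], s[i]                     # state permutation
        i = j = 0
        out = []
        for _ in range(length):                         # keystream generation
            i = (i + 1) % size
            j = (j + s[i]) % size                       # word addition
            s[i], s[j] = s[j], s[i]                     # state permutation
            out.append(s[(s[i] + s[j]) % size])         # state extraction
        return out

    print(rc4_keystream([1, 2, 3, 4]))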
Abstract:
An efficient numerical method to compute nonlinear solutions for two-dimensional steady free-surface flow over an arbitrary channel bottom topography is presented. The approach is based on a boundary integral equation technique similar to that of Vanden-Broeck (1996, J. Fluid Mech., 330, 339-347). The typical approach for this problem is to prescribe the shape of the channel bottom topography, with the free surface being provided as part of the solution. Here we take an inverse approach and prescribe the shape of the free surface a priori while solving for the corresponding bottom topography. We show how this inverse approach is particularly useful when studying topographies that give rise to wave-free solutions, allowing us to easily classify eleven basic flow types. Finally, the inverse approach is also adapted to calculate a distribution of pressure on the free surface, given the free-surface shape itself.
Abstract:
Fluidised bed-heat pump drying technology offers distinctive advantages over the existing drying technology employed in the Australian food industry. However, as is the case with many other innovations that have had clear relative advantages, the rates of adoption and diffusion of this technology have been very slow. "Why does this happen?" is the theme of this research study, which was undertaken with the objective of analysing a range of issues related to the market acceptance of technological innovations. The research methodology included the development of an integrated conceptual model based on an extensive review of the literature in the areas of innovation diffusion, technology transfer and industrial marketing. Three major determinants associated with the market acceptance of innovations were identified: the characteristics of the innovation, the adopter's information-processing capability, and the influence of the innovation supplier on the adoption process. This was followed by a study involving more than 30 small and medium enterprises identified as potential adopters of fluidised bed-heat pump drying technology in the Australian food industry. The findings revealed that judgment was the key evaluation strategy employed by potential adopters in this industry sector. Further, it was found that innovations were evaluated against predetermined criteria covering a range of aspects, with emphasis on a selected set of attributes of the innovation. The implications of these findings for the commercialisation of fluidised bed-heat pump drying technology were established, and a series of recommendations was made to the innovation supplier (DPI/FT) to enable it to develop an effective commercialisation strategy.
Abstract:
In many instances we find it advantageous to display a quantum optical density matrix as a generalized statistical ensemble of coherent wave fields. The weight functions involved in these constructions turn out to belong to a family of distributions, not always smooth functions. In this paper we investigate this question anew and show how it is related to the problem of expanding an arbitrary state in terms of an overcomplete subfamily of the overcomplete set of coherent states. This provides a relatively transparent derivation of the optical equivalence theorem. An interesting by-product is the discovery of a new class of discrete diagonal representations.
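For context, the diagonal coherent-state representation at issue is the Glauber-Sudarshan form (our notation, not the paper's), written in LaTeX as

    \[
      \rho = \int P(\alpha)\, |\alpha\rangle\langle\alpha| \, d^{2}\alpha ,
    \]

with the optical equivalence theorem stating that normally ordered operator averages reduce to classical-looking integrals,

    \[
      \langle {:}\,g(a^{\dagger}, a)\,{:} \rangle = \int P(\alpha)\, g(\alpha^{*}, \alpha)\, d^{2}\alpha ,
    \]

where the weight P(\alpha) need not be a smooth function, which is the family of distributions the abstract refers to.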
Abstract:
Using a solid-state electrochemical cell incorporating yttria-doped thoria (YDT) as the electrolyte and a mixture of (Mn + MnO) as the reference electrode, the standard Gibbs free energy of formation of beta-Ta2O5 has been determined as a function of temperature in the range (1000 to 1300) K. The solid-state electrochemical cell used can be represented as (-)Pt, Ta + Ta2O5 // (Y2O3)ThO2 // Mn + MnO, Pt(+). Combining the reversible e.m.f. of the cell with recent data on the free energy of formation of MnO, the standard Gibbs free energy of formation of Ta2O5 from Ta metal and diatomic oxygen gas (O2) in the temperature range (1000 to 1300) K is obtained: Δ_f G° (± 0.35) / (kJ·mol⁻¹) = −2004.376 + 0.40445 (T/K). Because of the significant solid solubility of oxygen in tantalum, a small correction for the activity of Ta in the metal phase in equilibrium with Ta2O5 is applied. An analysis of the results obtained in this study and other free energy data reported in the literature by the "third law" method suggests the need for refining the data for Ta2O5 reported in thermodynamic compilations. A revised value for the standard entropy of Ta2O5, based on more recent low-temperature heat capacity measurements, is used in the analysis. An improved set of thermodynamic properties of ditantalum pentoxide (Ta2O5) is presented in the temperature range (298.15 to 2200) K. (C) 2008 Elsevier Ltd. All rights reserved.
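As a quick check of how the fitted relation is applied (the sample temperature below is our choice, within the stated 1000 K to 1300 K range):

    def delta_f_g_kj_per_mol(temperature_k):
        """Evaluate the reported fit Delta_f G / (kJ/mol) = -2004.376 + 0.40445 (T/K)."""
        return -2004.376 + 0.40445 * temperature_k

    print(delta_f_g_kj_per_mol(1200.0))   # approximately -1519.0 kJ/mol at 1200 K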
Abstract:
This paper is concerned with a study of some of the properties of locally product and almost locally product structures on a differentiable manifold X_n of class C^k. Every locally product space has certain almost locally product structures which transform the local tangent space to X_n at an arbitrary point P in a set fashion: this is studied in Theorem (2.2). Theorem (2.3) considers the nature of the transformations that exist between two co-ordinate systems at a point whenever an almost locally product structure has the same local representation in each of these co-ordinate systems. A necessary and sufficient condition for X_n to be a locally product manifold is obtained in terms of the pseudo-group of co-ordinate transformations on X_n and the subpseudo-groups [cf. Theorem (2.1)]. Section 3 is entirely devoted to the study of integrable almost locally product structures.
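For orientation, an almost product structure in the usual sense (our statement, not the paper's) is a tensor field F of type (1,1) on X_n satisfying, in LaTeX notation,

    \[ F^{2} = I, \qquad F \neq \pm I , \]

and such a structure is integrable, i.e. X_n is locally product, exactly when its Nijenhuis tensor vanishes:

    \[ N_F(X, Y) = [FX, FY] - F[FX, Y] - F[X, FY] + F^{2}[X, Y] = 0 . \]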
Abstract:
Bose-Chaudhuri-Hocquenghem (BCH) codes with symbols from an arbitrary finite integer ring are derived in terms of their generator polynomials. The derivation is based on the factorization of x^n - 1 over the unit ring of an appropriate extension of the finite integer ring. The construction is thus shown to be similar to that for BCH codes over finite fields.
Abstract:
This paper studies the problem of designing a logical topology over a wavelength-routed all-optical network (AON) physical topology. The physical topology consists of the nodes and fiber links in the network. On an AON physical topology, we can set up lightpaths between pairs of nodes, where a lightpath represents a direct optical connection without any intermediate electronics. The set of lightpaths along with the nodes constitutes the logical topology. For a given network physical topology and traffic pattern (relative traffic distribution among the source-destination pairs), our objective is to design the logical topology and the routing algorithm on that topology so as to minimize the network congestion while constraining the average delay seen by a source-destination pair and the amount of processing required at the nodes (degree of the logical topology). We will see that ignoring the delay constraints can result in fairly convoluted logical topologies with very long delays. On the other hand, in all our examples, imposing them results in a minimal increase in congestion. While the number of wavelengths required to embed the resulting logical topology on the physical all-optical topology is also a constraint in general, we find that in many cases of interest this number can be quite small. We formulate the combined logical topology design and routing problem described above (ignoring the constraint on the number of available wavelengths) as a mixed integer linear programming problem, which we then solve for a number of cases of a six-node network. Since this programming problem is computationally intractable for larger networks, we split it into two subproblems: logical topology design, which is computationally hard and will probably require heuristic algorithms, and routing, which can be solved by a linear program. We then compare the performance of several heuristic topology design algorithms (that do take wavelength assignment constraints into account) against that of randomly generated topologies, as well as lower bounds derived in the paper.
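The paper's full mixed integer linear program is not reproduced here; as a minimal illustration of the congestion objective, the sketch below (our construction, using networkx) routes a traffic matrix over a given logical topology with minimum-hop routing and reports the congestion, i.e. the maximum load carried by any lightpath.

    import networkx as nx

    def congestion(logical_edges, traffic):
        """Route each (source, destination, rate) demand over a shortest path
        of the directed logical topology and return the maximum total load
        on any single lightpath (the network congestion)."""
        g = nx.DiGraph(logical_edges)
        load = {edge: 0.0 for edge in g.edges}
        for src, dst, rate in traffic:
            path = nx.shortest_path(g, src, dst)      # minimum-hop routing
            for u, v in zip(path, path[1:]):
                load[(u, v)] += rate
        return max(load.values())

    # Toy 3-node ring logical topology and traffic pattern (illustrative only).
    edges = [(0, 1), (1, 2), (2, 0)]
    demands = [(0, 1, 1.0), (0, 2, 0.5), (1, 0, 0.25)]
    print(congestion(edges, demands))   # 1.5 on lightpath (0, 1)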