41 results for Notion of enterprise
Abstract:
The problem of spurious patterns in neural associative memory models is discussed. Some suggestions from the literature to solve this problem are reviewed and their inadequacies are pointed out. A solution based on the notion of neural self-interaction with a suitably chosen magnitude is presented for the Hebb learning rule. For an optimal learning rule based on linear programming, asymmetric dilution of synaptic connections is presented as another solution to the problem of spurious patterns. With varying percentages of asymmetric dilution, it is demonstrated numerically that this optimal learning rule leads to near-total suppression of spurious patterns. For practical use of neural associative memory networks, a combination of the two solutions with the optimal learning rule is recommended as the best option.
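For readers who want a concrete picture of the first idea, a minimal Python sketch (an illustration, not code from the paper): a Hopfield-style Hebbian weight matrix whose diagonal is set to a chosen self-interaction magnitude d instead of zero; the pattern format, network size, and recall loop are illustrative assumptions.

    # Illustrative sketch (not from the paper): Hebb rule with self-interaction.
    import numpy as np

    def hebb_weights(patterns, d=0.0):
        # patterns: (P, N) array of +/-1 memories; d: self-interaction magnitude.
        P, N = patterns.shape
        W = patterns.T @ patterns / N      # standard Hebb rule
        np.fill_diagonal(W, d)             # d = 0 removes self-interaction;
        return W                           # a suitably chosen d > 0 is the idea sketched here

    def recall(W, probe, steps=20):
        # Synchronous recall by repeated thresholding of the local fields.
        s = probe.copy()
        for _ in range(steps):
            s = np.sign(W @ s)
            s[s == 0] = 1
        return s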
Abstract:
This paper looks at the complexity of four different incremental problems: (1) interval partitioning of a flow graph, (2) breadth-first search (BFS) of a directed graph, (3) lexicographic depth-first search (DFS) of a directed graph, and (4) constructing the postorder listing of the nodes of a binary tree. The last problem arises from the need to incrementally compute the Sethi-Ullman (SU) ordering [1] of the subtrees of a tree after it has undergone changes of a given type. These problems are among those that claimed our attention while we were designing algorithmic techniques for incremental code generation. BFS and DFS certainly have numerous other applications, but as far as our work is concerned, incremental code generation is the common thread linking these problems. The complexity of these problems is studied from two different perspectives. The theory of incremental relative lower bounds (IRLBs) is given in [2]; we use this theory to derive the IRLBs of the first three problems. We then use the notion of a bounded incremental algorithm [4] to prove the unboundedness of the fourth problem with respect to the locally persistent model of computation. The lower bound result for lexicographic DFS is possibly the most interesting. In [5], the author considers lexicographic DFS to be a problem whose incremental version may require recomputation of the entire solution from scratch. In that sense, our IRLB result provides further evidence for this possibility, with the proviso that the incremental DFS algorithms considered be ones that do not require too much preprocessing.
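As background for problem (4), a minimal non-incremental Python sketch (an illustration, not from the paper) of the postorder listing and the Sethi-Ullman register-need labels of a binary expression tree:

    # Illustrative sketch (not from the paper): static postorder and SU labels.
    class Node:
        def __init__(self, label, left=None, right=None):
            self.label, self.left, self.right = label, left, right

    def postorder(root):
        # Postorder listing of the nodes (problem 4, non-incremental version).
        if root is None:
            return []
        return postorder(root.left) + postorder(root.right) + [root.label]

    def su_label(node):
        # Sethi-Ullman label: registers needed to evaluate the subtree
        # (assumes a full binary tree of operators over leaf operands).
        if node.left is None and node.right is None:
            return 1
        l, r = su_label(node.left), su_label(node.right)
        return max(l, r) if l != r else l + 1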
Abstract:
We present a method for obtaining conjugate and conjoined shapes and tilings in the context of designing structures using topology optimization. In topology optimization, optimal material distribution is achieved by setting up a selection field in the design domain that determines the presence or absence of material at each point. We generalize this approach in this paper by presenting a paradigm in which the material left out by the selection field is also utilized. We obtain conjugate shapes when the selected region and the left-out region are solutions to two problems, each with a different functionality. On the other hand, if the left-out region is connected to the selected region in some predetermined fashion to achieve a single functionality, we get conjoined shapes. The utilization of the left-out material gives the notion of material economy in both cases. Thus, material wastage is avoided in the practical realization of these designs using many manufacturing techniques, in contrast to the wastage of left-out material during the manufacture of traditional topology-optimized designs. We illustrate such shapes in the case of stiff structures and compliant mechanisms. When such designs are suitably made on domains of the unit cell of a tiling, new tilings are formed which are functionally useful. Such shapes are useful not only for their functionality and economy of material and manufacturing, but also for their aesthetic value.
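A toy illustration of the left-out-material idea (an assumption about the setup, not the paper's formulation): a 0/1 selection field rho marks the selected region, and its complement 1 - rho is the left-out region that conjugate or conjoined designs also put to use.

    # Illustrative sketch (assumed 0/1 density field, not the paper's formulation).
    import numpy as np

    rho = (np.random.rand(64, 64) > 0.5).astype(float)  # toy 0/1 selection field
    selected = rho                                       # material serving functionality A
    left_out = 1.0 - rho                                 # complement, reused for functionality B
    assert np.allclose(selected + left_out, 1.0)         # together they account for all material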
Abstract:
Many knowledge-based systems (KBS) transform situation information into an appropriate decision using a built-in knowledge base. As the knowledge in real-world situations is often uncertain, the degree of truth of a proposition provides a measure of the uncertainty in the underlying knowledge. This uncertainty can be evaluated by collecting `evidence' about the truth or falsehood of the proposition from multiple sources. In this paper we propose a simple framework for representing this uncertainty using the notion of an evidence space.
Abstract:
We report the shape transformation of ZnO nanorods/nanotubes at temperatures (~700 °C) much lower than the bulk melting temperature (1975 °C). With increasing annealing temperature, not only does shape transformation take place, but the luminescence characteristics of ZnO are also modified. It is proposed that the observed shape transformation is due to surface diffusion, contradicting the previously reported notion of melting and its link to luminescence. Luminescence in the green-to-red region is observed when excited with a blue laser, indicating the conversion of blue to white light.
Abstract:
We associate a sheaf model to a class of Hilbert modules satisfying a natural finiteness condition. It is obtained as the dual to a linear system of Hermitian vector spaces (in the sense of Grothendieck). A refined notion of curvature is derived from this construction, leading to a new unitary invariant for the Hilbert module. A division problem with bounds, originating in Douady's privilege, is related to this framework. A series of concrete computations illustrates the abstract concepts of the paper.
Abstract:
Emerging evidence suggests that cancers arise in stem/progenitor cells. Yet, the requirements for transformation of these primitive cells remain poorly understood. In this study, we have exploited the `mammosphere' system, which selects for primitive mammary stem/progenitor cells, to explore their potential and requirements for transformation. Introduction of Simian Virus 40 Early Region and hTERT into mammosphere-derived cells led to the generation of NBLE, an immortalized mammary epithelial cell line. The NBLEs largely comprised bi-potent progenitors with long-term self-renewal and multi-lineage differentiation potential. Clonal and karyotype analyses revealed the existence of a heterogeneous population within NBLEs with varied proliferation, differentiation and sphere-forming potential. Significantly, injection of NBLEs into immunocompromised mice resulted in the generation of invasive ductal adenocarcinomas. Further, these cells harbored a CD44(+)/CD24(-) sub-population that alone had sphere- and tumor-initiating potential and resembled the breast cancer stem cell gene signature. Interestingly, prolonged in vitro culturing led to their further enrichment. The NBLE cells also showed increased expression of stemness and epithelial-to-mesenchymal transition markers, deregulated self-renewal pathways, an activated DNA-damage response, and cancer-associated chromosomal aberrations - all of which are likely to have contributed to their tumorigenic transformation. Thus, unlike previous in vitro transformation studies that used adherent, more differentiated human mammary epithelial cells, our study demonstrates that the mammosphere-derived, less-differentiated cells undergo tumorigenic conversion with only two genetic elements, without requiring oncogenic Ras. Moreover, the striking phenotypic and molecular resemblance of the NBLE-generated tumors to naturally arising breast adenocarcinomas supports the notion of a primitive breast cell as the origin for this subtype of breast cancer. Finally, the NBLEs represent a heterogeneous population of cells with striking plasticity, capable of differentiation, self-renewal and tumorigenicity, thus offering a unique model system to study the molecular mechanisms involved in these processes. Oncogene (2012) 31, 1896-1909; doi:10.1038/onc.2011.378; published online 29 August 2011
Abstract:
We report unusual jamming of driven ordered vortex flow in 2H-NbS2. Reinitiating movement in these jammed vortices with a higher driving force, and halting it once again thereafter with a reduction in drive, leads to critical behavior centered around the depinning threshold via divergences in the lifetimes of transient states. This validates the predictions of a recent simulation study [Reichhardt and Olson Reichhardt, Phys. Rev. Lett. 103, 168301 (2009)], which also pointed out a correspondence between plastic depinning in vortex matter and the notion of random organization proposed by Corte et al. [Nat. Phys. 4, 420 (2008)] in the context of sheared colloids undergoing diffusive motion.
Abstract:
We propose a Riesz transform approach to the demodulation of digital holograms. The Riesz transform is a higher-dimensional extension of the Hilbert transform and is steerable to a desired orientation. Accurate demodulation of the hologram requires a reliable methodology by which quadrature-phase functions (or simply, quadratures) can be constructed. The Riesz transform, by itself, does not yield quadratures. However, one can start with the Riesz transform and construct the so-called vortex operator by employing the notion of quasi-eigenfunctions, and this approach results in accurate quadratures. The key advantage of using the vortex operator is that it effectively handles nonplanar fringes (interference patterns) and has the ability to compensate for the local orientation. Therefore, this method results in aberration-free holographic imaging even when the wavefronts are not planar. We calibrate the method by estimating the orientation from a reference hologram, measured with an empty field of view. Demodulation results on synthesized planar as well as nonplanar fringe patterns show that the accuracy of demodulation is high. We also perform validation on real experimental measurements of Caenorhabditis elegans acquired with a digital holographic microscope. (c) 2012 Optical Society of America
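For illustration, a minimal Python sketch of the standard frequency-domain constructions involved (assumed background, not the paper's exact pipeline): the two Riesz components of a fringe image and Larkin-style spiral-phase (vortex) filtering.

    # Illustrative sketch (standard constructions, not the paper's exact method).
    import numpy as np

    def riesz_transform(f):
        # Two Riesz components of a 2-D real image f, computed in the Fourier domain.
        F = np.fft.fft2(f)
        wy, wx = np.meshgrid(np.fft.fftfreq(f.shape[0]),
                             np.fft.fftfreq(f.shape[1]), indexing='ij')
        mag = np.sqrt(wx**2 + wy**2)
        mag[0, 0] = 1.0                          # avoid division by zero at DC
        Rx = np.real(np.fft.ifft2(-1j * wx / mag * F))
        Ry = np.real(np.fft.ifft2(-1j * wy / mag * F))
        return Rx, Ry

    def vortex_filter(f):
        # Spiral-phase (vortex) filtering: multiply the spectrum by exp(1j * phi).
        F = np.fft.fft2(f)
        wy, wx = np.meshgrid(np.fft.fftfreq(f.shape[0]),
                             np.fft.fftfreq(f.shape[1]), indexing='ij')
        phi = np.arctan2(wy, wx)
        return np.fft.ifft2(np.exp(1j * phi) * F)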
Abstract:
Points-to analysis is a key compiler analysis. Several memory-related optimizations use points-to information to improve their effectiveness. Points-to analysis is performed by building a constraint graph of pointer variables and dynamically updating it to propagate more and more points-to information across its subset edges. So far, the structure of the constraint graph has been exploited only trivially for efficient propagation of information, e.g., by identifying cyclic components or by propagating information in topological order. We perform a careful study of its structure and propose a new inclusion-based, flow-insensitive, context-sensitive points-to analysis algorithm based on the notion of dominant pointers. We also propose a new kind of pointer equivalence based on dominant pointers, which provides significantly more opportunities for reducing the number of pointers tracked during the analysis. Based on this hitherto unexplored form of pointer equivalence, we develop a new context-sensitive, flow-insensitive points-to analysis algorithm that uses incremental dominator updates to efficiently compute points-to information. Using a large suite of programs consisting of SPEC 2000 benchmarks and five large open-source programs, we show that our points-to analysis is 88% faster than BDD-based Lazy Cycle Detection and 2x faster than Deep Propagation. We argue that our approach of detecting dominator-based pointer equivalence is key to improving points-to analysis efficiency.
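As background (an assumption about the baseline, not the paper's dominant-pointer algorithm), a minimal worklist propagation of points-to sets over the subset edges of an inclusion-based constraint graph:

    # Illustrative sketch: baseline inclusion-based propagation, not the paper's algorithm.
    from collections import defaultdict, deque

    def propagate(address_of, subset_edges):
        # address_of: {p: set_of_objects}; subset_edges: {src: {dst}} meaning pts(dst) >= pts(src).
        pts = defaultdict(set)
        for p, objs in address_of.items():
            pts[p] |= objs
        work = deque(address_of)
        while work:
            src = work.popleft()
            for dst in subset_edges.get(src, ()):
                before = len(pts[dst])
                pts[dst] |= pts[src]
                if len(pts[dst]) != before:    # new facts flowed into dst, re-process it
                    work.append(dst)
        return dict(pts)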
Abstract:
Time series classification deals with the problem of classifying data that is multivariate in nature. This means that one or more of the attributes is in the form of a sequence. The notion of similarity or distance used for time series data is significant and affects the accuracy, time, and space complexity of the classification algorithm. There exist numerous similarity measures for time series data, but each of them has its own disadvantages. Instead of relying upon a single similarity measure, our aim is to find a near-optimal solution to the classification problem by combining different similarity measures. In this work, we use genetic algorithms to combine the similarity measures so as to get the best performance. The weighting given to the different similarity measures evolves over a number of generations so as to arrive at the best combination. We test our approach on a number of benchmark time series datasets and present promising results.
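A minimal sketch of the kind of combination being evolved (the measures, weights, and classifier here are illustrative assumptions, not the paper's exact choices): a weighted sum of per-measure distances used inside a 1-nearest-neighbour classifier; a genetic algorithm would search over the weight vector w.

    # Illustrative sketch (assumed measures and classifier, not the paper's exact scheme).
    import numpy as np

    def euclidean(x, y):
        return np.linalg.norm(x - y)

    def manhattan(x, y):
        return np.abs(x - y).sum()

    MEASURES = [euclidean, manhattan]          # stand-ins for the measures being combined

    def combined_distance(x, y, w):
        # w: one non-negative weight per similarity/distance measure.
        return sum(wi * m(x, y) for wi, m in zip(w, MEASURES))

    def nn_classify(query, train_X, train_y, w):
        # 1-nearest-neighbour classification under the combined distance.
        d = [combined_distance(query, x, w) for x in train_X]
        return train_y[int(np.argmin(d))]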
Abstract:
This work derives inner and outer bounds on the generalized degrees of freedom (GDOF) of the K-user symmetric MIMO Gaussian interference channel. For the inner bound, an achievable GDOF is derived by employing a combination of treating interference as noise, zero-forcing at the receivers, interference alignment (IA), and extending the Han-Kobayashi (HK) scheme to K users, depending on the number of antennas and the INR/SNR level. An outer bound on the GDOF is derived using a combination of the notion of cooperation and providing side information to the receivers. Several interesting conclusions are drawn from the bounds. For example, in terms of the achievable GDOF in the weak interference regime, when the number of transmit antennas (M) is equal to the number of receive antennas (N), treating interference as noise performs the same as the HK scheme and is GDOF optimal. For K > N/M + 1, a combination of the HK and IA schemes performs the best among the schemes considered. However, for N/M < K ≤ N/M + 1, the HK scheme is found to be GDOF optimal.
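For readers unfamiliar with the metric, the standard definition of the symmetric GDOF is given here as background (an assumption about the usual convention, not quoted from the paper): the cross links are scaled as INR = SNR^alpha and the symmetric rate is normalized by log SNR.

    % Background definition (standard convention), not quoted from the paper.
    \mathrm{INR} = \mathrm{SNR}^{\alpha}, \qquad
    d_{\mathrm{sym}}(\alpha) \;=\; \lim_{\mathrm{SNR}\to\infty}
    \frac{R_{\mathrm{sym}}(\mathrm{SNR},\alpha)}{\log \mathrm{SNR}}

Here R_sym denotes the largest symmetric rate achievable per user; the inner and outer bounds in the paper bracket d_sym(alpha) as a function of alpha, K, M, and N.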
Abstract:
The notion of the 1-D analytic signal is well understood and has found many applications. At the heart of the analytic signal concept is the Hilbert transform. The problem in extending the concept of the analytic signal to higher dimensions is that there is no unique multidimensional definition of the Hilbert transform. Also, the notion of analyticity is not so well understood in higher dimensions. Of the several 2-D extensions of the Hilbert transform, the spiral-phase quadrature transform, or the Riesz transform, seems to be the natural extension and has attracted a lot of attention, mainly due to its isotropic properties. From the Riesz transform, Larkin et al. constructed a vortex operator, which approximates the quadratures based on asymptotic stationary-phase analysis. In this paper, we show an alternative proof of the quadrature approximation property by invoking the quasi-eigenfunction property of linear, shift-invariant systems. We show that the vortex operator comes up as a natural consequence of applying this property. We also characterize the quadrature approximation error in terms of its energy as well as the peak spatial-domain error. Such results are available for 1-D signals, but their counterparts for 2-D signals have not been provided. We also provide simulation results to supplement the analytical calculations.
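For reference, the standard definitions assumed in this discussion (background, not taken verbatim from the paper): the 1-D analytic signal built from the Hilbert transform, and the frequency response of the 2-D Riesz transform together with the spiral-phase (vortex) multiplier.

    % Background definitions (standard), not quoted from the paper.
    f_a(t) = f(t) + j\,(\mathcal{H}f)(t), \qquad
    \widehat{\mathcal{H}f}(\omega) = -j\,\operatorname{sgn}(\omega)\,\hat{f}(\omega)

    \widehat{\mathcal{R}f}(\boldsymbol{\omega}) =
    -j\,\frac{\boldsymbol{\omega}}{\|\boldsymbol{\omega}\|}\,\hat{f}(\boldsymbol{\omega}), \qquad
    \frac{\omega_x + j\,\omega_y}{\|\boldsymbol{\omega}\|} = e^{\,j\phi(\boldsymbol{\omega})}
    \quad \text{(vortex multiplier)}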
Abstract:
Medical image segmentation finds application in computer-aided diagnosis, computer-guided surgery, measuring tissue volumes, and locating tumors and pathologies. One approach to segmentation is to use active contours or snakes. Active contours start from an initialization (often manually specified) and are guided to the object boundary by image-dependent forces. Snakes may also be guided by gradient vector fields associated with an image. The first main result in this direction is that of Xu and Prince, who proposed the notion of gradient vector flow (GVF), which is computed iteratively. We propose a new formalism to compute the vector flow based on the notion of bilateral filtering of the gradient field associated with the edge map; we refer to it as the bilateral vector flow (BVF). The range kernel definition that we employ is different from the one employed in the standard Gaussian bilateral filter. The advantage of the BVF formalism is that smooth gradient vector flow fields with enhanced edge information can be computed noniteratively. The quality of image segmentation turned out to be on par with that obtained using the GVF and, in some cases, better.
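A minimal sketch of the general idea, under stated assumptions: a standard Gaussian bilateral filter applied componentwise to the edge-map gradient, with the range kernel acting on the gradient magnitude. The paper's BVF uses a different range kernel, so this only illustrates noniterative, edge-preserving smoothing of the gradient field.

    # Illustrative sketch: Gaussian bilateral smoothing of a gradient field (not the paper's BVF kernel).
    import numpy as np

    def bilateral_vector_flow(gx, gy, radius=5, sigma_s=3.0, sigma_r=0.1):
        H, W = gx.shape
        fx, fy = np.zeros_like(gx), np.zeros_like(gy)
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
        mag = np.hypot(gx, gy)                   # range kernel acts on the gradient magnitude
        for i in range(H):
            for j in range(W):
                i0, i1 = max(i - radius, 0), min(i + radius + 1, H)
                j0, j1 = max(j - radius, 0), min(j + radius + 1, W)
                win = spatial[i0 - i + radius:i1 - i + radius,
                              j0 - j + radius:j1 - j + radius]
                rng = np.exp(-(mag[i0:i1, j0:j1] - mag[i, j])**2 / (2 * sigma_r**2))
                w = win * rng
                w /= w.sum()
                fx[i, j] = (w * gx[i0:i1, j0:j1]).sum()
                fy[i, j] = (w * gy[i0:i1, j0:j1]).sum()
        return fx, fy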
Abstract:
In this paper, we analyze the coexistence of a primary and a secondary (cognitive) network when both networks use the IEEE 802.11-based distributed coordination function for medium access control. Specifically, we consider the problem of channel capture by a secondary network that uses spectrum sensing to determine the availability of the channel, and its impact on the primary throughput. We integrate the notion of transmission slots in Bianchi's Markov model with the physical time slots to derive the transmission probability of the secondary network as a function of its scan duration. This is used to obtain analytical expressions for the throughput achievable by the primary and secondary networks. Our analysis considers both saturated and unsaturated networks. By performing a numerical search, the secondary network parameters are selected to maximize its throughput for a given level of protection of the primary network throughput. The theoretical expressions are validated using extensive simulations carried out in Network Simulator 2. Our results provide critical insights into the performance and robustness of different schemes for medium access by the secondary network. In particular, we find that channel capture by the secondary network does not significantly impact the primary throughput, and that simply increasing the secondary contention window size is only marginally inferior to silent-period-based methods in terms of throughput performance.
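As background for the analytical model (the standard Bianchi DCF fixed point, given here as an assumption about the baseline; the paper extends it with scan-duration-dependent secondary behavior), a small solver for the per-slot transmission probability tau:

    # Illustrative sketch: standard Bianchi fixed point, not the paper's extended model.
    def bianchi_tau(n, W=32, m=5):
        # Solve tau = T(p) with p = 1 - (1 - tau)^(n - 1) by bisection, where T is the
        # standard DCF expression for minimum contention window W and m backoff stages.
        def T(p):
            if abs(p - 0.5) < 1e-9:
                p = 0.5 + 1e-9                 # guard the removable singularity at p = 1/2
            return 2.0 * (1.0 - 2.0 * p) / ((1.0 - 2.0 * p) * (W + 1)
                                            + p * W * (1.0 - (2.0 * p) ** m))
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            p = 1.0 - (1.0 - mid) ** (n - 1)   # conditional collision probability
            if mid - T(p) < 0.0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)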