268 results for 3D point clouds
Abstract:
The unsteady laminar incompressible boundary-layer flow near a three-dimensional asymmetric stagnation point has been studied under the assumptions that the free-stream velocity, wall temperature, and surface mass transfer vary arbitrarily with time. The partial differential equations governing the flow have been solved numerically using an implicit finite-difference scheme. It is found that, in contrast with the symmetric flow, the maximum heat transfer occurs away from the stagnation point due to the decrease in boundary-layer thickness. The effect of the variation of the wall temperature with time on heat transfer is strong. The skin friction and heat transfer due to the asymmetric flow alone are less affected by mass transfer than those of the symmetric flow.
Abstract:
A microbeam testing geometry is designed to study the variation in fracture toughness across a compositionally graded NiAl coating on a superalloy substrate. A bi-material analytical model of fracture is used to evaluate toughness by deconvoluting load-displacement data generated in a three-point bending test. It is shown that the surface layers of a diffusion bond coat can be much more brittle than the interior despite the fact that elastic modulus and hardness do not display significant variations. Such a gradient in toughness allows stable crack propagation in a test that would normally lead to unstable fracture in a homogeneous, brittle material. As the crack approaches the interface, plasticity due to the presence of Ni3Al leads to gross bending and crack bifurcation.
Abstract:
The steady MHD mixed convection flow of a viscoelastic fluid in the vicinity of a two-dimensional stagnation point with a magnetic field has been investigated under the assumption that the fluid obeys the upper-convected Maxwell (UCM) model. Boundary-layer theory is used to simplify the equations of motion, induced magnetic field, and energy, which results in three coupled non-linear ordinary differential equations that are well-posed. These equations have been solved using a finite-difference method. The results indicate a reduction in the surface velocity gradient, surface heat transfer, and displacement thickness with an increase in the elasticity number. These trends are opposite to those reported in the literature for a second-grade fluid. The surface velocity gradient and heat transfer are enhanced by the magnetic and buoyancy parameters. The surface heat transfer increases with the Prandtl number, but the surface velocity gradient decreases.
Abstract:
We set up Wigner distributions for N-state quantum systems following a Dirac-inspired approach. In contrast to much of the earlier work on this subject, which requires a 2N x 2N phase space (particularly when N is even), our approach is uniformly based on an N x N phase-space grid and thereby avoids having to invoke a `quadrupled' phase space and the attendant redundancy. Both the N odd and N even cases are analysed in detail, and striking differences are found between the two. While the N odd case permits full implementation of the marginal property, the even case does so only in a restricted sense. As a consequence, in the even case one is led to several equally good definitions of the Wigner distributions, whereas in the odd case the choice turns out to be unique.
Abstract:
We study diagonal estimates for the Bergman kernels of certain model domains in C^2 near boundary points that are of infinite type. To do so, we need a structural condition on the defining functions of interest that facilitates optimal upper and lower bounds. The condition is mild: unlike earlier studies of this sort, it allows estimates for non-convex pseudoconvex domains as well. This condition quantifies, in some sense, how flat a domain is at an infinite-type boundary point. In this scheme of quantification, the model domains considered below range, roughly speaking, from being "mildly of infinite type" to "very flat" at the infinite-type points.
Abstract:
A half-duplex constrained non-orthogonal cooperative multiple access (NCMA) protocol suitable for transmission of information from N users to a single destination in a wireless fading channel is proposed. Transmission in this protocol comprises a broadcast phase and a cooperation phase. In the broadcast phase, each user takes turns broadcasting its data to all other users and the destination, orthogonally in time. In the cooperation phase, each user transmits a linear function of what it received from all other users as well as its own data. In contrast to the orthogonal extension of cooperative relay protocols to cooperative multiple access channels, wherein at any point in time only one user acts as a source while all other users behave as relays and do not transmit their own data, the NCMA protocol relaxes the orthogonality built into those protocols and hence allows a more spectrally efficient use of resources. Code design criteria for achieving full diversity of N in the NCMA protocol are derived using pairwise error probability (PEP) analysis, and it is shown that full diversity can be achieved with a minimum total duration of 2N - 1 channel uses. Explicit constructions of full-diversity codes are then provided for an arbitrary number of users. Since maximum-likelihood decoding complexity grows exponentially with the number of users, the notion of g-group decodable codes is introduced for this setup, and a set of necessary and sufficient conditions is also obtained.
Abstract:
In this paper, we present a new feature-based approach for mosaicing of camera-captured document images. A novel block-based scheme is employed to ensure that corners can be reliably detected over a wide range of images. A 2-D discrete cosine transform is computed for image blocks defined around each of the detected corners, and a small subset of the coefficients is used as a feature vector. A two-pass feature matching is performed to establish point correspondences, from which the homography relating the input images can be computed. The algorithm is tested on a number of complex document images casually captured with a hand-held camera, yielding convincing results.
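The pipeline described in this abstract (corner detection, DCT coefficients as descriptors, two-pass mutual matching) can be illustrated with a minimal sketch. Everything below is an invented toy, not the authors' implementation: the corner locations are assumed to be given, the coefficient selection is a plain prefix rather than a zig-zag scan, and the block size and coefficient count are arbitrary.

```python
import numpy as np

def dct2(block):
    """Unnormalized 2-D DCT-II of a square block (toy helper)."""
    n = block.shape[0]
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    return basis @ block @ basis.T

def corner_descriptor(image, y, x, size=8, n_coeffs=6):
    """Describe a corner by a few low-order DCT coefficients of the
    surrounding block; the paper's exact coefficient choice is not
    specified in the abstract, so a simple prefix is used here."""
    half = size // 2
    block = image[y - half:y + half, x - half:x + half].astype(float)
    return dct2(block).flatten()[:n_coeffs]

def match_features(desc_a, desc_b):
    """Two-pass matching: a pair is kept only if each descriptor is the
    other's nearest neighbour (mutual-consistency check)."""
    matches = []
    for i, da in enumerate(desc_a):
        j = int(np.argmin([np.linalg.norm(da - db) for db in desc_b]))
        back = int(np.argmin([np.linalg.norm(desc_b[j] - d) for d in desc_a]))
        if back == i:  # pass 2: reverse check
            matches.append((i, j))
    return matches
```

The resulting point correspondences would then feed a robust homography estimator (e.g. least squares with outlier rejection), which the sketch omits.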
Abstract:
We consider a dense, ad hoc wireless network confined to a small region, such that direct communication is possible between any pair of nodes. The physical communication model is that a receiver decodes the signal from a single transmitter while treating all other signals as interference. Data packets are sent between source-destination pairs by multihop relaying. We assume that nodes self-organise into a multihop network such that all hops are of length d meters, where d is a design parameter. There is a contention-based multiaccess scheme, and it is assumed that every node always has data to send, either originated by it or a transit packet (saturation assumption). In this scenario, we seek to maximize a measure of the transport capacity of the network (measured in bit-meters per second) over power controls (in a fading environment) and over the hop distance d, subject to an average power constraint. We first argue that for a dense collection of nodes confined to a small region, single-cell operation is efficient for single-user decoding transceivers. Then, operating the dense ad hoc network as a single cell, we study the optimal hop length and power control that maximize the transport capacity for a given network power constraint. More specifically, for a fading channel and for a fixed transmission time strategy (akin to the IEEE 802.11 TXOP), we find that there exists an intrinsic aggregate bit rate (Theta_opt bits per second, depending on the contention mechanism and the channel fading characteristics) carried by the network when operating at the optimal hop length and power control. The optimal transport capacity is of the form d_opt(P_t) x Theta_opt, with d_opt scaling as P_t^(1/eta), where P_t is the available time-average transmit power and eta is the path-loss exponent. Under certain conditions on the fading distribution, we then provide a simple characterisation of the optimal operating point.
Abstract:
Restriction endonucleases (REases) protect bacteria from invading foreign DNAs and are endowed with exquisite sequence specificity. REases have originated from ancestral proteins and evolved new sequence specificities through genetic recombination, gene duplication, replication slippage, and transpositional events. They are also speculated to have evolved from nonspecific endonucleases, attaining a high degree of sequence specificity through point mutations. We describe here an example of the generation of an exquisitely site-specific REase from a highly promiscuous one by a single point mutation.
Abstract:
In this paper, a relative velocity approach is used to analyze the capturability of a geometric guidance law. Point mass models are assumed for both the missile and the target. The speeds of the missile and target are assumed to remain constant throughout the engagement. Lateral acceleration, obtained from the guidance law, is applied to change the path of the missile. The kinematic equations for engagements in the horizontal plane are derived in the relative velocity space. Some analytical results for the capture region are obtained for non-maneuvering and maneuvering targets. For non-maneuvering targets it is enough for the navigation gain to be a constant to intercept the target, while for maneuvering targets a time varying navigation gain is needed for interception. These results are then verified through numerical simulations.
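The engagement setup in this abstract (point-mass missile and target, constant speeds, lateral acceleration commanded by a navigation gain) can be sketched numerically. The sketch below uses classic proportional navigation as a stand-in for the paper's geometric guidance law, with a constant navigation gain against a non-maneuvering target; all speeds, positions, and thresholds are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def simulate_engagement(nav_gain=4.0, dt=0.005, t_max=30.0):
    """Planar point-mass engagement with constant speeds. Lateral
    acceleration a = N * Vm * lambda_dot changes only the missile's
    heading (turn rate = N * lambda_dot). Illustrative numbers."""
    vm, vt = 300.0, 200.0                       # constant speeds (m/s)
    mis, hm = np.array([0.0, 0.0]), 0.3         # missile position, heading
    tgt, ht = np.array([5000.0, 2000.0]), np.pi # non-maneuvering target
    d = tgt - mis
    los_prev = np.arctan2(d[1], d[0])
    miss = np.inf
    for _ in range(int(t_max / dt)):
        d = tgt - mis
        los = np.arctan2(d[1], d[0])
        lam_dot = (los - los_prev) / dt          # line-of-sight rate
        los_prev = los
        hm += nav_gain * lam_dot * dt            # PN heading update
        mis = mis + vm * dt * np.array([np.cos(hm), np.sin(hm)])
        tgt = tgt + vt * dt * np.array([np.cos(ht), np.sin(ht)])
        miss = min(miss, float(np.linalg.norm(tgt - mis)))
        if miss < 10.0:                          # capture radius (assumed)
            return True, miss
    return False, miss
```

For this geometry a faster missile with a constant gain intercepts the non-maneuvering target, consistent with the abstract's capturability result; a maneuvering target would require making `ht` time-varying and, per the paper, a time-varying gain.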
Abstract:
In this paper, we are concerned with energy-efficient area monitoring using information coverage in wireless sensor networks, where collaboration among multiple sensors enables accurate sensing of a point in a given area-to-monitor even if that point falls outside the physical coverage of every sensor. We refer to any set of sensors that can collectively sense all points in the area-to-monitor as a full-area information cover. We first propose a low-complexity heuristic algorithm to obtain full-area information covers. Using these covers, we then obtain the optimum schedule for activating the sensing activity of the sensors that maximizes the sensing lifetime. Scheduling sensor activity using the optimum schedules obtained with the proposed algorithm is shown to achieve significantly longer sensing lifetimes than those achieved using physical coverage. Relaxing the full-area coverage requirement to partial-area coverage (e.g., treating 95% coverage as adequate instead of 100%) further extends the lifetime.
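The cover-based scheduling idea can be sketched in a few lines. The sketch below is a toy, not the paper's algorithm: the information-coverage test (a collaborative estimation condition in the paper) is abstracted into a caller-supplied predicate, covers are found by brute-force greedy search (exponential in the worst case, fine only for tiny sensor sets), and the schedule simply activates one cover at a time.

```python
from itertools import combinations

def disjoint_covers(sensors, is_cover):
    """Greedily peel off disjoint covers, smallest first. `is_cover`
    stands in for the paper's information-coverage condition."""
    remaining, covers = set(sensors), []
    while True:
        found = None
        for r in range(1, len(remaining) + 1):
            for cand in combinations(sorted(remaining), r):
                if is_cover(set(cand)):
                    found = set(cand)
                    break
            if found:
                break
        if not found:
            return covers
        covers.append(found)
        remaining -= found

def lifetime(covers, battery):
    """Activating one cover per round, total sensing lifetime is the sum
    over covers of the rounds their weakest sensor survives."""
    return sum(min(battery[s] for s in c) for c in covers)
```

With disjoint covers the lifetimes simply add, which is why a larger set of (possibly smaller) covers can outlast a single always-on cover.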
Abstract:
In this paper, we are concerned with algorithms for scheduling the sensing activity of sensor nodes that are deployed to sense/measure point-targets in wireless sensor networks using information coverage. Defining a set of sensors that can collectively sense a target accurately as an information cover, we propose an algorithm to obtain Disjoint Set of Information Covers (DSIC), which achieves a longer network lifetime than the set of covers obtained using the Exhaustive-Greedy-Equalized Heuristic (EGEH) algorithm proposed recently in the literature. We also present a detailed complexity comparison between the DSIC and EGEH algorithms.
Abstract:
The coherent quantum evolution of a one-dimensional many-particle system after slowly sweeping the Hamiltonian through a critical point is studied using a generalized quantum Ising model containing both integrable and nonintegrable regimes. It is known from previous work that universal power laws of the sweep rate appear in such quantities as the mean number of excitations created by the sweep. Several other phenomena are found that are not reflected by such averages: there are two different scaling behaviors of the entanglement entropy and a relaxation that is power law in time rather than exponential. The final state of evolution after the quench is not characterized by any effective temperature, and the Loschmidt echo converges algebraically for long times, with cusplike singularities in the integrable case that are dynamically broadened by nonintegrable perturbations.
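The "universal power laws of the sweep rate" mentioned here can be illustrated with the standard Landau-Zener/Kibble-Zurek estimate for a transverse-field Ising-type chain: each momentum mode k is excited with probability exp(-c k^2 / v) for sweep rate v, giving an excitation density scaling as v^(1/2). The sketch below sets all prefactors to 1 for illustration and is a generic estimate, not the paper's calculation (which also covers quantities this average misses, such as entanglement entropy and the Loschmidt echo).

```python
import numpy as np

def excitation_density(rate, n_modes=20001):
    """Average Landau-Zener excitation probability over modes in [0, pi],
    with P_k = exp(-pi * k^2 / rate) (prefactors set to 1)."""
    k = np.linspace(0.0, np.pi, n_modes)
    p = np.exp(-np.pi * k**2 / rate)
    dk = k[1] - k[0]
    # trapezoidal rule, written out to stay self-contained
    integral = (p[0] / 2 + p[1:-1].sum() + p[-1] / 2) * dk
    return integral / np.pi

def scaling_exponent(v1=1e-3, v2=1e-4):
    """Fitted power-law exponent of excitation density vs sweep rate;
    the Kibble-Zurek prediction for this estimate is 1/2."""
    n1, n2 = excitation_density(v1), excitation_density(v2)
    return np.log(n1 / n2) / np.log(v1 / v2)
```

For slow sweeps the integral is dominated by small k, where it reduces to a Gaussian integral proportional to sqrt(rate), hence the exponent 1/2.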
Abstract:
Average-delay optimal scheduling of messages arriving at the transmitter of a point-to-point channel is considered in this paper. We consider a discrete-time batch-arrival, batch-service queueing model for the communication scheme, with a service time that may be a function of the batch size. The question of delay optimality is addressed within the semi-Markov decision-theoretic framework. Approximations to the average-delay optimal policy are obtained.
Abstract:
With technology scaling, vulnerability to soft errors in random logic is increasing. There is a need for on-line error detection and protection for logic gates even at sea level. The error checker is the key element of an on-line detection mechanism. We compare three different checkers for error detection from the point of view of area, power, and false error detection rate. We find that the double-sampling checker (used in Razor) is the simplest and most area- and power-efficient, but suffers from a very high false detection rate of 1.15 times the actual error rate. We also find that the alternative approaches of triple sampling and the integrate-and-sample (I&S) method can be designed to have zero false detection rates, but at increased area, power, and implementation complexity. The triple-sampling method has about 1.74 times the area and twice the power of the double-sampling method and also needs a complex clock-generation scheme. The I&S method needs about 16% more power with 0.58 times the area of double sampling, but comes with more stringent implementation constraints as it requires detection of small voltage swings.