823 results for Graph-based approach
Abstract:
The problem of designing good Space-Time Block Codes (STBCs) with low maximum-likelihood (ML) decoding complexity has attracted much attention in the literature. All the known low-ML-decoding-complexity techniques take the same approach of exploiting either the multigroup decodable or the fast-decodable (conditionally multigroup decodable) structure of a code. We refer to this well-known technique of decoding STBCs as Conditional ML (CML) decoding. In [1], we introduced a framework to construct ML decoders for STBCs based on the Generalized Distributive Law (GDL) and the factor-graph-based Sum-Product Algorithm, and showed that for two specific families of STBCs, the Toeplitz codes and the Overlapped Alamouti Codes (OACs), the GDL-based ML decoders have strictly less complexity than the CML decoders. In this paper, we introduce a 'traceback' step into the GDL decoding algorithm for STBCs, which enables a roughly fourfold reduction in the complexity of the GDL decoders proposed in [1]. Utilizing this complexity reduction from traceback, we then show that for any STBC (not just the Toeplitz and Overlapped Alamouti codes), the GDL decoding complexity is strictly less than the CML decoding complexity. For instance, for any STBC obtained from Cyclic Division Algebras that is neither multigroup nor conditionally multigroup decodable, the GDL decoder provides an approximately twelvefold reduction in complexity compared to the CML decoder. Similarly, for the Golden code, which is conditionally multigroup decodable, the GDL decoder is only about half as complex as the CML decoder.
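The traceback idea is the same one used in Viterbi-style dynamic programming: store the argmins computed during the forward message pass and then walk them backwards to extract the optimal assignment, instead of running further message passes. A minimal sketch on a toy chain-structured factor graph (not an actual STBC decoder; the cost tables here are random placeholders):

```python
# Min-sum dynamic programming with a traceback step on a toy chain factor
# graph. Storing argmins during the forward pass lets the joint minimizer
# be recovered cheaply, which is the source of the complexity saving.
import numpy as np

def min_sum_with_traceback(unary, pairwise):
    """unary: (n, q) costs; pairwise: (n-1, q, q) costs between neighbors."""
    n, q = unary.shape
    msg = unary[0].copy()                     # running minimum cost up to node i
    back = np.zeros((n - 1, q), dtype=int)    # traceback pointers
    for i in range(1, n):
        total = msg[:, None] + pairwise[i - 1]   # (q, q): prev state x cur state
        back[i - 1] = np.argmin(total, axis=0)   # best predecessor per state
        msg = np.min(total, axis=0) + unary[i]
    # Traceback: recover the jointly optimal assignment from the pointers.
    x = np.empty(n, dtype=int)
    x[-1] = int(np.argmin(msg))
    for i in range(n - 2, -1, -1):
        x[i] = back[i, x[i + 1]]
    return x, float(msg.min())

rng = np.random.default_rng(0)
x_hat, cost = min_sum_with_traceback(rng.random((5, 4)), rng.random((4, 4, 4)))
print(x_hat, cost)
```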
Abstract:
Sustainability has emerged as one of the important planning concepts from its beginnings in economics and ecological thinking, and has been widely applied to assessing urban development. Different methods, techniques and instruments for urban sustainability assessment, which help determine how cities can become more sustainable, have emerged over time. Among these, indicator-based approaches contribute to the building of sustainable, self-regulated systems that integrate development and environmental protection. Hence, they provide a solid foundation for decision-making at all levels and are being used increasingly. The present paper builds on the available literature and argues for benchmarking the indicator-based approach in a given urban area and incorporating various local issues, thus enhancing the long-term sustainability of cities, which can be achieved by introducing sustainability indicators into the urban planning process. (C) 2013 International Energy Initiative. Published by Elsevier Inc. All rights reserved.
Abstract:
Rapid and facile synthesis of ~7 nm and ~100-400 nm nanostructures of anatase titania is achieved by exploiting the chemical nature of solvents through a microwave-based approach. Using these nanostructures as a photoanode in dye-sensitized solar cells, a modest yet appreciable efficiency of 6.5% was achieved under AM 1.5G one-sun illumination (100 mW cm^-2).
Abstract:
Classification of the pharmacologic activity of a chemical compound is an essential step in any drug discovery process. We develop two new atom-centered fragment descriptors (vertex indices): one based solely on topological considerations without discriminating atom or bond types, and another based on topological and electronic features. We also assess their usefulness by devising a method to rank and classify molecules with regard to their antibacterial activity. The classification performance of our method is found to be superior to two previous studies on large heterogeneous data sets for hit-finding and hit-to-lead studies, even though we use far fewer parameters. We find that for hit-finding studies topological features (simple graph) alone provide significant discriminating power, while for the hit-to-lead process a small but consistent improvement can be made by additionally including electronic features (colored graph). Our approach is simple, interpretable, and suitable for the design of molecules, as we do not use any physicochemical properties. The singular use of the vertex index as a descriptor, novel range-based feature extraction, and rigorous statistical validation are the key elements of this study.
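To make the idea of an atom-centered, purely topological vertex index concrete, here is a hypothetical sketch: each atom's descriptor combines its own degree with its neighbors' degrees in the molecular graph, with no atom or bond typing. The weighting is an illustrative assumption, not the paper's formula:

```python
# Hypothetical atom-centered topological (vertex-index) descriptor in the
# spirit of the paper's first descriptor: simple graph only, no atom or
# bond types. The exact combination rule below is illustrative.
from collections import defaultdict

def vertex_indices(bonds, n_atoms):
    adj = defaultdict(list)
    for a, b in bonds:
        adj[a].append(b)
        adj[b].append(a)
    desc = []
    for atom in range(n_atoms):
        deg = len(adj[atom])
        # Sum of neighbor degrees captures the local branching environment.
        nbr = sum(len(adj[v]) for v in adj[atom])
        desc.append(deg + 0.5 * nbr)   # illustrative weighting, an assumption
    return desc

# Toy molecule: a 4-atom chain 0-1-2-3 with a branch 1-4.
print(vertex_indices([(0, 1), (1, 2), (2, 3), (1, 4)], 5))
```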
Abstract:
This article describes a new performance-based approach for evaluating the return period of seismic soil liquefaction based on standard penetration test (SPT) and cone penetration test (CPT) data. Conventional liquefaction evaluation methods consider a single acceleration level and magnitude, and so fail to take into account the uncertainty in earthquake loading. Probabilistic seismic hazard analysis clearly shows that a particular acceleration value is contributed by different magnitudes with varying probability. In the new method presented in this article, the entire range of ground shaking and the entire range of earthquake magnitude are considered, and the liquefaction return period is evaluated from the SPT and CPT data. The article explains the performance-based methodology for liquefaction analysis: starting from probabilistic seismic hazard analysis (PSHA) for the evaluation of seismic hazard, and then applying the performance-based method to evaluate the liquefaction return period. A case study has been carried out for Bangalore, India, based on SPT data and converted CPT values, and the results obtained from the two methods are compared. In an area of 220 km² in Bangalore city, the site class was assessed based on a large number of borehole records and 58 multichannel analysis of surface waves (MASW) surveys. Using the site class and the peak acceleration at rock depth from PSHA, the peak ground acceleration at the ground surface was estimated using a probabilistic approach. The liquefaction analysis was based on 450 borehole records obtained in the study area. The CPT results match well with those obtained from a similar analysis of the SPT data.
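The core performance-based computation is a total-probability sum: the annual rate of liquefaction is the conditional probability of liquefaction at each (acceleration, magnitude) hazard cell weighted by that cell's annual rate from the PSHA deaggregation, and the return period is its reciprocal. A hedged sketch with placeholder hazard rates and a placeholder fragility function (not the paper's SPT/CPT relations):

```python
# Performance-based liquefaction rate: integrate P(liquefaction | a, m)
# over the full hazard deaggregation rather than a single design event.
# All numbers and the fragility function are hypothetical placeholders.
import numpy as np

pga = np.array([0.05, 0.10, 0.20, 0.40])   # g, acceleration bins
mag = np.array([5.5, 6.5, 7.5])            # magnitude bins
d_lambda = np.array([[2e-2, 8e-3, 1e-3],   # annual rate of each (pga, mag)
                     [6e-3, 3e-3, 6e-4],   # cell, assumed PSHA output
                     [1e-3, 8e-4, 2e-4],
                     [1e-4, 1e-4, 5e-5]])

def p_liq(a, m):
    # Placeholder fragility: probability of liquefaction given loading (a, m).
    return 1.0 / (1.0 + np.exp(-(8 * a + 0.8 * (m - 6) - 1.5)))

rate = sum(p_liq(a, m) * d_lambda[i, j]
           for i, a in enumerate(pga) for j, m in enumerate(mag))
print(f"annual rate = {rate:.2e}, return period = {1/rate:.0f} years")
```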
Abstract:
An optimal measurement-selection strategy is proposed for near-infrared diffuse optical tomography, based on the incoherence among the rows (corresponding to measurements) of the sensitivity (or weight) matrix. Since incoherence among measurements can be seen as providing maximally independent information for the estimation of optical properties, it offers a principled way to assess how independent a particular measurement is of its counterparts. The proposed method was compared with the recently established data-resolution-matrix-based approach for the optimal choice of independent measurements and shown, using simulated and experimental gelatin phantom data sets, to be superior, as it provides the same information without requiring an optimal regularization parameter. (C) 2014 Society of Photo-Optical Instrumentation Engineers (SPIE)
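One simple way to operationalize this criterion is a greedy pass over the rows of the sensitivity matrix: repeatedly add the candidate measurement whose worst-case normalized inner product with the already selected rows is smallest. A sketch under that assumption (the greedy rule, the strongest-row seed, and the random matrix are illustrative, not the paper's exact procedure):

```python
# Greedy incoherence-driven measurement selection: pick rows of the
# sensitivity (Jacobian) matrix J that are least coherent with the rows
# already chosen. J is random here for illustration only.
import numpy as np

def select_measurements(J, k):
    Jn = J / np.linalg.norm(J, axis=1, keepdims=True)     # unit-norm rows
    chosen = [int(np.argmax(np.linalg.norm(J, axis=1)))]  # seed: strongest row
    while len(chosen) < k:
        coh = np.abs(Jn @ Jn[chosen].T).max(axis=1)       # worst-case coherence
        coh[chosen] = np.inf                              # exclude picked rows
        chosen.append(int(np.argmin(coh)))
    return chosen

rng = np.random.default_rng(1)
J = rng.standard_normal((64, 200))   # 64 candidate measurements, 200 voxels
print(select_measurements(J, 8))
```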
Abstract:
Large variations in human actions lead to major challenges in computer vision research, and several algorithms have been designed to address them; the algorithms that stand apart are those that solve the challenge while also running faster and more efficiently. In this paper, we propose a human-cognition-inspired projection-based learning method for person-independent human action recognition in the H.264/AVC compressed domain, and demonstrate a PBL-McRBFN-based approach that helps take machine learning algorithms to the next level. We use a gradient-image-based feature extraction process in which the motion vectors and quantization parameters are extracted and studied temporally to form several Groups of Pictures (GoPs). Each GoP is then considered individually for two different benchmark data sets, and the results are classified for person-independent human action recognition. The functional relationship is learned using the Projection Based Learning algorithm of the Meta-cognitive Radial Basis Function Network (PBL-McRBFN), which has a cognitive and a meta-cognitive component. The cognitive component is a radial basis function network, while the Meta-Cognitive Component (MCC) employs self-regulation; the MCC emulates human-like learning to achieve better performance. The proposed approach can handle the sparse information in the compressed video domain and provides higher accuracy than pixel-domain counterparts. The feature extraction process achieved more than 90% accuracy using the PBL-McRBFN, which contributes to the speed of the proposed high-speed action recognition algorithm. We conducted twenty random trials to assess the per-GoP performance, and the results are also compared with other well-known classifiers from the machine learning literature.
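For orientation, the cognitive component is a standard radial basis function network; below is a hedged sketch of its forward pass on a compressed-domain feature vector, with placeholder centers, widths, and weights. The meta-cognitive self-regulation, which decides what, when, and how to learn, is not reproduced here:

```python
# Minimal RBF network forward pass standing in for the cognitive component:
# Gaussian hidden units followed by a linear output layer, one score per
# action class. All parameters are random placeholders.
import numpy as np

def rbf_forward(x, centers, widths, W):
    phi = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * widths ** 2))
    return W @ phi                            # class scores

rng = np.random.default_rng(2)
centers = rng.standard_normal((10, 32))       # 10 hidden neurons, 32-D features
widths = np.full(10, 1.5)
W = rng.standard_normal((6, 10))              # 6 action classes
scores = rbf_forward(rng.standard_normal(32), centers, widths, W)
print(int(np.argmax(scores)))                 # predicted action class
```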
Abstract:
This paper proposes a probabilistic-prediction-based approach for providing Quality of Service (QoS) to delay-sensitive traffic in the Internet of Things (IoT). A joint packet-scheduling and dynamic bandwidth-allocation scheme is proposed to provide service differentiation and preferential treatment to delay-sensitive traffic. The scheduler focuses on reducing the waiting time of high-priority delay-sensitive services in the queue while simultaneously keeping the waiting time of other services within tolerable limits. The scheme uses the difference between the probabilities of the average high-priority queue length in the previous and current cycles to determine the average weight required in the current cycle. This optimizes the bandwidth allocation across all services by avoiding the allocation of excess resources to high-priority services while still guaranteeing their service. The performance of the algorithm is investigated using MPEG-4 traffic traces under different system loads. The results show improved waiting times for high-priority packets while keeping the waiting time and packet loss of other services within tolerable limits. Crown Copyright (C) 2015 Published by Elsevier B.V.
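In spirit, the weight update is a feedback rule: a rise in the estimated probability that the high-priority queue is backlogged pushes more bandwidth toward it in the next cycle, and a fall releases bandwidth back to other services. A heavily hedged sketch; the update rule, gain, and bounds are illustrative assumptions, not the paper's exact scheme:

```python
# Prediction-driven weight update: the change in the (estimated) probability
# of high-priority queue backlog between cycles nudges its bandwidth share,
# clamped so other services keep a guaranteed minimum. Constants are
# illustrative assumptions.
def update_weight(w, p_prev, p_curr, gain=0.5, w_min=0.2, w_max=0.9):
    """w: high-priority bandwidth share; p_*: queue-occupancy probabilities."""
    w_next = w + gain * (p_curr - p_prev)    # growing backlog -> more bandwidth
    return min(max(w_next, w_min), w_max)    # keep other services within limits

w = 0.5
for p_prev, p_curr in [(0.30, 0.45), (0.45, 0.40), (0.40, 0.38)]:
    w = update_weight(w, p_prev, p_curr)
    print(round(w, 3))
```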
Abstract:
The cybernetic modeling framework provides an interesting approach for modeling the regulatory phenomena occurring in microorganisms. In the present work, we adopt a constraints-based approach to analyze the nonlinear behavior of the extended equations of the cybernetic model. We first show that the cybernetic model exhibits linear growth behavior under the constraint of no resource allocation for the induction of the key enzyme. We then quantify the maximum achievable specific growth rate of microorganisms on mixtures of substitutable substrates under various kinds of regulation and show its use in understanding the regulatory strategies of microorganisms. Finally, we show that Saccharomyces cerevisiae exhibits suboptimal dynamic growth with a long diauxic lag phase when growing on a mixture of glucose and galactose, and discuss its potential to achieve optimal growth with a significantly reduced diauxic lag period. The analysis carried out in the present study illustrates the utility of adopting a constraints-based approach to understand the dynamic growth strategies of microorganisms. (C) 2015 Elsevier Ireland Ltd. All rights reserved.
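The linear-growth result has a simple intuition: if no resources are allocated to inducing the key enzyme, the enzyme pool stays roughly constant, so the absolute growth rate dX/dt is capped by that fixed pool and biomass increases linearly rather than exponentially. A toy Euler integration under that assumption (not the cybernetic model's full equations; all rates are illustrative):

```python
# Toy two-state model: biomass X grows at a rate set by the enzyme pool E.
# Without induction E is frozen, so dX/dt = k*E is constant (linear growth);
# with induction E accumulates with biomass and growth accelerates.
def grow(induce, t_end=10.0, dt=0.01, k=1.0, alpha=0.5):
    X, E = 1.0, 1.0
    for _ in range(int(t_end / dt)):
        dX = k * E * dt                              # growth set by enzyme pool
        dE = (alpha * X * dt) if induce else 0.0     # synthesis only if induced
        X, E = X + dX, E + dE
    return X

print(grow(induce=False), grow(induce=True))   # linear vs accelerating growth
```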
Abstract:
Clock synchronization in a wireless sensor network (WSN) is essential, as it provides a consistent and coherent time frame for all the nodes across the network. Typically, clock synchronization is achieved by message passing using a contention-based scheme for medium access, such as carrier-sense multiple access (CSMA): the nodes try to synchronize with each other by sending synchronization request messages. If many nodes try to send messages simultaneously, contention-based schemes cannot efficiently avoid collisions; the resulting message losses, in turn, affect the convergence of the synchronization algorithms. The number of collisions can, however, be reduced with a frame-based approach to message passing such as time-division multiple access (TDMA). In this paper, we propose a design that utilizes a TDMA-based medium access control (MAC) protocol to improve the performance of clock synchronization protocols. The basic idea is to switch to TDMA-based transmissions once the degree of synchronization among the sensor nodes improves during the execution of the clock synchronization algorithm. The design significantly reduces collisions among the synchronization protocol messages. We have simulated the proposed protocol in the Castalia network simulator; the simulation results show that it significantly reduces the time required for synchronization and also improves the accuracy of the synchronization algorithm.
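The switching logic can be caricatured in a few lines: run contention-based rounds (with some collision-induced loss) until the synchronization error drops below a switching margin, then move to collision-free TDMA slots. All numbers and the convergence model below are illustrative assumptions, not the paper's Castalia implementation:

```python
# Toy hybrid CSMA/TDMA synchronization loop: messages are lost with some
# probability under contention, never under TDMA; the protocol switches to
# TDMA once the sync error nears the target. Error model is illustrative.
import random

def synchronize(n_rounds=20, threshold_us=50.0):
    err = 1000.0                      # initial max pairwise clock error (us)
    mode = "CSMA"
    for r in range(n_rounds):
        loss = 0.3 if mode == "CSMA" else 0.0   # collisions only under contention
        if random.random() > loss:              # sync message got through
            err *= 0.6                          # illustrative convergence factor
        if mode == "CSMA" and err < 4 * threshold_us:
            mode = "TDMA"             # sync improved: go collision-free
        print(f"round {r:2d}  mode={mode}  err={err:7.1f} us")
        if err < threshold_us:
            break

random.seed(3)
synchronize()
```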
Abstract:
In this paper, an explicit guidance law for the powered descent phase of a soft lunar landing is presented. The descent trajectory, expressed in polynomial form, is fixed by the boundary conditions imposed by the precise soft-landing mission. Adopting an inverse-model-based approach, the guidance command is computed from the known spacecraft trajectory. The guidance formulation ensures the vertical orientation of the spacecraft during touchdown. A closed-form relation for the final flight time is also proposed: the final time is expressed as a function of the initial position and velocity of the spacecraft (at the start of descent) and of the desired landing site. To ensure a fuel-minimal descent, the proposed explicit method is extended to an optimal guidance formulation. The effectiveness of the proposed guidance laws is demonstrated with simulation results.
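The inverse-model step can be illustrated in one axis: fix a polynomial altitude profile h(t) that meets the boundary conditions, then read the guidance command off its second derivative plus lunar gravity. The cubic profile and the fixed flight time below are illustrative assumptions (the paper derives the flight time in closed form):

```python
# One-axis inverse-model guidance sketch: choose a cubic altitude profile
# h(t) with h(0)=h0, h'(0)=v0, h(tf)=0, h'(tf)=0, then command the thrust
# acceleration h''(t) + g_moon. Numbers are illustrative.
import numpy as np

g_moon = 1.62                      # m/s^2

def descent_guidance(h0, v0, tf):
    # h(t) = a3 t^3 + a2 t^2 + v0 t + h0, with h(tf) = 0 and h'(tf) = 0.
    A = np.array([[tf**3, tf**2], [3 * tf**2, 2 * tf]])
    b = np.array([-h0 - v0 * tf, -v0])
    a3, a2 = np.linalg.solve(A, b)
    accel = lambda t: 6 * a3 * t + 2 * a2          # h''(t)
    return lambda t: accel(t) + g_moon             # thrust command vs gravity

u = descent_guidance(h0=2000.0, v0=-40.0, tf=60.0)
print([round(u(t), 2) for t in (0.0, 30.0, 60.0)])  # m/s^2 along the axis
```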
Abstract:
Constant-volts-per-hertz induction motor drives and vector-controlled induction motor drives utilize pulsewidth modulation (PWM) to control the voltage applied to the motor. The method of PWM influences the pulsations in the torque developed by the motor. A space-vector-based approach to PWM facilitates special switching sequences involving the division of active state time. This paper proposes a space-vector-based hybrid PWM technique, which is a combination of the conventional and special switching sequences. The proposed hybrid PWM technique results in a lower peak-to-peak torque ripple than conventional space vector PWM (CSVPWM) at high speeds of an induction motor drive. Furthermore, the magnitude of the dominant torque harmonic due to the proposed hybrid PWM is significantly lower than that due to CSVPWM at high speeds of the drive. Experimental results from a 3.75-kW sensorless vector-controlled induction motor drive under various load conditions are presented to support the analytical and simulation results.
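For context, every space-vector PWM variant starts from the same dwell-time calculation for the two active states and the null state in each subcycle; the sequences then differ in how those times are ordered and split. A sketch of the standard calculation (values illustrative; the paper's per-operating-point sequence selection is indicated only in the closing comment):

```python
# Standard SVPWM dwell times for one subcycle within a 60-degree sector.
# Conventional and special sequences share this step and differ only in
# how the active-state times are ordered and divided.
import math

def svpwm_dwell_times(v_ref, theta, v_dc, t_s):
    """theta: angle within the sector (rad); t_s: subcycle duration (s)."""
    m = math.sqrt(3) * v_ref / v_dc                  # modulation index
    t1 = t_s * m * math.sin(math.pi / 3 - theta)     # first active state
    t2 = t_s * m * math.sin(theta)                   # second active state
    t0 = t_s - t1 - t2                               # null state(s) 0 and/or 7
    return t1, t2, t0

t1, t2, t0 = svpwm_dwell_times(v_ref=300.0, theta=math.radians(20),
                               v_dc=600.0, t_s=100e-6)
print(f"T1={t1*1e6:.1f} us  T2={t2*1e6:.1f} us  T0={t0*1e6:.1f} us")
# Conventional sequence applies 0,1,2,7 for (t0/2, t1, t2, t0/2); a special
# sequence such as 0,1,2,1 instead applies state 1 twice for t1/2 each.
```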
Abstract:
This paper presents a model-based approach for reconstructing 3D polyhedral building models from aerial images. The proposed approach exploits geometric and photometric properties resulting from the perspective projection of planar structures, with data provided by calibrated aerial images. The novelty of the approach lies in being feature-free and in its use of direct optimization based on raw image brightness: the framework avoids feature extraction and matching, and the 3D polyhedral model is directly estimated by optimizing an objective function that combines an image-based dissimilarity measure and a gradient score over several aerial images. The optimization is carried out by the Differential Evolution algorithm. The proposed approach is intended to provide more accurate 3D reconstruction than feature-based approaches, and fast 3D model rectification and updating can take advantage of it. Several results and performance evaluations on real and synthetic images show the feasibility and robustness of the proposed approach.
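Because the objective is evaluated directly on image brightness, any derivative-free global optimizer fits, and Differential Evolution is a natural choice. A hedged sketch using SciPy's implementation with a stand-in cost; a real cost would project the candidate polyhedron into each calibrated image and score brightness dissimilarity plus a gradient term:

```python
# Direct optimization of (placeholder) building-model parameters with
# Differential Evolution, skipping feature extraction entirely. The cost
# function is a hypothetical stand-in for the image-based objective.
import numpy as np
from scipy.optimize import differential_evolution

def photometric_cost(params):
    ridge_height, roof_tilt = params
    # Placeholder: pretend the images favor a 4 m ridge at 30-degree tilt.
    return (ridge_height - 4.0) ** 2 + (roof_tilt - 30.0) ** 2

result = differential_evolution(photometric_cost,
                                bounds=[(0.0, 10.0), (0.0, 60.0)],
                                seed=4)
print(result.x, result.fun)
```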
Abstract:
The connections between convexity and submodularity are explored for the purposes of minimizing and learning submodular set functions.
First, we develop a novel method for minimizing a particular class of submodular functions, those which can be expressed as a sum of concave functions composed with modular functions. The basic algorithm applies an accelerated first-order method to a smoothed version of the function's convex extension. The smoothing algorithm is particularly novel, as it allows us to treat general concave potentials without needing to construct a piecewise-linear approximation as with graph-based techniques.
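The convex extension in question is the Lovász extension, which evaluates a submodular F at a fractional point by sorting coordinates and charging marginal gains; the smoothing is then applied to this convex function. A small sketch of the (unsmoothed) extension on a concave-of-modular toy function, the class this chapter targets:

```python
# Lovasz extension of a submodular set function F: sort the coordinates of
# w in decreasing order and charge each element its marginal gain. This is
# the convex extension whose smoothed version the method minimizes.
import numpy as np

def lovasz_extension(F, w):
    order = np.argsort(-w)                 # coordinates in decreasing order
    value, S = 0.0, []
    prev = F(frozenset())
    for i in order:
        S.append(int(i))
        curr = F(frozenset(S))
        value += w[i] * (curr - prev)      # marginal gain of adding element i
        prev = curr
    return value

# Toy submodular function: a concave potential of a modular sum
# (sum-of-concave-of-modular is the class the chapter addresses).
weights = np.array([1.0, 2.0, 3.0])
F = lambda S: np.sqrt(sum(weights[i] for i in S))
print(lovasz_extension(F, np.array([0.2, 0.9, 0.5])))
```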
Second, we derive the general conditions under which it is possible to find a minimizer of a submodular function via a convex problem. This provides a framework for developing submodular minimization algorithms. The framework is then used to develop several algorithms that can be run in a distributed fashion. This is particularly useful for applications where the submodular objective function consists of a sum of many terms, each term dependent on a small part of a large data set.
Lastly, we approach the problem of learning set functions from an unorthodox perspective: sparse reconstruction. We demonstrate an explicit connection between the problem of learning set functions from random evaluations and that of recovering sparse signals. Based on the observation that the Fourier transform for set functions satisfies exactly the conditions needed for sparse reconstruction algorithms to work, we examine several function classes for which uniform reconstruction is possible.
Abstract:
One of the key systems of a Wave Energy Converter for the extraction of wave energy is the Power Take-Off (PTO) device, which transforms the mechanical energy of a moving body into electrical energy. This paper describes the model of an innovative PTO based on an array of independently activated double-acting hydraulic cylinders. The model has been developed using a simulation tool that follows a port-based approach to modeling hydraulic systems. The components and subsystems used in the model have been parameterized as real components, with their values obtained experimentally from an existing prototype; in fact, the model accounts for most of the hydraulic losses of each component. The simulations show the flexibility to apply different restraining torques to the input movement depending on the geometrical configuration and on which hydraulic cylinders are on duty, easily modified by a control law. The combination of these two actions gives suitable flexibility to adapt the device to different sea states while optimizing the energy extraction. The model has been validated on a real test bench, showing good correlation between simulations and experimental tests.
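The adjustable restraining torque can be pictured with a simple static force balance: each engaged double-acting cylinder contributes a pressure-times-area force at its lever arm, so the control law tunes the total torque by choosing which cylinders are on duty. A heavily simplified sketch (single-direction force balance, illustrative pressures and geometry, no hydraulic losses, unlike the paper's full port-based model):

```python
# Static restraining torque from the cylinders currently on duty: each
# double-acting cylinder sees high pressure on the cap side and low
# pressure on the rod-side annulus. All values are illustrative.
def restraining_torque(on_duty, p_high, p_low, bore_area, rod_area, lever_arm):
    """Torque opposing the input motion from the engaged cylinders (N m)."""
    force_each = p_high * bore_area - p_low * (bore_area - rod_area)
    return sum(lever_arm[i] * force_each for i in on_duty)

arms = [0.3, 0.45, 0.6]                        # m, illustrative lever arms
tau = restraining_torque(on_duty=[0, 2], p_high=150e5, p_low=2e5,
                         bore_area=7.9e-3, rod_area=1.3e-3, lever_arm=arms)
print(f"{tau/1e3:.1f} kNm")                    # torque with cylinders 0 and 2 engaged
```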