250 results for Prediction techniques
Abstract:
The unending quest for performance improvement, coupled with advancements in integrated circuit technology, has led to the development of new architectural paradigms. The speculative multithreading (SpMT) philosophy relies on aggressive speculative execution for improved performance. Aggressive speculation, however, cuts both ways: it improves performance when successful, but wastes energy (and can hurt performance) through useless computation whenever speculation fails. Dynamic instruction criticality information can be usefully applied to control and guide such aggressive speculative execution. In this paper, we present a model of micro-execution for SpMT architecture that we have developed to determine dynamic instruction criticality. We have also developed two novel techniques utilizing the criticality information, namely delaying non-critical loads and criticality-based thread prediction, for reducing useless computation and energy consumption. Experimental results showing a breakdown of critical instructions and the effectiveness of the proposed techniques in reducing energy consumption are presented in the context of the multiscalar processor, which implements an SpMT architecture. Our experiments show 17.7% and 11.6% reductions in dynamic energy for criticality-based thread prediction and the criticality-based delayed-load scheme respectively, while the improvements in dynamic energy-delay product are 13.9% and 5.5%, respectively. (c) 2012 Published by Elsevier B.V.
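As a hedged illustration of the energy-delay arithmetic behind these figures (not code from the paper), the sketch below relates a fractional dynamic-energy reduction and an execution-time change to the resulting energy-delay product (EDP) improvement; the slowdown value is an assumption chosen to be consistent with the reported 13.9%.

```python
# Minimal sketch (not from the paper): relating dynamic-energy savings to
# energy-delay-product (EDP) improvement. All numbers below are illustrative.

def edp_improvement(energy_reduction, delay_change):
    """Fractional EDP improvement given fractional changes.

    energy_reduction: fraction by which dynamic energy drops (e.g. 0.177).
    delay_change: fraction by which execution time changes (+ = slower).
    """
    new_edp = (1.0 - energy_reduction) * (1.0 + delay_change)
    return 1.0 - new_edp

# Example: a 17.7% energy cut combined with a ~4.6% slowdown yields roughly
# the 13.9% EDP gain reported for criticality-based thread prediction.
print(f"{edp_improvement(0.177, 0.046):.3f}")
```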
Abstract:
Receive antenna selection (AS) has been shown to maintain the diversity benefits of multiple antennas while potentially reducing hardware costs. However, the promised diversity gains of receive AS depend on the assumptions of perfect channel knowledge at the receiver and slowly time-varying fading. By explicitly accounting for practical constraints imposed by next-generation wireless standards, such as training, packetization and antenna switching time, we propose a single receive AS method for time-varying fading channels. The method exploits the low training overhead and accuracy possible from reduced-rank subspace projection techniques based on discrete prolate spheroidal (DPS) sequences. It requires only knowledge of the Doppler bandwidth, not detailed correlation knowledge. Closed-form expressions for the channel prediction and estimation error, as well as the symbol error probability (SEP) of M-ary phase-shift keying (MPSK) for symbol-by-symbol receive AS, are also derived. It is shown that the proposed AS scheme, after accounting for the practical limitations mentioned above, outperforms both the ideal conventional single-input single-output (SISO) system with perfect channel state information (CSI) and no AS at the receiver, and AS with conventional estimation based on complex exponential basis functions.
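The following sketch (an assumption-laden illustration, not the paper's method) shows the core idea of reduced-rank estimation with DPS sequences: noisy channel samples are projected onto the subspace spanned by the first few Slepian sequences, which are computed from the Doppler bandwidth alone. The block length, Doppler value and toy channel are all hypothetical.

```python
# Reduced-rank channel estimation with discrete prolate spheroidal (Slepian)
# sequences: project noisy samples onto a Doppler-band-limited subspace.
import numpy as np
from scipy.signal.windows import dpss

M = 256                          # block length in symbols (illustrative)
nu_max = 0.02                    # normalized Doppler bandwidth (assumed known)
D = int(2 * nu_max * M) + 1      # subspace dimension ~ 2*nu_max*M + 1

# Orthonormal DPS basis concentrated in the Doppler band
U = dpss(M, M * nu_max, Kmax=D).T            # shape (M, D)

rng = np.random.default_rng(0)
t = np.arange(M)
h = np.exp(1j * 2 * np.pi * nu_max * 0.7 * t)          # toy fading channel
y = h + 0.1 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

h_hat = U @ (U.T @ y)                        # rank-D subspace projection
print("NMSE:", np.mean(np.abs(h - h_hat) ** 2) / np.mean(np.abs(h) ** 2))
```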
Abstract:
Practical use of machine learning is gaining strategic importance in enterprises looking for business intelligence. However, most enterprise data is distributed across multiple relational databases with expert-designed schemas. Applying traditional single-table machine learning techniques to such data not only incurs a computational penalty for converting it to a flat form (a mega-join); the human-specified semantic information present in the relations is also lost. In this paper, we present a practical, two-phase hierarchical meta-classification algorithm for relational databases with a semantic divide-and-conquer approach. We propose a recursive prediction aggregation technique over heterogeneous classifiers applied to individual database tables. The proposed algorithm was evaluated on three diverse datasets, namely the TPCH, PKDD and UCI benchmarks, and showed a considerable reduction in classification time without any loss of prediction accuracy. (C) 2012 Elsevier Ltd. All rights reserved.
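A minimal sketch of the two-phase idea, with hypothetical table names and synthetic features (the paper's recursive, schema-aware aggregation is not reproduced): phase one fits a heterogeneous classifier per table, and phase two trains a meta-classifier on the per-table predicted probabilities.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)
# Stand-ins for per-table feature sets joined on a shared key
tables = {
    "orders":   rng.standard_normal((n, 5)) + y[:, None] * 0.5,
    "customer": rng.standard_normal((n, 3)) + y[:, None] * 0.3,
}
base = {"orders": RandomForestClassifier(n_estimators=50, random_state=0),
        "customer": DecisionTreeClassifier(max_depth=5, random_state=0)}

idx_tr, idx_te = train_test_split(np.arange(n), random_state=0)
# Phase 1: fit one classifier per table, collect class probabilities
meta_tr, meta_te = [], []
for name, X in tables.items():
    clf = base[name].fit(X[idx_tr], y[idx_tr])
    meta_tr.append(clf.predict_proba(X[idx_tr])[:, 1])
    meta_te.append(clf.predict_proba(X[idx_te])[:, 1])

# Phase 2: aggregate per-table predictions with a meta-classifier
meta = LogisticRegression().fit(np.column_stack(meta_tr), y[idx_tr])
print("meta accuracy:", meta.score(np.column_stack(meta_te), y[idx_te]))
```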
Abstract:
Resistance to therapy limits the effectiveness of drug treatment in many diseases. Drug resistance can be considered a successful outcome of the bacterial struggle to survive in the hostile environment of a drug-exposed cell. An important mechanism by which bacteria acquire drug resistance is through mutations in the drug target. Drug-resistant strains (multi-drug resistant and extensively drug resistant) of Mycobacterium tuberculosis are being identified at alarming rates, increasing the global burden of tuberculosis. An understanding of the nature of mutations in different drug targets, and of how they achieve resistance, is therefore important. An objective of this study is first to decipher the sequence and structural bases for the observed resistance in known drug-resistant mutants, and then to predict positions in each target that are more prone to acquiring drug-resistant mutations. A curated database containing hundreds of resistance-associated mutations in 38 drug targets of nine major clinical drugs is studied here. Mutations have been classified into those that occur in the binding site itself, those that occur in residues interacting with the binding site, and those that occur in outer zones. Structural models of the wild-type and mutant forms of the target proteins have been analysed to seek explanations for the reduction in drug binding. A stability analysis of the entire array of 19 possible mutations at each residue of each target has been computed using structural models. Conservation indices of individual residues, binding sites and whole proteins are computed based on sequence conservation analysis of the target proteins. The analyses lead to insights about which positions in the polypeptide chain have a higher propensity to acquire drug-resistant mutations. Thus, critical insights can be obtained about the effect of mutations on drug binding, in terms of which amino acid positions, and therefore which interactions, should not be heavily relied upon; this in turn can be translated into guidelines for modifying existing drugs as well as for designing new ones. The methodology can serve as a general framework to study drug-resistant mutants in other micro-organisms as well.
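As a hedged illustration of one ingredient above, the sketch below computes a per-residue conservation index as one minus the normalized Shannon entropy of each alignment column; this is a standard definition and not necessarily the exact index used in the paper. The alignment is a toy stand-in.

```python
import math
from collections import Counter

alignment = [          # toy aligned sequences; real input would be an MSA file
    "MKVLA",
    "MKVIA",
    "MRVLA",
    "MKVLG",
]

def conservation_index(column):
    """1 = fully conserved column, 0 = maximally variable."""
    counts = Counter(column)
    n = len(column)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    max_entropy = math.log2(min(20, n))   # 20-letter amino-acid alphabet
    return 1.0 - entropy / max_entropy

for i, col in enumerate(zip(*alignment)):
    print(f"position {i + 1}: {conservation_index(col):.2f}")
```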
Abstract:
There has been growing interest in understanding energy metabolism in human embryos generated using assisted reproductive techniques (ART), with a view to improving the overall success rate of the method. Using NMR spectroscopy as a noninvasive tool, we studied human embryo metabolism to identify specific biomarkers for assessing the implantation potential of embryos. The study was based on estimating pyruvate, lactate and alanine levels in the growth medium, ISM1, used to culture the embryos. An NMR study involving 127 embryos from 48 couples revealed that embryos transferred on Day 3 (after 72 h of in vitro culture) with successful implantation (pregnancy) exhibited significantly (p < 10⁻⁵) lower pyruvate/alanine ratios compared to those that failed to implant. Lactate levels in the media were similar for all embryos. This implies that, in addition to lactate production, successfully implanted embryos use pyruvate for alanine production and other cellular functions. While pyruvate and alanine have individually been used as biomarkers, the present study highlights the potential of combining them into a single parameter that correlates strongly with implantation potential. Copyright (C) 2012 John Wiley & Sons, Ltd.
Abstract:
Artificial Neural Networks (ANNs) have been found to be a robust tool for modelling many non-linear hydrological processes. The present study evaluates the performance of ANNs in simulating and predicting groundwater levels in the uplands of a tropical coastal riparian wetland. The study compares two network architectures, the Feed Forward Neural Network (FFNN) and the Recurrent Neural Network (RNN), each trained under five algorithms, namely the Levenberg-Marquardt, Resilient Backpropagation, BFGS Quasi-Newton, Scaled Conjugate Gradient, and Fletcher-Reeves Conjugate Gradient algorithms, by simulating the water levels in a well in the study area. The analysis considers two cases: one with four inputs to the networks and the other with eight. The ten network-algorithm combinations in both cases are compared to determine the best-performing combination that can simulate and predict the process satisfactorily. An ad hoc (trial-and-error) method is followed to optimize the network structure in all cases. On the whole, the results show that the ANNs simulated and predicted the water levels in the well with fair accuracy. This is evident from the low values of the Normalized Root Mean Square Error and Relative Root Mean Square Error and the high values of the Nash-Sutcliffe Efficiency Index and Correlation Coefficient (taken as the performance measures to calibrate the networks). On comparing predicted groundwater levels with those at the observation well, the FFNN trained with the Fletcher-Reeves Conjugate Gradient algorithm using four inputs outperformed all other combinations.
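For concreteness, here is a minimal sketch of the four performance measures named above, using standard textbook definitions (the paper's exact normalizations may differ); the water-level series are illustrative.

```python
import numpy as np

def metrics(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = obs - sim
    rmse = np.sqrt(np.mean(err ** 2))
    nrmse = rmse / (obs.max() - obs.min())          # normalized by range
    rrmse = rmse / obs.mean()                       # relative to the mean
    nse = 1 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)
    r = np.corrcoef(obs, sim)[0, 1]
    return {"NRMSE": nrmse, "RRMSE": rrmse, "NSE": nse, "R": r}

observed = [12.1, 11.8, 11.2, 10.9, 11.5, 12.0]     # toy water levels (m)
simulated = [12.0, 11.9, 11.0, 11.1, 11.4, 12.2]
print(metrics(observed, simulated))
```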
Abstract:
This paper considers the problem of identifying the footprints of communication of multiple transmitters in a given geographical area. To do this, a number of sensors are deployed at arbitrary but known locations in the area, and their individual decisions regarding the presence or absence of the transmitters' signal are combined at a fusion center to reconstruct the spatial spectral usage map. One straightforward scheme for constructing this map is to query each sensor in turn and cluster the sensors that detect the transmitters' signal. However, exploiting the fact that a typical transmitter footprint map is a sparse image, two novel compressive sensing based schemes are proposed that require significantly fewer transmissions than the querying scheme. A key feature of the proposed schemes is that the measurement matrix is constructed from a pseudo-random binary phase shift applied to each sensor's decision prior to transmission. The measurement matrix is thus a binary ensemble that satisfies the restricted isometry property. The number of measurements needed for accurate footprint reconstruction is determined using compressive sampling theory. The three schemes are compared through simulations in terms of a performance measure that quantifies the accuracy of the reconstructed spatial spectral usage map. The proposed sparse reconstruction based schemes are found to significantly outperform the round-robin querying scheme.
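A minimal sketch of the measurement model described above, with illustrative sizes: each sensor's binary decision is multiplied by a pseudo-random ±1 phase, giving a Bernoulli measurement matrix, and the sparse footprint is recovered at the fusion center. Orthogonal matching pursuit stands in here for whatever recovery algorithm the paper employs.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
n_sensors, n_meas, n_active = 200, 60, 8

x = np.zeros(n_sensors)                   # 1 where a sensor detects a signal
x[rng.choice(n_sensors, n_active, replace=False)] = 1.0

A = rng.choice([-1.0, 1.0], size=(n_meas, n_sensors))   # binary ensemble
y = A @ x                                 # fused measurements, far fewer
                                          # than one per sensor

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_active,
                                fit_intercept=False).fit(A, y)
recovered = np.flatnonzero(np.round(omp.coef_))
print("true :", np.flatnonzero(x))
print("recov:", recovered)
```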
Abstract:
Western blot analysis is an analytical technique used in molecular biology, biochemistry, immunogenetics and related fields to separate proteins by electrophoresis. The procedure results in images containing nearly rectangular blots. In this paper, we address the problem of blot quantitation using automated image processing techniques. We formulate a special active contour (or snake), called an Oblong, which locks onto rectangular objects. Oblongs depend on five free parameters, which is also the minimum number of parameters required for a unique characterization. Unlike many snake formulations, Oblongs do not require explicit gradient computations, so the optimization is fast. The performance of Oblongs is assessed on synthesized data and on Western blot analysis images.
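A hedged sketch of a five-parameter rectangular snake in the spirit of the Oblong (not the paper's exact energy): centre, half-sizes and orientation are fitted to a dark blot with a gradient-free optimizer, so no explicit image gradients are needed. The energy function and its weight are assumptions for demonstration.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic image with one dark rectangular blot on a light background
img = np.ones((80, 80))
img[30:50, 20:60] = 0.1

yy, xx = np.mgrid[0:80, 0:80]

def energy(p):
    cx, cy, w, h, theta = p                # the five free parameters
    c, s = np.cos(theta), np.sin(theta)
    u = c * (xx - cx) + s * (yy - cy)      # rotate into rectangle frame
    v = -s * (xx - cx) + c * (yy - cy)
    inside = (np.abs(u) <= w) & (np.abs(v) <= h)
    darkness = np.sum(1.0 - img[inside])   # total darkness captured
    # reward darkness, penalize area, so the contour locks onto the blot
    return -(darkness - 0.45 * np.count_nonzero(inside))

res = minimize(energy, x0=[35, 35, 15, 8, 0.0], method="Nelder-Mead")
print("cx, cy, w, h, theta =", np.round(res.x, 2))
```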
Abstract:
Wheel bearings play a crucial role in the mobility of a vehicle by minimizing motive power loss and providing stability in cornering maneuvers. Detailed engineering analysis of a wheel bearing subsystem under dynamic conditions poses enormous challenges, owing to the nonlinearity introduced by multiple frictional contacts between rotating and stationary parts and to the difficulty of predicting the dynamic loads that wheels are subjected to. Commonly used design methodologies are based on equivalent static analysis of ball or roller bearings, in which the rolling elements may even be represented by springs. In the present study, an advanced hybrid approach is suggested for realistic dynamic analysis of wheel bearings, combining lumped-parameter and finite element modeling techniques. A validated lumped-parameter representation serves as an efficient tool for predicting the radial wheel load due to ground reaction, which is then used in a detailed finite element analysis that automatically accounts for contact forces in an explicit formulation.
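As a hedged stand-in for the validated lumped-parameter model (whose details the abstract does not give), the sketch below integrates a textbook quarter-car model over a road bump to predict the radial wheel load that would then drive the finite element stage; all constants are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

ms, mu = 400.0, 40.0          # sprung / unsprung mass (kg)
ks, cs = 2.0e4, 1.5e3         # suspension stiffness (N/m), damping (N s/m)
kt = 2.0e5                    # tire radial stiffness (N/m)

def road(t):                  # 5 cm half-sine bump between t = 0.5 and 1.0 s
    return 0.05 * np.sin(2 * np.pi * (t - 0.5)) if 0.5 <= t <= 1.0 else 0.0

def rhs(t, x):
    zs, vs, zu, vu = x        # sprung/unsprung displacement and velocity
    f_s = ks * (zu - zs) + cs * (vu - vs)     # suspension force
    f_t = kt * (road(t) - zu)                 # tire (radial) force
    return [vs, f_s / ms, vu, (f_t - f_s) / mu]

sol = solve_ivp(rhs, (0, 3), [0, 0, 0, 0], max_step=1e-3)
wheel_load = kt * (np.array([road(t) for t in sol.t]) - sol.y[2])
print("peak dynamic radial wheel load: %.0f N" % np.abs(wheel_load).max())
```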
Abstract:
The goal of optimization in vehicle design is often blurred by the myriad requirements belonging to attributes that may not be closely related. If solutions are sought by separately optimizing attribute performance-related objectives, each starting from a common baseline design configuration as in a traditional design environment, integrating the potentially conflicting solutions into one satisfactory design becomes an arduous task. It may thus be more desirable to carry out a combined multi-disciplinary design optimization (MDO) with vehicle weight as the objective function and cross-functional attribute performance targets as constraints. For the particular case of vehicle body structure design, the initial design is likely to be arrived at by taking into account styling, packaging and market-driven requirements. The difficulty with performing a combined cross-functional optimization is the time required to run CAE algorithms that can provide a single optimal solution across heterogeneous areas such as NVH and crash safety. In the present paper, a practical MDO methodology is suggested that can be applied to the weight optimization of automotive body structures by specifying constraints on frequency and crash performance. Because of the reduced number of cases to be analyzed for crash safety compared with other MDO approaches, the present methodology can generate a single size-optimized solution without resorting to empirical techniques such as response surface-based prediction of crash performance and the associated successive response surface updating for convergence. An example of weight optimization of the spaceframe-based body-in-white (BIW) of an aluminum-intensive vehicle illustrates the steps involved in the optimization process.
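A toy illustration of the constrained weight-minimization setup described above (emphatically not the paper's CAE-driven procedure): mass is minimized over member gauges subject to a lower bound on the first natural frequency and an upper bound on a crash metric. The closed-form stand-ins for both responses are assumptions for demonstration only.

```python
import numpy as np
from scipy.optimize import minimize

rho, L, width = 2.7e3, 1.0, 0.1          # aluminium density, member size (SI)

def mass(t):                              # two members with gauges t[0], t[1]
    return rho * L * width * (t[0] + t[1])

def first_frequency(t):                   # stand-in: bending-dominated mode
    k = 2.0e11 * (t[0] ** 3 + t[1] ** 3)  # stiffness grows as gauge cubed
    return np.sqrt(k / mass(t)) / (2 * np.pi)

def crash_metric(t):                      # stand-in: peak deceleration (g)
    return 55.0 / (t[0] + 0.5 * t[1]) / 100.0

cons = [{"type": "ineq", "fun": lambda t: first_frequency(t) - 40.0},
        {"type": "ineq", "fun": lambda t: 30.0 - crash_metric(t)}]
res = minimize(mass, x0=[0.01, 0.01], bounds=[(2e-3, 3e-2)] * 2,
               constraints=cons, method="SLSQP")
print("gauges (m):", np.round(res.x, 4), " mass (kg): %.2f" % mass(res.x))
```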
Abstract:
Reliable estimates of species density are fundamental to planning conservation strategies for any species; further, it is equally crucial to identify the most appropriate technique for estimating animal density. Nocturnal, small-sized animal species are notoriously difficult to census accurately, and this issue critically affects their conservation status. We carried out a field study in southern India to estimate the density of the slender loris, a small-sized nocturnal primate, using line and strip transects. Actual counts of study individuals yielded a density estimate of 1.61 ha⁻¹; the density estimate from line transects was 1.08 ha⁻¹; and density estimates varied from 1.06 ha⁻¹ to 0.59 ha⁻¹ across different fixed-width strip transects. We conclude that line and strip transects may typically underestimate densities of cryptic, nocturnal primates.
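For reference, the standard strip- and line-transect density estimators underlying such comparisons take the form D = n / (2Lw); the sketch below applies them with illustrative numbers (the paper's field protocol and detection-function fitting are not reproduced).

```python
def strip_density(n_seen, length_m, half_width_m):
    """Animals per hectare from a fixed-width strip transect."""
    area_ha = (2 * half_width_m * length_m) / 10_000.0
    return n_seen / area_ha

def line_density(n_seen, length_m, esw_m):
    """Line-transect estimator using the effective strip width (ESW)
    obtained from a fitted detection function."""
    return n_seen / ((2 * esw_m * length_m) / 10_000.0)

# Illustrative: 13 lorises along 10 km of transect, 5 m effective strip width
print("line-transect density: %.2f per ha" % line_density(13, 10_000, 5))
```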
Abstract:
This paper presents a new controller inspired by the experience-based mechanism by which humans learn voluntary body actions (motor control). The controller is called the Experience Mapping based Prediction Controller (EMPC). EMPC is designed with auto-learning features and does not require a plant model. The core of the controller is formed around the human motor-action prediction-control mechanism, which is based on past experiential learning and has the ability to adapt intelligently to environmental changes. EMPC is applied to high-precision position control of DC motors. Simulation results show that accurate position control is achieved using EMPC for both step and dynamic demands. The performance of EMPC is compared with a conventional PD controller and an MRAC-based position controller under different system conditions. Position control using EMPC has also been implemented practically, and the results are presented.
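As a hedged illustration of the conventional baseline the paper compares against (the EMPC scheme itself is not reproduced here), the sketch below closes a PD position loop around a simple first-order-plus-integrator DC motor model with illustrative constants.

```python
K, tau = 2.0, 0.5          # motor gain (rad/s per V) and time constant (s)
Kp, Kd = 8.0, 1.0          # PD gains
dt, T = 1e-3, 3.0          # integration step and horizon (s)

theta, omega = 0.0, 0.0    # shaft angle (rad) and speed (rad/s)
target = 1.0               # step position demand (rad)
for _ in range(int(T / dt)):
    error = target - theta
    u = Kp * error - Kd * omega                # PD law (derivative on output)
    omega += dt * (-omega + K * u) / tau       # first-order speed dynamics
    theta += dt * omega                        # integrate speed to position

print("final angle: %.4f rad (demand %.1f rad)" % (theta, target))
```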
Abstract:
Experimental and theoretical studies on the degradation of composite-epoxy adhesive joints were carried out on samples having different interfacial and cohesive properties. Oblique-incidence ultrasonic inspection of the bonded joints revealed that degradation in the adhesive can be measured by a significant variation in reflection amplitude as well as by a shift in the minimum of the reflection spectrum. It was observed that severe degradation of the adhesive leads to failure dominated by the interfacial mode. This investigation demonstrates that a correlation exists between bond strength and the frequency shift of the reflection minimum. The experimental data were validated using analytical models. Though both bulk adhesive degradation and interfacial degradation influence the shift in the spectrum minimum, the contribution of the latter was found to be significant. An inversion algorithm was used to determine the interfacial transverse stiffness from the experimental oblique reflection spectrum. The spectrum shift was found to depend on the value of the interfacial transverse stiffness, from which a qualitative assessment can be made of the integrity of the joint.
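A minimal sketch of the measurement side of this approach, using synthetic Gaussian-dip spectra as stand-ins for measured data: the reflection minimum is located for a pristine and a degraded joint, and the frequency shift between them is reported. The direction and size of the shift here are assumptions for illustration.

```python
import numpy as np

f = np.linspace(1.0, 10.0, 901)                      # frequency axis (MHz)

def reflection_spectrum(f_min, depth=0.6, width=0.8):
    """Synthetic oblique-incidence reflection spectrum with one minimum."""
    return 1.0 - depth * np.exp(-((f - f_min) / width) ** 2)

pristine = reflection_spectrum(5.0)                  # healthy bond
degraded = reflection_spectrum(4.2)                  # degraded interface,
                                                     # minimum assumed lower
shift = f[np.argmin(pristine)] - f[np.argmin(degraded)]
print("reflection-minimum shift: %.2f MHz" % shift)
```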