895 results for "Anchoring heuristic"
Abstract:
The interaction between a digital human model (DHM) and its environment typically occurs in two distinct modes: in the first, the DHM maintains contact with the environment through its self-weight, so the associated reaction forces at the interface due to gravity are unidirectional; in the second, the DHM applies both tension and compression on the environment through anchoring. For static balancing in the first mode of interaction, it is sufficient to keep the projection of the centre of mass (COM) inside the convex region induced on a horizontal plane by the weight-supporting segments of the body. In a DHM, static balancing is required while performing specified tasks such as reach, manipulation, and locomotion; otherwise the simulations would not be realistic. This paper establishes the geometric relationships that must be satisfied to maintain static balance while altering the support configuration for a given posture and altering the posture for a given support condition. For a given location of the COM of a system supported by multiple point contacts, the conditions for simultaneous withdrawal of a specified set of contacts are determined in terms of the convex hulls of subsets of the contact points. When the projection of the COM must move beyond the existing support to perform some task, new supports must be enabled to maintain static balance. This support-seeking behavior can also arise when planning to reduce support stresses. The feasibility of such a support depends on the availability of suitable features in the environment. Geometric conditions necessary for the selection of new supports on horizontal, inclined, and vertical surfaces within the workspace of the DHM in such dynamic scenarios are derived. The concepts developed are demonstrated using sit-to-stand posture transition, which manipulates the COM within the convex supporting polygon, and statically stable walking gaits, which seek support within the kinematic capabilities of the DHM. The theory developed helps the DHM realize appropriate behaviors autonomously in diverse scenarios.
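The core balance test in the first mode reduces to a point-in-convex-hull query. A minimal sketch of that check, assuming 3D contact points projected onto the horizontal plane (illustrative only; not the paper's implementation):

```python
# Static balance check: is the COM's ground projection inside the convex
# hull of the support contacts? Illustrative sketch, not the paper's code.
import numpy as np
from scipy.spatial import Delaunay

def is_statically_balanced(com, contacts):
    """com: (3,) centre-of-mass position; contacts: (N, 3) support points.
    Both are projected onto the horizontal (x, y) plane. Assumes the
    contacts are not all collinear."""
    pts = np.asarray(contacts)[:, :2]
    if len(pts) < 3:
        return False  # fewer than 3 contacts span no support polygon
    # Delaunay.find_simplex returns -1 for points outside the convex hull.
    return Delaunay(pts).find_simplex(np.asarray(com)[:2]) >= 0

def can_withdraw(com, contacts, subset_idx):
    """The paper's withdrawal condition, paraphrased: a subset of contacts
    can be released simultaneously iff the COM projection remains inside
    the convex hull of the remaining contacts."""
    remaining = np.delete(np.asarray(contacts), subset_idx, axis=0)
    return is_statically_balanced(com, remaining)
```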
Abstract:
In the underlay mode of cognitive radio, secondary users are allowed to transmit while the primary user is transmitting, but under tight interference constraints that protect the primary. These constraints, however, limit the secondary system's performance. Antenna selection (AS)-based multiple-antenna techniques, which exploit spatial diversity with less hardware, help improve secondary system performance. We develop a novel, optimal transmit AS rule that minimizes the symbol error probability (SEP) of an average-interference-constrained multiple-input single-output secondary system operating in the underlay mode. We show that the optimal rule is a non-linear function of the power gains of the channels from the secondary transmit antenna to the primary receiver and to the secondary receive antenna. We also propose a simpler, tractable variant of the optimal rule that performs as well as the optimal rule. We then analyze its SEP with L transmit antennas and extensively benchmark it against several heuristic selection rules proposed in the literature. We also enhance these rules to provide a fair comparison, and derive new expressions for their SEPs. The results bring out new inter-relationships between the various rules and show that the optimal rule can significantly reduce the SEP.
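For context, one simple family of heuristic selection rules in the underlay AS literature trades the secondary-link gain against the interference caused to the primary; a minimal sketch of such a ratio-style rule (our illustrative variable names; this is not the optimal non-linear rule derived in the paper):

```python
# Ratio-style transmit antenna selection heuristic for underlay cognitive
# radio (illustrative sketch; NOT the paper's optimal rule).
import numpy as np

def select_antenna(g_ss, g_sp):
    """g_ss[i]: channel power gain, secondary TX antenna i -> secondary RX.
    g_sp[i]: channel power gain, secondary TX antenna i -> primary RX.
    Picks the antenna with the best secondary-gain-to-interference ratio."""
    return int(np.argmax(np.asarray(g_ss, float) / np.asarray(g_sp, float)))

# Example with L = 4 antennas (hypothetical gains): selects antenna 1.
print(select_antenna([1.2, 0.8, 2.1, 0.5], [0.4, 0.1, 1.9, 0.3]))
```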
Abstract:
Clustering is one of the most popular methods for data exploration. Clustering partitions a data set into subsets based on some measure, say a distance measure, so that each partition carries its own significant information. A number of algorithms have been explored for this purpose; one such algorithm is Particle Swarm Optimization (PSO), a population-based heuristic search technique derived from swarm intelligence. In this paper, we present an improved version of PSO in which each feature of the data set is weighted according to its significance by adding random weights, which also minimizes any distortions in the data set. The performance of the proposed algorithm is evaluated on benchmark data sets from the Machine Learning Repository. The experimental results show that the proposed method performs significantly better than previously reported approaches.
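A minimal sketch of the feature-weighting idea (an assumed form for illustration; the paper's exact weighting and update rules are not given in the abstract): a particle encodes candidate cluster centroids, and its fitness is the total per-feature-weighted distance from each point to its nearest centroid.

```python
# Feature-weighted clustering fitness for a PSO particle (illustrative
# sketch; assumed form, not the authors' exact scheme).
import numpy as np

rng = np.random.default_rng(0)

def weighted_distance(x, c, w):
    """Euclidean distance with per-feature weights w (w >= 0, sum(w) == 1)."""
    return np.sqrt(np.sum(w * (x - c) ** 2))

def clustering_fitness(data, centroids, w):
    """Total weighted distance of each point to its nearest centroid;
    PSO would minimize this over centroid positions."""
    return sum(min(weighted_distance(x, c, w) for c in centroids)
               for x in data)

# Random per-feature weights, normalized to sum to 1.
n_features = 4
w = rng.random(n_features)
w /= w.sum()
```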
Abstract:
Water-dispersible, photocatalytic Fe3O4@TiO2 core-shell magnetic nanoparticles have been prepared by anchoring cyclodextrin cavities to the TiO2 shell, and their ability to capture and photocatalytically destroy the endocrine-disrupting chemicals bisphenol A and dibutyl phthalate in water has been demonstrated. The functionalized nanoparticles can be magnetically separated from the dispersion after photocatalysis and hence reused. Each component of the cyclodextrin-functionalized Fe3O4@TiO2 core-shell nanoparticle plays a crucial role in its functioning. The tethered cyclodextrins are responsible for the aqueous dispersibility of the nanoparticles, and their hydrophobic cavities for the capture of organic pollutants that may be present in water samples. The amorphous TiO2 shell is the photocatalyst for the degradation and mineralization of the organics, bisphenol A and dibutyl phthalate, under UV illumination, and the magnetism of the 9 nm crystalline Fe3O4 core allows magnetic separation from the dispersion once photocatalytic degradation is complete. An attractive feature of these "capture and destroy" nanomaterials is that they may be completely removed from the dispersion and reused with little or no loss of catalytic activity.
Abstract:
Introduction: For over half a century, the dopamine hypothesis has provided the most widely accepted heuristic model linking pathophysiology and treatment in schizophrenia. Although dopaminergic drugs have been available for six decades, this system continues to represent a key target in schizophrenia drug discovery. The present article reviews the historical scientific rationale for dopaminergic medications and the shift in our thinking since, which is clearly reflected in the investigational drugs detailed. Areas covered: We searched for investigational drugs using the key words 'dopamine,' 'schizophrenia,' and 'Phase II' in American and European clinical trial registers (clinicaltrials.gov; clinicaltrialsregister.eu) and in published articles using the National Library of Medicine's PubMed database, and supplemented the results with a manual search of cross-references and conference abstracts. We provide a brief description of drugs targeting dopamine synthesis, release or metabolism, and receptors (agonists/partial agonists/antagonists). Expert opinion: There are prominent shifts in how we presently conceptualize schizophrenia and its treatment. Current efforts focus less on developing better antipsychotics and more on treatments that can improve other symptom domains, in particular cognitive and negative symptoms. This new era in the pharmacotherapy of schizophrenia moves us away from the older 'magic bullet' approach toward a strategy fostering polypharmacy and a more individualized approach shaped by the individual's specific symptom profile.
Abstract:
Data clustering is a common technique for statistical data analysis used in many fields, including machine learning and data mining. Clustering is the grouping of a data set or, more precisely, the partitioning of a data set into subsets (clusters) so that the data in each subset (ideally) share some common trait according to a defined distance measure. In this paper, we present a genetically improved version of the particle swarm optimization algorithm, a population-based heuristic search technique derived from particle swarm intelligence and the concepts of genetic algorithms (GA). The algorithm combines PSO concepts, such as the velocity and position update rules, with GA concepts, such as selection, crossover, and mutation. The performance of the proposed algorithm is evaluated on benchmark data sets from the Machine Learning Repository and is better than that of the k-means and PSO algorithms.
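A schematic of one iteration of such a hybrid (an assumed structure for illustration; the abstract does not specify the operator order or parameters): a standard PSO velocity/position update followed by GA-style selection, crossover, and mutation over the swarm.

```python
# One iteration of a GA-PSO hybrid (schematic; assumed structure).
import numpy as np

rng = np.random.default_rng(1)

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Standard PSO velocity and position update."""
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel

def ga_step(pop, fitness, mut_scale=0.1):
    """Binary-tournament selection, one-point crossover, and Gaussian
    mutation over a (n, d) population; fitness (a numpy array) is minimized.
    Assumes d >= 2 so a crossover point exists."""
    n, d = pop.shape
    idx = rng.integers(0, n, size=(n, 2))
    winners = np.where((fitness[idx[:, 0]] < fitness[idx[:, 1]])[:, None],
                       pop[idx[:, 0]], pop[idx[:, 1]])
    mates = winners[rng.permutation(n)]
    cut = rng.integers(1, d, size=n)[:, None]
    mask = np.arange(d)[None, :] < cut   # genes before the cut from winners
    children = np.where(mask, winners, mates)
    return children + mut_scale * rng.normal(size=pop.shape)
```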
Abstract:
We consider the problem of characterizing the minimum average delay, or equivalently the minimum average queue length, of message symbols randomly arriving at the transmitter queue of a point-to-point link that dynamically selects an (n, k) block code from a given collection. The system is modeled as a discrete-time queue with an IID batch arrival process and batch service. We obtain a lower bound on the minimum average queue length, the optimal value of a linear program, using only the mean (λ) and variance (σ²) of the batch arrivals. For a finite collection of (n, k) codes, the minimum achievable average queue length is shown to be Θ(1/ε) as ε ↓ 0, where ε is the difference between the maximum code rate and λ. We obtain a sufficient condition for code-rate selection policies to achieve this optimal growth rate. A simple family of policies that each use only one block code, as well as two other heuristic policies, are shown to be weakly optimal in the sense of achieving the 1/ε growth rate. An appropriate selection from the family of single-code policies is also shown to achieve the optimal coefficient σ²/2 of the 1/ε growth rate. We compare the performance of the heuristic policies numerically with the minimum achievable average queue length and the lower bound. For a countable collection of (n, k) codes, the optimal average queue length is shown to be Ω(1/ε). We illustrate the selectivity among policies of the growth-rate optimality criterion for both finite and countable collections of (n, k) block codes.
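Restated in symbols, the scaling results quoted in the abstract (with E[Q*] denoting the minimum achievable average queue length) are:

```latex
% Scaling of the minimum achievable average queue length as the rate gap
% \epsilon (maximum code rate minus \lambda) vanishes, per the abstract:
\mathbb{E}[Q^{*}] = \Theta\!\left(\tfrac{1}{\epsilon}\right), \qquad
\mathbb{E}[Q^{*}] \sim \frac{\sigma^{2}}{2\epsilon} \quad \text{as } \epsilon \downarrow 0 .
```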
Abstract:
In this paper, we consider the setting of the pattern maximum likelihood (PML) problem studied by Orlitsky et al. We present a well-motivated heuristic algorithm for deciding when the PML distribution of a given pattern is uniform. The algorithm is based on the concept of a "uniform threshold": a threshold at which the uniform distribution exhibits an interesting phase transition in the PML problem, going from being a local maximum to being a local minimum.
Minimizing total weighted tardiness on heterogeneous batch processors with incompatible job families
Abstract:
In this paper, we address a scheduling problem that minimizes total weighted tardiness. The background for the paper derives from the automobile gear manufacturing process; we consider the bottleneck operation of the heat-treatment stage of gear manufacturing. Real-life features such as unequal release times, incompatible job families, non-identical job sizes, heterogeneous batch processors, and allowance for job splitting are considered. We develop a mathematical model that accounts for dynamic starting conditions. The problem is NP-hard, and hence heuristic algorithms are proposed to address it. For real-life, large-size problems, the performance of the proposed heuristic algorithms is evaluated using the method of estimated optimal solutions available in the literature. Extensive computational analyses reveal that the proposed heuristic algorithms consistently obtain near-optimal, statistically estimated solutions in reasonable computational time.
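For reference, the objective being minimized: with completion time C_j, due date d_j, and weight w_j for job j, total weighted tardiness is Σ_j w_j · max(0, C_j − d_j). A one-line check of the definition (hypothetical numbers, not from the paper):

```python
# Total weighted tardiness: sum of w_j * max(0, C_j - d_j) over jobs.
def total_weighted_tardiness(completion, due, weight):
    return sum(w * max(0.0, c - d) for c, d, w in zip(completion, due, weight))

# Three hypothetical jobs: tardiness 0, 1, and 2 -> 2*0 + 1*1 + 3*2 = 7.
print(total_weighted_tardiness([5, 9, 14], [6, 8, 12], [2, 1, 3]))
```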
Abstract:
Information spreading in a population can be modeled as an epidemic. Campaigners (e.g., election campaign managers, or companies marketing products or movies) are interested in spreading a message by a given deadline using limited resources. In this paper, we formulate this situation as an optimal control problem whose solution (via Pontryagin's Maximum Principle) prescribes an optimal resource allocation over the duration of the campaign. We consider two different scenarios: in the first, the campaigner can adjust a direct control (over time) that recruits individuals from the population (at some cost) to act as spreaders in the Susceptible-Infected-Susceptible (SIS) epidemic model. In the second, the campaigner can additionally adjust the effective spreading rate by incentivizing the infected in the Susceptible-Infected-Recovered (SIR) model. Our formulation uses a time-varying information spreading rate to model the changing interest level of individuals in the campaign as the deadline approaches. In both cases, we show the existence of a solution and its uniqueness for sufficiently small campaign deadlines. For a fixed spreading rate, we show the effectiveness of the optimal control strategy against a constant control strategy, a heuristic control strategy, and no control. We also show the sensitivity of the optimal control to the spreading-rate profile when it is time-varying.
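A minimal forward simulation of the first scenario's dynamics (an assumed SIS form with a recruitment control, for illustration only; the paper's exact model and the optimal control are not reproduced here):

```python
# Controlled SIS information epidemic, simulated by forward Euler
# (assumed dynamics for illustration): i(t) is the informed fraction,
# beta(t) a time-varying spreading rate modeling waning interest,
# gamma a forgetting rate, u(t) direct recruitment of susceptibles.
import numpy as np

def simulate_sis(i0, beta, gamma, u, T, dt=0.01):
    steps = int(T / dt)
    traj = np.empty(steps + 1)
    traj[0] = i = i0
    for k in range(steps):
        t = k * dt
        di = beta(t) * i * (1 - i) - gamma * i + u(t) * (1 - i)
        i = min(max(i + dt * di, 0.0), 1.0)  # keep the fraction in [0, 1]
        traj[k + 1] = i
    return traj

# Constant recruitment control with a decaying interest profile beta(t).
traj = simulate_sis(0.01, beta=lambda t: 0.5 * np.exp(-t), gamma=0.1,
                    u=lambda t: 0.05, T=10.0)
```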
Abstract:
Multiple copies of a gene require enhanced investment on the part of the cell and, as such, call for an explanation. The observation that Escherichia coli has four copies of the initiator tRNA (tRNA(i)) genes, encoding a special tRNA (tRNA(fMet)) required to start protein synthesis, is puzzling, particularly because the cell appears to be unaffected by the removal of one copy. However, the fitness of an organism has both absolute and relative connotations. We therefore carried out growth competition experiments between E. coli strains that differ in the number of tRNA(i) genes they contain. This has enabled us to uncover an unexpected link between tRNA(i) gene copy number and protein synthesis, nutritional status, and fitness. Wild-type strains with the canonical four tRNA(i) genes are favored in nutrient-rich environments, while those carrying fewer are favored in nutrient-poor environments. Auxotrophs behave as if they have a nutritionally poor internal environment. A heuristic model linking tRNA(i) gene copy number, genetic stress, and growth rate accounts for the findings. Our observations provide strong evidence that natural selection can act through seemingly minor quantitative variations in gene copy number and thereby affect organismal fitness.
Abstract:
Our work is motivated by impromptu (or "as-you-go") deployment of wireless relay nodes along a path, a need that arises in many situations. In this paper, the path is modeled as starting at the origin (where the data sink, e.g., the control center, is located) and evolving randomly over a lattice in the positive quadrant. A person walks along the path, deploying relay nodes as he goes. At each step, the path can randomly either continue in the same direction, take a turn, or come to an end, at which point a data source (e.g., a sensor) has to be placed that will send packets to the data sink. A decision must be made at each step whether or not to place a wireless relay node. Assuming that the packet generation rate at the source is very low and that scheduling is simple and link-by-link, we consider the problem of sequential relay placement so as to minimize the expectation of an end-to-end cost metric (a linear combination of the sum of convex hop costs and the number of relays placed). This impromptu relay placement problem is formulated as a total-cost Markov decision process. First, we derive the optimal policy in terms of an optimal placement set and show that this set is characterized by a boundary (with respect to the position of the last placed relay) beyond which it is optimal to place the next relay. Next, based on a simpler one-step-look-ahead characterization of the optimal policy, we propose an algorithm that provably converges to the optimal placement set in a finite number of steps and is faster than value iteration. We show by simulations that the distance-threshold-based heuristic usually assumed in the literature is close to optimal, provided that the threshold distance is carefully chosen.
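A sketch of that distance-threshold baseline (illustrative only; the threshold is measured here in steps along the walk, i.e., hop distance on the lattice, rather than under the paper's cost model):

```python
# Distance-threshold relay placement along a random lattice walk
# (the baseline heuristic mentioned in the abstract, not the optimal
# MDP policy derived in the paper).
def threshold_policy(num_steps, threshold):
    """Place a relay whenever `threshold` steps have elapsed since the
    last placement. Returns the step indices at which relays go down."""
    placements, since_last = [], 0
    for step in range(num_steps):
        since_last += 1
        if since_last >= threshold:
            placements.append(step)
            since_last = 0
    return placements

# E.g., a 20-step walk with threshold 6 places relays at steps 5, 11, 17.
print(threshold_policy(20, 6))
```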
Abstract:
In this work, we address the recovery of block-sparse vectors with intra-block correlation, i.e., vectors in which the correlated nonzero entries are constrained to lie in a few clusters, from noisy underdetermined linear measurements. Among Bayesian sparse recovery techniques, cluster Sparse Bayesian Learning (SBL) is an efficient tool for recovering block-sparse vectors with intra-block correlation; however, it uses a heuristic method to estimate the intra-block correlation. In this paper, we propose the Nested SBL (NSBL) algorithm, derived from a novel Bayesian formulation that facilitates the use of the monotonically convergent nested Expectation Maximization (EM) algorithm and a Kalman-filtering-based learning framework. Unlike the cluster-SBL algorithm, this formulation leads to closed-form EM updates for estimating the correlation coefficient. We demonstrate the efficacy of the proposed NSBL algorithm using Monte Carlo simulations.
Abstract:
Since the time of Kirkwood, observed deviations of the dielectric constant of aqueous protein solutions from that of neat water (∼80), along with the slower decay of polarization, have been subjects of enormous interest, controversy, and debate. Most common proteins have large permanent dipole moments (often more than 100 D) that can influence the structure and dynamics of even distant water molecules, thereby affecting the collective polarization fluctuation of the solution, which in turn can significantly alter the solution's dielectric constant. The distance dependence of polarization fluctuation can therefore provide important insight into the nature of biological water. We explore these aspects by studying aqueous solutions of four proteins of different characteristics and varying sizes: chicken villin headpiece subdomain (HP-36), immunoglobulin binding domain protein G (GB1), hen egg-white lysozyme (LYS), and myoglobin (MYO). We simulate fairly large systems consisting of a single protein molecule and 20000-30000 water molecules (varied according to protein size), giving concentrations in the range of ∼2-3 mM. We find that the calculated dielectric constant of the system shows a noticeable increment over that of neat water in all cases. The total dipole moment autocorrelation function of water, ⟨δM_W(0)·δM_W(t)⟩, is found to be sensitive to the nature of the protein. Surprisingly, the dipole moment of the protein and the total dipole moment of the water molecules are found to be only weakly coupled. Shell-wise decomposition of the water molecules around the protein reveals a higher density in the first layer compared to the succeeding ones. We also calculate a heuristic effective dielectric constant of successive layers and find that the layer adjacent to the protein has a much lower value (∼50); progressive layers, however, show successive increments, finally reaching a value close to that of the bulk 4-5 layers away. We also calculate the shell-wise orientational correlation function and tetrahedral order parameter to understand the local dynamics and structural rearrangement of water. A theoretical analysis providing a simple method for calculating the shell-wise local dielectric constant, and the implications of these findings, are discussed in detail in the present work.
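For orientation, the standard route from dipole fluctuations to a dielectric constant in such simulations is a Kirkwood-type fluctuation formula; one common form is shown below (for conducting boundary conditions, in Gaussian units; the paper's shell-wise estimator refines this idea and is not reproduced here):

```latex
% Fluctuation estimator of the static dielectric constant from the total
% dipole moment M of the (sub)system of volume V at temperature T:
\varepsilon = 1 + \frac{4\pi}{3 V k_{B} T}
  \left( \langle \mathbf{M}\cdot\mathbf{M} \rangle
       - \langle \mathbf{M} \rangle \cdot \langle \mathbf{M} \rangle \right)
```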
Abstract:
Task-parallel languages are increasingly popular, and many of them provide expressive mechanisms for intertask synchronization. For example, OpenMP 4.0 will integrate data-driven execution semantics derived from the StarSs research language. Compared to the more restrictive data-parallel and fork-join concurrency models, the advanced features being introduced into task-parallel models enable improved scalability through load balancing, memory latency hiding, mitigation of the pressure on memory bandwidth, and, as a side effect, reduced power consumption. In this article, we develop a systematic approach to compiling loop nests into concurrent, dynamically constructed graphs of dependent tasks. We propose a simple and effective heuristic that selects the most profitable parallelization idiom for every dependence type and communication pattern. This heuristic enables the extraction of interband parallelism (cross-barrier parallelism) in numerical computations ranging from linear algebra to structured grids and image processing. The proposed static analysis and code generation alleviate the need for a full-blown dependence resolver to track the readiness of tasks at runtime. We evaluate our approach and algorithms in the PPCG compiler, targeting OpenStream, a representative dataflow task-parallel language with explicit intertask dependences and a lightweight runtime. Experimental results demonstrate the effectiveness of the approach.