930 results for "Linear optimization approach"


Relevance: 30.00%

Abstract:

Protein molecular motors are natural nano-machines that convert the chemical energy from the hydrolysis of adenosine triphosphate into mechanical work. These efficient machines are central to many biological processes, including cellular motion, muscle contraction and cell division. The remarkable energetic efficiency of the protein molecular motors coupled with their nano-scale has prompted an increasing number of studies focusing on their integration in hybrid micro- and nanodevices, in particular using linear molecular motors. The translation of these tentative devices into technologically and economically feasible ones requires an engineering, design-orientated approach based on a structured formalism, preferably mathematical. This contribution reviews the present state of the art in the modelling of protein linear molecular motors, as relevant to the future design-orientated development of hybrid dynamic nanodevices. © 2009 The Royal Society of Chemistry.

Relevance: 30.00%

Abstract:

Reported homocysteine (HCY) concentrations in human serum show poor concordance amongst laboratories due to endogenous HCY in the matrices used for assay calibrators and QCs. Hence, we have developed a fully validated LC–MS/MS method for measuring HCY concentrations in human serum samples that addresses this issue by minimising matrix effects. We used small volumes (20 μL) of 2% Bovine Serum Albumin (BSA) as a surrogate matrix for making calibrators and QCs, with concentrations adjusted for the endogenous HCY concentration in the surrogate matrix using the method of standard additions. To aliquots (20 μL) of human serum samples, calibrators, or QCs were added HCY-d4 (internal standard) and tris-(2-carboxyethyl)phosphine hydrochloride (TCEP) as the reducing agent. After protein precipitation, diluted supernatants were injected into the LC–MS/MS system. Calibration curves were linear; QCs were accurate (5.6% deviation from nominal), precise (CV ≤ 9.6%), and stable for four freeze–thaw cycles, when stored at room temperature for 5 h, and at −80 °C for 27 days. Recoveries from QCs in surrogate matrix and pooled human serum were 91.9% and 95.9%, respectively. There was no matrix effect in 6 different individual serum samples, including one that was haemolysed. Our LC–MS/MS method satisfied all of the validation criteria of the 2012 EMA guideline.
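
As a rough illustration of the standard-additions adjustment described above (the numbers and names below are hypothetical, not taken from the paper), the endogenous HCY in the surrogate matrix can be estimated by regressing instrument response against the spiked concentration and extrapolating to zero response:

```python
import numpy as np

# Method of standard additions (illustrative sketch, hypothetical numbers):
# known amounts of HCY are spiked into the 2% BSA surrogate matrix and the
# instrument response is regressed against the added concentration.
added = np.array([0.0, 2.5, 5.0, 10.0, 20.0])         # added HCY, umol/L
response = np.array([0.41, 0.66, 0.90, 1.41, 2.39])   # peak-area ratio vs IS

slope, intercept = np.polyfit(added, response, 1)

# Extrapolating the fitted line to zero response gives the endogenous
# concentration already present in the surrogate matrix.
endogenous = intercept / slope
print(f"estimated endogenous HCY: {endogenous:.2f} umol/L")

# Nominal calibrator concentrations are then the spiked amount plus the
# endogenous contribution, as the abstract describes.
nominal = added + endogenous
```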

Relevance: 30.00%

Abstract:

This book focuses on how evolutionary computing techniques benefit engineering research and development by converting practical problems of growing complexity into simple formulations, thus largely reducing development effort. The book begins with an overview of optimization theory and modern evolutionary computing techniques, and goes on to cover specific applications of evolutionary computing to power system optimization and control problems.
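
As a minimal sketch of the kind of evolutionary technique the book surveys (not code from the book; the test function and parameters are illustrative), a (1+1) evolution strategy mutates a candidate solution and keeps improvements:

```python
import random

# Toy objective: minimize the sphere function sum(x_i^2).
def sphere(x):
    return sum(xi * xi for xi in x)

def one_plus_one_es(dim=5, sigma=0.5, iters=2000):
    parent = [random.uniform(-5, 5) for _ in range(dim)]
    best = sphere(parent)
    for _ in range(iters):
        # Gaussian mutation of every coordinate of the parent.
        child = [xi + random.gauss(0, sigma) for xi in parent]
        f = sphere(child)
        if f < best:              # keep the offspring only if it improves
            parent, best = child, f
    return parent, best

solution, fitness = one_plus_one_es()
print(f"best fitness found: {fitness:.6f}")
```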

Relevance: 30.00%

Abstract:

The quality of environmental decisions should be gauged against managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management actions in combination with a monitoring system, and actions can be chosen strategically to obtain specific kinds of information. Formal decision-making tools can choose actions to favor such learning in two ways: implicitly, via the optimization algorithm that is used when there is a management objective (for instance, when using adaptive management), or explicitly, by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives - a pure management objective, a pure learning objective, and an objective that is a weighted mixture of the two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision-making tools for conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision-making tools can be improved. © 2010 Elsevier Ltd.
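
A minimal sketch of the weighted-mixture objective, assuming a toy setting with two competing models and made-up growth-rate predictions (none of this is the paper's case study):

```python
import numpy as np

# Two competing models predict the population growth delivered by each
# candidate action; model weights encode the current belief.
growth = np.array([[1.02, 1.08],    # action 0 under model A, model B
                   [1.05, 1.05],    # action 1: good either way, uninformative
                   [0.98, 1.12]])   # action 2: strongly discriminates models
belief = np.array([0.5, 0.5])       # current probability of each model

management_value = growth @ belief  # expected growth rate per action

# One simple learning proxy: how sharply an action separates the models'
# predictions; observing the outcome of a discriminating action updates
# the belief the most.
learning_value = np.abs(growth[:, 0] - growth[:, 1])

w = 0.7                             # weight on management versus learning
objective = (w * management_value
             + (1 - w) * learning_value / learning_value.max())
print("chosen action:", int(np.argmax(objective)))
```

With all three actions equal in expected growth here, the mixed objective breaks the tie in favour of the action that best discriminates between the models.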

Relevance: 30.00%

Abstract:

In the finite element modelling of structural frames, external loads such as wind, dead and imposed loads usually act along the elements rather than at the nodes only. Conventionally, when an element is subjected to such general transverse element loads, they are converted to nodal forces acting at the ends of the element by either the lumped or the consistent load approach. Capturing the first- and second-order elastic behaviour along the element is especially important for steel structures, in particular thin-walled ones, whereas stocky sections are generally governed by inelastic behaviour. Accurate first- and second-order elastic displacement solutions for the element load effect along an element are therefore crucial, but they cannot be obtained with either the lumped nodal or the consistent load method alone, because neither enforces an equilibrium condition within the element in the finite element formulation; this can impair the structural safety of the steel structure. A dedicated element load method is therefore required to account for the element load nonlinearly. If accurate displacement solutions for first- and second-order elastic behaviour along an element are sought from a sophisticated non-linear element stiffness formulation, a separate prescribed stiffness matrix would be needed for each of the many specific transverse loading patterns encountered. To circumvent this shortcoming, the present paper proposes a numerical technique that includes the transverse element loading in the non-linear stiffness formulation without numerous prescribed stiffness matrices, and that can predict structural responses involving the effect of first-order element loads as well as the second-order coupling between the transverse load and the axial force in the element. The paper shows that the principle of superposition can be applied to derive a generalized stiffness formulation for the element load effect, so that the form of the stiffness matrix remains unchanged for any specific loading pattern; only the magnitude of the loading (the element load coefficients) needs to be adjusted in the stiffness formulation, and the non-linear effect of element loading can then be captured by updating the element load coefficients through the non-linear solution procedures. In principle, the element load distribution is converted into a single loading magnitude at mid-span, which provides the initial perturbation that triggers the member bowing effect due to the transverse element loads. This approach sacrifices the effect of the element load distribution away from mid-span; hence the load-deflection behaviour elsewhere may not be as accurate as at mid-span, but the discrepancy is shown to be negligible. This novelty yields a very useful generalized stiffness formulation for a single higher-order element with arbitrary transverse loading patterns. Another contribution of this paper is the shift from purely nodal response (system analysis) to both nodal and element response (sophisticated element formulation): for the conventional finite element method, such as the cubic element, accurate solutions are available only at the nodes, so structural safety cannot be reliably assessed within an element, which hinders engineering application. The results of the paper are verified against analytical stability function studies, as well as against numerical results reported by independent researchers for several simple frames.
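
For reference, a minimal sketch of the conventional consistent-load conversion that the paper moves beyond (these are the standard fixed-end force formulas for a uniformly distributed load; the proposed element-load-coefficient formulation itself is not reproduced here):

```python
import numpy as np

# Conventional consistent-load conversion: a uniformly distributed load w
# on an Euler-Bernoulli beam element of length L is replaced by equivalent
# end shears and moments. Inside the element the first- and second-order
# deflected shape is lost, which is the shortcoming the proposed
# element-load formulation addresses.
def consistent_udl_loads(w, L):
    # [shear_1, moment_1, shear_2, moment_2]
    return np.array([w * L / 2,  w * L**2 / 12,
                     w * L / 2, -w * L**2 / 12])

print(consistent_udl_loads(w=10.0, L=6.0))  # kN and kN*m for w in kN/m
```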

Relevance: 30.00%

Abstract:

Experimental studies have found that when state-of-the-art probabilistic linear discriminant analysis (PLDA) speaker verification systems are trained on out-domain data, speaker verification performance degrades significantly due to the mismatch between development data and evaluation data. To overcome this problem, we propose a novel unsupervised inter-dataset variability (IDV) compensation approach to compensate for the dataset mismatch. The IDV-compensated PLDA system achieves over 10% relative improvement in EER over the out-domain PLDA system by effectively compensating for the mismatch between in-domain and out-domain data.
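
A heavily hedged sketch of one common recipe for dataset-shift compensation, not necessarily the paper's exact IDV algorithm (dimensions and data below are synthetic): estimate the dominant between-dataset directions and project them out of the feature vectors before PLDA scoring:

```python
import numpy as np

rng = np.random.default_rng(0)
out_domain = rng.normal(0.0, 1.0, (5000, 400))   # out-domain i-vectors
in_domain = rng.normal(0.3, 1.0, (500, 400))     # unlabeled in-domain data

# Between-dataset shift; here rank one, but with several datasets one
# would accumulate one outer product per dataset pair.
shift = in_domain.mean(axis=0) - out_domain.mean(axis=0)
scatter = np.outer(shift, shift)
eigvals, eigvecs = np.linalg.eigh(scatter)
U = eigvecs[:, eigvals > 1e-8]                   # nuisance directions

def compensate(x):
    # Remove the nuisance subspace: x <- (I - U U^T) x
    return x - (x @ U) @ U.T

compensated = compensate(out_domain)             # fed to PLDA training
```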

Relevance: 30.00%

Abstract:

Guaranteeing Quality of Service (QoS) with minimum computation cost is the most important objective of cloud-based MapReduce computations. Minimizing the total computation cost of cloud-based MapReduce computations is achieved through MapReduce placement optimization. MapReduce placement optimization approaches can be classified into two categories: homogeneous and heterogeneous MapReduce placement optimization. It is generally believed that heterogeneous MapReduce placement optimization is more effective than homogeneous MapReduce placement optimization in reducing the total running cost of cloud-based MapReduce computations. This paper proposes a new approach to the heterogeneous MapReduce placement optimization problem, in which the problem is transformed into a constrained combinatorial optimization problem and solved by an innovative constructive algorithm. Experimental results show that the running cost of the cloud-based MapReduce computation platform using this new approach is 24.3%–44.0% lower than that using the most popular homogeneous MapReduce placement approach, and 2.0%–36.2% lower than that using a heterogeneous MapReduce placement approach that does not consider the spare resources from existing MapReduce computations. The experimental results also demonstrate the good scalability of this new approach.
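
A minimal sketch of a constructive placement heuristic in the spirit described above (VM types, prices and job sizes are hypothetical; this is not the paper's actual algorithm): each job is placed on spare capacity of machines already paid for before a new machine is rented:

```python
# Greedy constructive placement of MapReduce jobs onto heterogeneous VMs.
vm_types = [  # (name, capacity in slots, hourly cost)
    ("small", 4, 0.10),
    ("large", 16, 0.34),
]
jobs = [6, 3, 10, 2, 5]          # required slots per MapReduce job
rented = []                      # mutable records: [type_name, free_slots, cost]

for demand in sorted(jobs, reverse=True):        # place largest jobs first
    # 1) try spare resources on machines we already pay for
    fit = [m for m in rented if m[1] >= demand]
    if fit:
        m = min(fit, key=lambda m: m[1])         # tightest fit wastes least
        m[1] -= demand
        continue
    # 2) otherwise rent the cheapest type that can hold the job
    name, cap, cost = min((t for t in vm_types if t[1] >= demand),
                          key=lambda t: t[2])
    rented.append([name, cap - demand, cost])

print("machines rented:", len(rented))
print("total hourly cost:", sum(m[2] for m in rented))
```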

Relevance: 30.00%

Abstract:

We present a clustering-only approach to the problem of speaker diarization to eliminate the need for the commonly employed and computationally expensive Viterbi segmentation and realignment stage. We use multiple linear segmentations of a recording and carry out complete-linkage clustering within each segmentation scenario to obtain a set of clustering decisions for each case. We then collect all clustering decisions, across all cases, to compute a pairwise vote between the segments and conduct complete-linkage clustering to cluster them at a resolution equal to the minimum segment length used in the linear segmentations. We use our proposed cluster-voting approach to carry out speaker diarization and linking across the SAIVT-BNEWS corpus of Australian broadcast news data. We compare our technique to an equivalent baseline system with Viterbi realignment and show that our approach can outperform the baseline technique with respect to the diarization error rate (DER) and attribution error rate (AER).
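
A toy sketch of the cluster-voting idea under stated assumptions (synthetic features, three hypothetical linear segmentations, two speakers; not the SAIVT-BNEWS pipeline):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
n = 20                                   # fine segments at minimum length
# Synthetic segment features: first half one speaker, second half another.
feats = np.vstack([rng.normal(i // 10, 0.3, (1, 8)) for i in range(n)])

votes = np.zeros((n, n))
for factor in (1, 2, 4):                 # progressively coarser segmentations
    groups = np.arange(n) // factor      # merge `factor` fine segments
    pooled = np.vstack([feats[groups == g].mean(axis=0)
                        for g in np.unique(groups)])
    Z = linkage(pooled, method="complete")
    labels = fcluster(Z, t=2, criterion="maxclust")[groups]
    votes += labels[:, None] == labels[None, :]   # pairwise agreement vote

# Convert votes to a distance and cluster once more at fine resolution.
dist = votes.max() - votes
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="complete")
final = fcluster(Z, t=2, criterion="maxclust")
print(final)                             # speaker label per fine segment
```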

Relevance: 30.00%

Abstract:

In this paper, we introduce the Stochastic Adams-Bashforth (SAB) and Stochastic Adams-Moulton (SAM) methods as an extension of the tau-leaping framework to past information. Using the theta-trapezoidal tau-leap method of weak order two as a starting procedure, we show that the k-step SAB method with k ≥ 3 is of weak order three in the mean and correlation, while a predictor-corrector implementation of the SAM method is of weak order three in the mean but only order one in the correlation. These convergence results have been derived analytically for linear problems and tested numerically on both linear and non-linear systems. A series of additional examples is implemented to demonstrate the efficacy of this approach.
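
For orientation, a basic explicit tau-leap for a linear birth-death process (rates below are illustrative); the SAB/SAM schemes extend this kind of one-step update by reusing propensity information from past steps, much as Adams methods do for ODEs:

```python
import numpy as np

rng = np.random.default_rng(2)

def tau_leap(x0=100, b=0.9, d=1.0, tau=0.05, steps=400):
    """Explicit tau-leap for X -> X+1 at rate b*X and X -> X-1 at rate d*X."""
    x, path = x0, [x0]
    for _ in range(steps):
        births = rng.poisson(b * x * tau)   # reaction firings in [t, t+tau)
        deaths = rng.poisson(d * x * tau)
        x = max(x + births - deaths, 0)     # clamp to avoid negative counts
        path.append(x)
    return np.array(path)

# Sample mean at T = steps * tau = 20; analytically x0 * exp((b - d) * T).
mean_final = np.mean([tau_leap()[-1] for _ in range(200)])
print(f"sample mean population at T = 20: {mean_final:.1f}")
```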

Relevance: 30.00%

Abstract:

A global framework for linear stability analyses of traffic models, based on the dispersion relation root locus method, is presented and applied to a broad class of car-following (CF) models. This approach can analyse all aspects of the dynamics: long- and short-wave behaviours, phase velocities and stability features. The methodology is applied to investigate the potential benefits of connected vehicles, i.e. V2V communication enabling a vehicle to send information to and receive information from surrounding vehicles. We focus on the design of the cooperation coefficients, which weight the information from downstream vehicles. The coefficients are tuned and different ways of implementing an efficient cooperative strategy are discussed. This paper thus provides design methods for obtaining robust stability of traffic models, with application to cooperative CF models.
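
A small sketch of the dispersion-relation stability scan, using the classical optimal velocity model (OVM) as a stand-in for the broad CF class treated in the paper (the linearization below is the textbook OVM result, not the paper's cooperative model):

```python
import numpy as np

# Linearized OVM: perturbations y_n ~ exp(i*k*n + lambda*t) satisfy
#   lambda^2 + a*lambda - a*Vp*(exp(i*k) - 1) = 0
# for each wave number k, where a is the sensitivity and Vp the slope of
# the optimal velocity function. The platoon is stable if every root has
# Re(lambda) <= 0 for all k.
def max_growth_rate(a, Vp, n_waves=400):
    worst = -np.inf
    for k in np.linspace(1e-3, 2 * np.pi - 1e-3, n_waves):
        roots = np.roots([1.0, a, -a * Vp * (np.exp(1j * k) - 1.0)])
        worst = max(worst, roots.real.max())
    return worst

# Classical OVM criterion: stable iff Vp < a / 2.
print(max_growth_rate(a=1.0, Vp=0.4))   # negative -> string stable
print(max_growth_rate(a=1.0, Vp=0.7))   # positive -> unstable
```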

Relevance: 30.00%

Abstract:

We consider the problem of controlling a Markov decision process (MDP) with a large state space, so as to minimize average cost. Since it is intractable to compete with the optimal policy for large scale problems, we pursue the more modest goal of competing with a low-dimensional family of policies. We use the dual linear programming formulation of the MDP average cost problem, in which the variable is a stationary distribution over state-action pairs, and we consider a neighborhood of a low-dimensional subset of the set of stationary distributions (defined in terms of state-action features) as the comparison class. We propose a technique based on stochastic convex optimization and give bounds that show that the performance of our algorithm approaches the best achievable by any policy in the comparison class. Most importantly, this result depends on the size of the comparison class, but not on the size of the state space. Preliminary experiments show the effectiveness of the proposed algorithm in a queuing application.
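
A minimal instance of the dual LP formulation described above, on a toy two-state, two-action MDP (this illustrates the formulation only; the paper's contribution is the large-scale stochastic convex optimization, not this exact solve):

```python
import numpy as np
from scipy.optimize import linprog

# Dual LP of the average-cost MDP: variables mu(s, a) >= 0 form a
# stationary state-action distribution. Minimize sum mu*c subject to
#   sum_a mu(s', a) = sum_{s,a} mu(s, a) P(s' | s, a)   (flow conservation)
#   sum_{s,a} mu(s, a) = 1                               (normalization)
S, A = 2, 2
P = np.zeros((S, A, S))
P[0, 0] = [0.9, 0.1]; P[0, 1] = [0.2, 0.8]
P[1, 0] = [0.5, 0.5]; P[1, 1] = [0.1, 0.9]
c = np.array([[1.0, 2.0],
              [4.0, 0.5]])                  # cost per state-action pair

n = S * A                                    # mu flattened as index s*A + a
A_eq = np.zeros((S + 1, n))
for s2 in range(S):                          # flow conservation rows
    for s in range(S):
        for a in range(A):
            A_eq[s2, s * A + a] = P[s, a, s2] - (1.0 if s == s2 else 0.0)
A_eq[S, :] = 1.0                             # normalization row
b_eq = np.zeros(S + 1); b_eq[S] = 1.0

res = linprog(c.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
mu = res.x.reshape(S, A)
print("optimal average cost:", res.fun)
print("policy supported by mu:", mu.argmax(axis=1))
```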

Relevance: 30.00%

Abstract:

The richness of the iris texture and its variability across individuals make it a useful biometric trait for personal authentication. One of the key stages in classical iris recognition is the normalization process, where the annular iris region is mapped to a dimensionless pseudo-polar coordinate system. This process results in a rectangular structure that can be used to compensate for differences in scale and variations in pupil size. Most iris recognition methods in the literature adopt linear sampling in the radial and angular directions when performing iris normalization. In this paper, a biomechanical model of the iris is used to define a novel nonlinear normalization scheme that improves iris recognition accuracy under different degrees of pupil dilation. The proposed biomechanical model is used to predict the radial displacement of any point in the iris at a given dilation level, and this information is incorporated in the normalization process. Experimental results on the WVU pupil light reflex database (WVU-PLR) indicate the efficacy of the proposed technique, especially when matching iris images with large differences in pupil size.
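
A rough sketch contrasting linear rubber-sheet sampling with a nonlinear radial map of the kind a biomechanical model would supply (the image, geometry and power-law displacement below are placeholders, not the paper's model):

```python
import numpy as np

def normalize(iris_img, cx, cy, r_pupil, r_iris,
              n_radial=64, n_angular=256, radial_map=lambda r: r):
    """Map the annular iris region to a rectangular n_radial x n_angular grid.

    radial_map transforms the normalized radius (0 = pupil, 1 = limbus);
    the identity gives the classical linear sampling.
    """
    theta = np.linspace(0.0, 2 * np.pi, n_angular, endpoint=False)
    r = radial_map(np.linspace(0.0, 1.0, n_radial))
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    radius = r_pupil + rr * (r_iris - r_pupil)
    x = (cx + radius * np.cos(tt)).astype(int).clip(0, iris_img.shape[1] - 1)
    y = (cy + radius * np.sin(tt)).astype(int).clip(0, iris_img.shape[0] - 1)
    return iris_img[y, x]

img = np.random.rand(480, 640)               # stand-in for a real iris image
linear = normalize(img, 320, 240, 40, 110)
# A nonlinear map compresses sampling near the pupil, mimicking how tissue
# displaces under dilation in a biomechanical model (power law used here
# purely as a placeholder for the model-predicted displacement).
nonlinear = normalize(img, 320, 240, 40, 110, radial_map=lambda r: r ** 1.5)
```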

Relevance: 30.00%

Abstract:

This paper presents stylized models for conducting performance analysis of a manufacturing supply chain network (SCN) in a stochastic setting with batch ordering. We use queueing models to capture the behavior of the SCN, and couple the analysis with an inventory optimization model that can be used for designing inventory policies. In the first case, we model one manufacturer with one warehouse, which supplies various retailers. We determine the optimal inventory level at the warehouse that minimizes the total expected cost of carrying inventory, the backorder cost associated with serving orders in the backlog queue, and the ordering cost. In the second model, we impose a service level constraint in terms of fill rate (the probability that an order is filled from stock at the warehouse), assuming that customers do not balk from the system. We present several numerical examples to illustrate the model and its various features. In the third case, we extend the model to a three-echelon inventory model that explicitly considers the logistics process.
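
A minimal sketch in the spirit of the first model (hypothetical costs and demand; the paper's queueing analysis is richer): choose the warehouse base-stock level minimizing expected holding plus backorder cost against Poisson lead-time demand:

```python
from math import exp

def expected_cost(S, lam, h, b, kmax=80):
    """Expected holding + backorder cost for base-stock level S under
    Poisson(lam) lead-time demand."""
    p = exp(-lam)                     # P(K = 0)
    cost = 0.0
    for k in range(kmax):
        cost += p * (h * max(S - k, 0) + b * max(k - S, 0))
        p *= lam / (k + 1)            # Poisson recurrence gives P(K = k + 1)
    return cost

lam, h, b = 12.0, 1.0, 9.0            # mean lead-time demand, unit costs
best_S = min(range(60), key=lambda S: expected_cost(S, lam, h, b))
print("optimal base-stock level:", best_S)
```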

Relevance: 30.00%

Abstract:

We describe a surprising cooperative adsorption process observed by scanning tunneling microscopy (STM) at the liquid−solid interface. The process involves the association of a threefold hydrogen-bonding unit, trimesic acid (TMA), with straight-chain aliphatic alcohols of varying length (from C7 to C30), which coadsorb on highly oriented pyrolytic graphite (HOPG) to form linear patterns. In certain cases, the known TMA “flower pattern” can coexist temporarily with the linear TMA−alcohol patterns, but it eventually disappears. Time-lapsed STM imaging shows that the evolution of the flower pattern is a classical ripening phenomenon. The periodicity of the linear TMA−alcohol patterns can be modulated by choosing alcohols with appropriate chain lengths, and the precise structure of the patterns depends on the parity of the carbon count in the alkyl chain. Interactions that lead to this odd−even effect are analyzed in detail. The molecular components of the patterns are achiral, yet their association by hydrogen bonding leads to the formation of enantiomeric domains on the surface. The interrelation of these domains and the observation of superperiodic structures (moiré patterns) are rationalized by considering interactions with the underlying graphite surface and within the two-dimensional crystal of the adsorbed molecules. Comparison of the observed two-dimensional structures with the three-dimensional crystal structures of TMA−alcohol complexes determined by X-ray crystallography helps reveal the mechanism of molecular association in these two-component systems.

Relevance: 30.00%

Abstract:

Two beetle-type scanning tunneling microscopes are described. Both designs have the thermal stability of the Besocke beetle and the simplicity of the Wilms beetle. Moreover, sample holders were designed that also allow both semiconductor wafers and metal single crystals to be studied. The coarse approach is a linear motion of the beetle towards the sample using inertial slip–stick motion. Ten wires are required to control the position of the beetle and scanner and measure the tunneling current. The two beetles were built with different sized piezolegs, and the vibrational properties of both beetles were studied in detail. It was found, in agreement with previous work, that the beetle bending mode is the lowest principal eigenmode. However, in contrast to previous vibrational studies of beetle-type scanning tunneling microscopes, we found that the beetles did not have the “rattling” modes that are thought to arise from the beetle sliding or rocking between surface asperities on the raceway. The mass of our beetles is 3–4 times larger than the mass of beetles where rattling modes have been observed. We conjecture that the mass of our beetles is above a “critical beetle mass.” This is defined to be the beetle mass that attenuates the rattling modes by elastically deforming the contact region to the extent that the rattling modes cannot be identified as distinct modes in cross-coupling measurements.