923 results for Search Engine Optimization Methods


Relevance: 30.00%

Abstract:

Combining data from multiple analytical platforms is essential for a comprehensive study of the molecular phenotype (metabotype) of a given biological sample. The metabolite profiles generated are intrinsically dependent on the analytical platforms, each requiring optimization of instrumental parameters, separation conditions, and sample extraction to deliver maximal biological information. An in-depth evaluation of extraction protocols for characterizing the metabolome of the hepatobiliary fluke Fasciola hepatica, using ultra-performance liquid chromatography (UPLC) and capillary electrophoresis (CE) coupled with mass spectrometry (MS), is presented. The spectrometric methods were characterized by performance, and metrics of merit were established, including precision, mass accuracy, selectivity, sensitivity, and platform stability. Although a core group of molecules was common to all methods, each platform contributed a unique set, whereby 142 metabolites out of 14,724 features were identified. A mixture design revealed that a chloroform:methanol:water proportion of 15:59:26 was globally the best composition for metabolite extraction across the UPLC-MS and CE-MS platforms, accommodating different columns and ionization modes. Despite the general assumption that platform-adapted protocols are necessary for effective metabotype characterization, we show that an appropriately designed single extraction procedure can fit the requirements of all technologies. This may constitute a paradigm shift in developing efficient protocols for high-throughput metabolite profiling with more general analytical applicability.
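
As a hedged illustration of the mixture-design idea mentioned above, the following Python sketch enumerates a simplex lattice of candidate chloroform:methanol:water compositions and ranks them by a feature-coverage score. The grid step, scoring function, and feature counts are placeholders and do not correspond to the design or data of the paper.

```python
# Minimal sketch of a ternary mixture-design screen for extraction solvents.
# The scoring function and candidate grid are illustrative placeholders; the
# paper's actual design, responses, and models are not reproduced here.
import itertools

def simplex_lattice(step=5):
    """Yield chloroform:methanol:water percentages summing to 100."""
    for chloro, meoh in itertools.product(range(0, 101, step), repeat=2):
        water = 100 - chloro - meoh
        if water >= 0:
            yield chloro, meoh, water

def coverage_score(composition, measured_features):
    """Look up how many metabolite features a composition recovered.

    `measured_features` would come from running each extraction on the
    UPLC-MS / CE-MS platforms; here it is simply a dict of made-up counts.
    """
    return measured_features.get(composition, 0)

# Hypothetical feature counts for three candidate compositions (not real data).
measured = {(15, 60, 25): 1200, (30, 50, 20): 950, (0, 80, 20): 700}

best = max(simplex_lattice(step=5), key=lambda c: coverage_score(c, measured))
print("best candidate (CHCl3:MeOH:H2O):", best)
```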

Relevance: 30.00%

Abstract:

The aim of this work was to perform a systematic study of the parameters that can influence the composition, morphology, and catalytic activity of PtSn/C nanoparticles, and to compare two different methods of nanocatalyst preparation, namely microwave-assisted heating (MW) and thermal decomposition of polymeric precursors (DPP). The effects of the reducing and stabilizing agents on the catalytic activity and morphology of Pt75Sn25/C catalysts prepared by microwave-assisted heating were investigated for optimization purposes. Short-chain alcohols such as ethanol, ethylene glycol, and propylene glycol were evaluated as reducing agents, and sodium acetate and citric acid were examined as stabilizing agents for the MW procedure. Catalysts obtained from propylene glycol displayed higher catalytic activity than catalysts prepared in ethylene glycol. Introduction of sodium acetate enhanced the catalytic activity, but this beneficial effect persisted only up to a critical acetate concentration. Optimization of the MW synthesis allowed the preparation of highly dispersed catalysts with average sizes between 2.0 and 5.0 nm. Comparison of the best catalyst prepared by MW with a catalyst of similar composition prepared by the polymeric precursors method showed that the catalytic activity of the material can be improved when proper preparation conditions are achieved. (C) 2012 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

Synchronous telecommunication networks, distributed control systems, and integrated circuits depend for accurate operation on a reliable time-basis signal extracted from the line data stream and available to every node. In this sense, the existence of a sub-network (inside the main network) dedicated to the distribution of clock signals is crucially important. There are different solutions for the architecture of the time-distribution sub-network, and choosing one of them depends on cost, precision, reliability, and operational security. In this work we discuss: (i) the possible time-distribution networks and their usual topologies and arrangements; (ii) how parameters of the network nodes can affect the reachability and stability of the synchronous state of a network; and (iii) optimization methods for synchronous networks that can provide low-cost architectures with operational precision, reliability, and security. (C) 2011 Elsevier B.V. All rights reserved.
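
As a toy illustration of point (ii), the sketch below simulates a simple master-slave chain in which each slave node corrects its phase toward the upstream node; the correction gain is a stand-in for the node parameters that determine whether the synchronous state is reached. The chain topology, first-order update, and gain values are assumptions for illustration, not the models analyzed in the paper.

```python
# Toy master-slave clock-distribution chain: each slave node nudges its phase
# toward its upstream node. The correction gain illustrates how node
# parameters affect reaching (or losing) the synchronous state.
import numpy as np

def simulate_chain(n_nodes=5, gain=0.5, steps=200, seed=0):
    rng = np.random.default_rng(seed)
    phase = rng.uniform(-1.0, 1.0, n_nodes)   # initial phase offsets (rad)
    phase[0] = 0.0                            # node 0 is the master reference
    for _ in range(steps):
        new = phase.copy()
        for i in range(1, n_nodes):
            # first-order correction toward the upstream node's phase
            new[i] = phase[i] + gain * (phase[i - 1] - phase[i])
        phase = new
    return np.max(np.abs(np.diff(phase)))     # residual phase misalignment

for gain in (0.3, 1.0, 2.1):                  # gain > 2 makes the loop unstable
    print(f"gain={gain}: residual misalignment = {simulate_chain(gain=gain):.2e}")
```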

Relevance: 30.00%

Abstract:

The single-machine scheduling problem with a common due date and non-identical ready times for the jobs is examined in this work. Performance is measured by the minimization of the weighted sum of the earliness and tardiness penalties of the jobs. Since this problem is NP-hard, the application of constructive heuristics that exploit specific characteristics of the problem to improve their performance is investigated. The proposed approaches are evaluated in a computational comparative study on a set of 280 benchmark test problems with up to 1,000 jobs.
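
As a hedged sketch of the problem setting, the code below computes the weighted earliness/tardiness objective for a given job sequence under a common due date and job ready times, and applies one naive constructive heuristic (ordering by ready time). The instance data are made up, and this is not one of the heuristics proposed in the paper.

```python
# Minimal sketch of the single-machine earliness/tardiness objective with a
# common due date and non-identical ready times, plus one naive constructive
# heuristic. The paper's problem-specific heuristics are not reproduced here.

def weighted_et(sequence, ready, proc, alpha, beta, due):
    """Total weighted earliness + tardiness of a job sequence."""
    t, total = 0, 0.0
    for j in sequence:
        start = max(t, ready[j])          # respect the job's ready time
        t = start + proc[j]               # completion time of job j
        total += alpha[j] * max(0, due - t) + beta[j] * max(0, t - due)
    return total

# Toy instance (4 jobs): ready times, processing times, earliness/tardiness weights.
ready = [0, 2, 4, 1]
proc  = [3, 2, 4, 2]
alpha = [1, 2, 1, 3]   # earliness penalties
beta  = [2, 1, 3, 1]   # tardiness penalties
due   = 8              # common due date

# Naive constructive heuristic: schedule jobs in order of ready time.
seq = sorted(range(4), key=lambda j: ready[j])
print(seq, weighted_et(seq, ready, proc, alpha, beta, due))
```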

Relevance: 30.00%

Abstract:

The wide variety of molecular architectures used in sensors and biosensors and the large amount of data generated with some detection principles have motivated the use of computational methods, such as information visualization techniques, not only to handle the data but also to optimize sensing performance. In this study, we combine projection techniques with micro-Raman scattering and atomic force microscopy (AFM) to address critical issues related to practical applications of electronic tongues (e-tongues) based on impedance spectroscopy. Experimentally, we used sensing units made with thin films of a perylene derivative (AzoPTCD acronym), coating Pt interdigitated electrodes, to detect CuCl2 (Cu2+), methylene blue (MB), and saccharose in aqueous solutions, which were selected because of their distinct molecular sizes and ionic character in solution. The AzoPTCD films were deposited from monolayers to 120 nm via Langmuir-Blodgett (LB) and physical vapor deposition (PVD) techniques. Because the main aspects investigated were how the interdigitated electrodes are coated by thin films (architecture on the e-tongue) and the film thickness, we employed the same material for all sensing units. The capacitance data were projected into a 2D plot using the force scheme method, from which we could infer that at low analyte concentrations the electrical response of the units was determined by the film thickness. Concentrations of 10 μM or higher could be distinguished with thinner films (tens of nanometers at most), which could withstand the impedance measurements without causing significant changes in the Raman signal of the AzoPTCD film-forming molecules. The sensitivity to the analytes appears to be related to adsorption on the film surface, as inferred from Raman spectroscopy data using MB as the analyte and from the multidimensional projections. The analysis presented may serve as a new route to select materials and molecular architectures for novel sensors and biosensors, in addition to suggesting ways to unravel the mechanisms behind the high sensitivity obtained in various sensors.
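
As a hedged illustration of the projection step, the sketch below implements a simplified force-scheme-style layout that iteratively displaces 2D points so their pairwise distances approach those of the original high-dimensional capacitance vectors. The update rule, parameters, and toy data are assumptions; the paper's actual force scheme implementation and measurements are not reproduced.

```python
# Simplified force-scheme-style projection: iteratively displace 2D points so
# that their pairwise distances approach the distances between the original
# high-dimensional capacitance vectors. Illustrative only.
import numpy as np

def force_projection(X, iters=100, frac=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # target distances
    Y = rng.normal(size=(n, 2))                                  # random 2D start
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                v = Y[j] - Y[i]
                d = np.linalg.norm(v) + 1e-9
                # move point j a fraction of the residual (D - d) along v
                Y[j] += frac * (D[i, j] - d) * v / d
    return Y

# Toy "capacitance spectra": 6 samples x 20 frequencies (random placeholder data).
X = np.random.default_rng(1).normal(size=(6, 20))
print(force_projection(X).round(2))
```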

Relevance: 30.00%

Abstract:

Recently, research has shown that the performance of metaheuristics can be affected by population initialization. Opposition-based Differential Evolution (ODE), Quasi-Oppositional Differential Evolution (QODE), and Uniform-Quasi-Opposition Differential Evolution (UQODE) are three state-of-the-art methods that improve the performance of the Differential Evolution algorithm through population initialization and different search strategies. In a different approach to achieve similar results, this paper presents a technique to discover promising regions in the continuous search space of an optimization problem. Using machine-learning techniques, the algorithm, named Smart Sampling (SS), finds regions with a high probability of containing a global optimum. A metaheuristic can then be initialized inside each region to find that optimum. SS and DE were combined (originating the SSDE algorithm) to evaluate our approach, and experiments were conducted on the same set of benchmark functions used by the ODE, QODE, and UQODE authors. Results show that the total number of function evaluations required by DE to reach the global optimum can be significantly reduced, and that the success rate improves, if SS is employed first. These results are also consistent with the literature on the importance of an adequate starting population. Moreover, SS is more effective at finding high-quality initial populations than the other three algorithms that employ oppositional learning. Finally, and most importantly, the performance of SS in finding promising regions is independent of the metaheuristic with which it is combined, making SS suitable for improving the performance of a large variety of optimization techniques. (C) 2012 Elsevier Inc. All rights reserved.
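
As a hedged sketch of the "locate a promising region first, then start the metaheuristic inside it" workflow, the code below replaces the machine-learning step of Smart Sampling with a much cruder stand-in: keep the best uniform samples and take their bounding box as the promising region, then seed an initial population inside it. This is not the SS algorithm; the function, bounds, and parameters are illustrative.

```python
# Crude stand-in for the "find a promising region, then start the metaheuristic
# inside it" idea: sample the search space, keep the best samples, and seed an
# initial population inside their bounding box. This is NOT the Smart Sampling
# algorithm, which uses machine-learning techniques; it only shows the workflow.
import numpy as np

def sphere(x):                        # toy benchmark function
    return float(np.sum(x ** 2))

def promising_box(f, bounds, n_samples=500, keep=0.05, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    samples = rng.uniform(lo, hi, size=(n_samples, len(lo)))
    scores = np.array([f(s) for s in samples])
    best = samples[np.argsort(scores)[: max(1, int(keep * n_samples))]]
    return best.min(axis=0), best.max(axis=0)      # bounding box of best samples

bounds = np.array([[-100.0, 100.0]] * 5)
box_lo, box_hi = promising_box(sphere, bounds)

# Seed a population for any metaheuristic (e.g. DE) inside the promising box.
pop = np.random.default_rng(1).uniform(box_lo, box_hi, size=(30, 5))
print("promising box:", np.round(box_lo, 1), np.round(box_hi, 1))
```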

Relevance: 30.00%

Abstract:

At each outer iteration of standard Augmented Lagrangian methods, one tries to solve a box-constrained optimization subproblem to some prescribed tolerance. In the continuous world, using exact arithmetic, this subproblem is always solvable; therefore, the possibility of finishing the subproblem resolution without satisfying the theoretical stopping conditions is not contemplated in the usual convergence theories. In practice, however, one may not be able to solve the subproblem to the required precision, for different reasons. One of them is that an excessively large penalty parameter can impair the performance of the box-constrained optimization solver. In this paper, a practical strategy for decreasing the penalty parameter in such situations is proposed. More generally, the different decisions that may be taken when, in practice, one is not able to solve the Augmented Lagrangian subproblem are discussed. As a result, an improved Augmented Lagrangian method is presented, which handles numerical difficulties in a satisfactory way while preserving a suitable convergence theory. Numerical experiments involving all the test problems of the CUTEr collection are presented.
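
As a hedged illustration of the decision discussed above, the sketch below runs a simplified Augmented Lagrangian loop on a toy equality-constrained problem, using SciPy's L-BFGS-B as the box-constrained inner solver: the penalty parameter is increased when the subproblem is solved and decreased when the inner solver fails to meet its tolerance. The update rules and tolerances are simplifications, not the strategy of the paper.

```python
# Simplified augmented Lagrangian loop illustrating one decision discussed in
# the abstract: decrease the penalty parameter when the box-constrained
# subproblem cannot be solved to the required precision. Toy problem:
#   minimize (x0 - 2)^2 + (x1 - 1)^2  subject to  x0 + x1 = 1,  0 <= x <= 3.
import numpy as np
from scipy.optimize import minimize

def h(x):                       # equality constraint residual
    return x[0] + x[1] - 1.0

def objective(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def aug_lagrangian(x, lam, rho):
    return objective(x) + lam * h(x) + 0.5 * rho * h(x) ** 2

x, lam, rho = np.array([0.0, 0.0]), 0.0, 10.0
for outer in range(20):
    res = minimize(aug_lagrangian, x, args=(lam, rho),
                   method="L-BFGS-B", bounds=[(0.0, 3.0)] * 2,
                   options={"gtol": 1e-8})
    x = res.x
    if res.success:
        lam += rho * h(x)       # first-order multiplier update
        rho *= 10.0             # standard update: tighten the penalty
    else:
        rho /= 2.0              # inner solver struggled: back off the penalty
    if abs(h(x)) < 1e-8:
        break

print("x =", np.round(x, 4), "  constraint residual =", f"{h(x):.1e}")
```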

Relevance: 30.00%

Abstract:

A thorough search of the sky exposed at the Pierre Auger Cosmic Ray Observatory reveals no statistically significant excess of events in any small solid angle that would be indicative of a flux of neutral particles from a discrete source. The search covers from -90 degrees to +15 degrees in declination using four different energy ranges above 1 EeV (10^18 eV). The method used in this search is more sensitive to neutrons than to photons. The upper limit on a neutron flux is derived for a dense grid of directions for each of the four energy ranges. These results constrain scenarios for the production of ultrahigh-energy cosmic rays in the Galaxy.

Relevance: 30.00%

Abstract:

In this paper we discuss the detection of glucose and triglycerides using information visualization methods to process impedance spectroscopy data. The sensing units contained either lipase or glucose oxidase immobilized in layer-by-layer (LbL) films deposited onto interdigitated electrodes. The optimization consisted of identifying which part of the electrical response, and which combination of sensing units, yielded the best distinguishing ability. It is shown that complete separation can be obtained for a range of concentrations of glucose and triglyceride when the interactive document map (IDMAP) technique is used to project the data into a two-dimensional plot. Most importantly, the optimization procedure can be extended to other types of biosensors, thus increasing the versatility of analysis provided by tailored molecular architectures exploited with various detection principles. (C) 2012 Elsevier B.V. All rights reserved.
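
As a hedged sketch of the optimization step described above, the code below scores combinations of sensing units and frequency windows by a simple between-class/within-class separation ratio. The data are random placeholders and the score is a generic stand-in for the IDMAP-based analysis used in the paper.

```python
# Minimal sketch of the selection step described above: score which frequency
# window and combination of sensing units best separates the analyte classes.
# The separation score (between-class over within-class spread) is a generic
# stand-in, not the IDMAP-based analysis used in the paper.
import itertools
import numpy as np

def separation_score(features, labels):
    classes = np.unique(labels)
    centroids = np.array([features[labels == c].mean(axis=0) for c in classes])
    between = np.mean([np.linalg.norm(a - b)
                       for a, b in itertools.combinations(centroids, 2)])
    within = np.mean([features[labels == c].std(axis=0).mean() for c in classes])
    return between / (within + 1e-12)

# Toy data: 3 sensing units x 6 samples x 50 frequency points (placeholder numbers).
rng = np.random.default_rng(0)
capacitance = rng.normal(size=(3, 6, 50))
labels = np.array([0, 0, 1, 1, 2, 2])            # hypothetical analyte classes

best = max(
    ((units, (f0, f0 + 10))
     for units in itertools.combinations(range(3), 2)
     for f0 in range(0, 40, 10)),
    key=lambda cfg: separation_score(
        np.hstack([capacitance[u][:, cfg[1][0]:cfg[1][1]] for u in cfg[0]]), labels),
)
print("best unit pair and frequency window:", best)
```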

Relevance: 30.00%

Abstract:

Over the past few years, the field of global optimization has been very active, producing different kinds of deterministic and stochastic algorithms for optimization in the continuous domain. Nowadays, the use of evolutionary algorithms (EAs) to solve optimization problems is common practice due to their competitive performance on complex search spaces. EAs are well known for their ability to deal with nonlinear and complex optimization problems. Differential evolution (DE) algorithms are a family of evolutionary optimization techniques that use a rather greedy and less stochastic approach to problem solving than classical evolutionary algorithms. The main idea is to construct, at each generation and for each element of the population, a mutant vector through a mutation operation that adds the difference between randomly selected elements of the population to another element. Owing to its simple implementation, minimal mathematical processing, and good optimization capability, DE has attracted considerable attention. This paper proposes a new approach to solving electromagnetic design problems that combines the DE algorithm with a generator of chaotic sequences. The approach is tested on the design of a loudspeaker model with 17 degrees of freedom to show its applicability to electromagnetic problems. The results show that the DE algorithm with chaotic sequences yields better, or at least similar, results compared with the standard DE algorithm and other evolutionary algorithms available in the literature.
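
As a hedged sketch of combining DE with a chaos generator, the code below runs a standard DE/rand/1/bin loop in which the crossover decisions are driven by a logistic-map sequence instead of a uniform random generator. The benchmark function, parameters, and the choice of where the chaotic sequence enters are assumptions, not the specific scheme or loudspeaker model of the paper.

```python
# DE/rand/1/bin where the crossover decisions are driven by a logistic-map
# chaotic sequence instead of a uniform RNG. Illustrative of "DE + chaotic
# sequences" in general, not the paper's specific scheme.
import numpy as np

def logistic_map(x0=0.37, mu=4.0):
    x = x0
    while True:
        x = mu * x * (1.0 - x)       # chaotic for mu = 4
        yield x

def chaotic_de(f, bounds, pop_size=30, gens=200, F=0.7, CR=0.9, seed=0):
    rng, chaos = np.random.default_rng(seed), logistic_map()
    dim = len(bounds)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.choice([k for k in range(pop_size) if k != i], 3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
            cross = np.array([next(chaos) < CR for _ in range(dim)])
            cross[rng.integers(dim)] = True          # ensure at least one gene crosses
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    return pop[np.argmin(fit)], fit.min()

rastrigin = lambda x: 10 * len(x) + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))
best_x, best_f = chaotic_de(rastrigin, np.array([[-5.12, 5.12]] * 5))
print(best_f)
```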

Relevance: 30.00%

Abstract:

Objective: Sepsis is a common condition encountered in hospital environments. There is no effective treatment for sepsis, and it remains an important cause of death in intensive care units. This study aimed to discuss some methods that are available in clinics, as well as tests that have been recently developed, for the diagnosis of sepsis. Methods: A systematic review was performed through the analysis of the following descriptors: sepsis, diagnostic methods, biological markers, and cytokines. Results: The deleterious effects of sepsis are caused by an imbalance between the invasiveness of the pathogen and the ability of the host to mount an effective immune response. Consequently, the host's immune surveillance fails to eliminate the pathogen, allowing it to spread. Moreover, the release of pro-inflammatory mediators and the inappropriate activation of the coagulation and complement cascades lead to dysfunction of multiple organs and systems, which explains the difficulty in achieving full recovery of the patient. The incidence of sepsis is increasing worldwide due to factors such as the aging population, the larger number of surgeries, and the growing number of microorganisms resistant to existing antibiotics. Conclusion: The search for new diagnostic markers associated with an increased risk of sepsis development, and for molecules that can be correlated with specific stages of sepsis, is becoming necessary. This would allow earlier diagnosis, facilitate characterization of patient prognosis, and help predict the possible evolution of each case. Regrettably, all other markers remain restricted to research units.

Relevance: 30.00%

Abstract:

This work studies the optimization and control of a styrene polymerization reactor. The proposed strategy addresses the case where, because of market conditions and equipment deterioration, the optimal operating point of the continuous reactor shifts significantly over the operating time, and the control system has to search for this optimum while keeping the reactor stable at any attainable point. The approach considered here consists of three layers: Real Time Optimization (RTO), Model Predictive Control (MPC), and a Target Calculation (TC) layer that coordinates the communication between the other two layers and guarantees the stability of the whole structure. The proposed algorithm is simulated with the phenomenological model of a styrene polymerization reactor, which has been widely used as a benchmark for process control. The complete optimization structure for the styrene process, including disturbance rejection, is developed. The simulation results show the robustness of the proposed strategy and its capability to deal with disturbances while the economic objective is optimized.
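
As a hedged skeleton of the three-layer structure, the toy code below has an RTO layer that picks an economically optimal steady-state setpoint, a target-calculation layer that clips it to a reachable target, and a grossly simplified MPC-like tracking step. All models, prices, and gains are placeholders, not the styrene reactor benchmark or the algorithms of the paper.

```python
# Skeleton of the three-layer structure described in the abstract: RTO finds an
# economically optimal steady state, the target-calculation (TC) layer clips it
# to a reachable target, and an MPC-like tracking layer drives the plant there.
# All models and numbers below are toy placeholders, not the styrene benchmark.

def rto(economic_price):
    """Economic layer: pick the steady-state setpoint that maximizes profit."""
    candidates = [0.4, 0.6, 0.8]                       # candidate conversions
    return max(candidates, key=lambda x: economic_price * x - 1.25 * x ** 2)

def target_calculation(setpoint, y_now, max_step=0.05):
    """Reconcile the RTO setpoint with what the control layer can reach now."""
    return y_now + max(-max_step, min(max_step, setpoint - y_now))

def mpc_step(y_now, target, gain=0.8):
    """One step of a (grossly simplified) tracking controller."""
    return y_now + gain * (target - y_now)

y = 0.30                                               # current plant output
for k in range(40):
    sp = rto(economic_price=1.0 if k < 20 else 2.0)    # market conditions change
    tgt = target_calculation(sp, y)
    y = mpc_step(y, tgt)
print(round(y, 3))
```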

Relevance: 30.00%

Abstract:

Current scientific applications produce large amounts of data, whose processing, handling, and analysis require large-scale computing infrastructures such as clusters and grids. In this area, studies aim at improving the performance of data-intensive applications by optimizing data accesses. To achieve this goal, distributed storage systems have adopted techniques of data replication, migration, distribution, and access parallelism. However, the main drawback of those studies is that they do not take application behavior into account when performing data access optimization. This limitation motivated this paper, which applies strategies to support the online prediction of application behavior in order to optimize data access operations on distributed systems, without requiring any information on past executions. To accomplish this goal, the approach organizes application behaviors as time series and then analyzes and classifies those series according to their properties. Based on these properties, the approach selects modeling techniques to represent the series and perform predictions, which are later used to optimize data access operations. This new approach was implemented and evaluated using the OptorSim simulator, sponsored by the LHC-CERN project and widely employed by the scientific community. Experiments confirm that the new approach reduces application execution time by about 50 percent, especially when handling large amounts of data.
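
As a hedged sketch of the "classify the series, then select a model" step, the code below labels a toy access-count series as trending or stationary via a least-squares slope and picks either a linear extrapolation or a mean predictor accordingly. The classification rule, models, and threshold are simplistic stand-ins for the techniques evaluated in the paper.

```python
# Minimal sketch of "classify the access series, then pick a predictor":
# a trending series gets a linear extrapolation, a flat one gets its mean.
# The classification rule and models are simplistic stand-ins, not the
# techniques evaluated in the paper.
import numpy as np

def classify(series):
    t = np.arange(len(series))
    slope = np.polyfit(t, series, 1)[0]
    return "trending" if abs(slope) > 0.1 * np.std(series) else "stationary"

def predict_next(series, horizon=3):
    t = np.arange(len(series))
    if classify(series) == "trending":
        slope, intercept = np.polyfit(t, series, 1)
        future = np.arange(len(series), len(series) + horizon)
        return slope * future + intercept          # linear extrapolation
    return np.full(horizon, np.mean(series))       # flat series: predict the mean

# Toy series of block-access counts per monitoring interval.
accesses = np.array([10, 12, 15, 18, 22, 25, 29, 33], dtype=float)
forecast = predict_next(accesses)
prefetch = forecast > 20                           # e.g. replicate/prefetch hot data
print(np.round(forecast, 1), prefetch)
```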

Relevance: 30.00%

Abstract:

Sensors and actuators based on laminated piezocomposite shells are in increasing demand in the field of smart structures. The distribution of piezoelectric material within the material layers affects the performance of these structures; therefore, its amount, shape, size, placement, and polarization should be considered simultaneously in an optimization problem. In addition, previous works suggest that the concept of a laminated piezocomposite structure including fiber-reinforced composite layers can increase the performance of these piezoelectric transducers; however, the design optimization of such devices has not yet been fully explored. Thus, this work develops a methodology, based on topology optimization techniques, for the static design of laminated piezocomposite shell structures that simultaneously optimizes the piezoelectric material and polarization distributions and the fiber angles of the composite orthotropic layers, which are free to assume different values along the same composite layer. The finite element model is based on laminated piezoelectric shell theory, using the degenerate three-dimensional solid approach and first-order shell theory kinematics, which account for transverse shear deformation and rotary inertia effects. The topology optimization formulation combines the piezoelectric material with penalization and polarization model with discrete material optimization, the design variables being the amount of piezoelectric material and the polarization sign at each finite element, together with the fiber angles. Three different objective functions are formulated for the design of actuators, sensors, and energy harvesters. Results for laminated piezocomposite shell transducers are presented to illustrate the method. Copyright (C) 2012 John Wiley & Sons, Ltd.
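
As a hedged, generic illustration of the kind of material interpolation involved, the sketch below penalizes the amount of piezoelectric material SIMP-style, uses a polarization variable to set the sign of the effective piezoelectric coefficient, and mixes candidate fiber angles with normalized weights in a discrete-material-optimization flavor. The functional forms, exponents, and numbers are assumptions, not the formulation of the paper.

```python
# Generic sketch of a penalization-and-polarization-style interpolation plus a
# discrete-material-style fiber-angle mix. The exact interpolation functions
# and exponents of the paper are not reproduced here.
import numpy as np

def piezo_interpolation(x, phi, e0=1.0, p_x=3.0, p_phi=1.0):
    """Effective piezoelectric coefficient of one element.

    x   in [0, 1]: amount of piezoelectric material (penalized, SIMP-style)
    phi in [0, 1]: polarization variable; 0 -> negative sign, 1 -> positive sign
    """
    return (x ** p_x) * np.sign(2.0 * phi - 1.0) * abs(2.0 * phi - 1.0) ** p_phi * e0

def fiber_stiffness(angles, weights, c_layers):
    """Discrete-material-style mix: weighted choice among candidate fiber angles."""
    w = np.asarray(weights) / np.sum(weights)       # normalized selection variables
    return float(np.dot(w, c_layers)), angles[int(np.argmax(w))]

print(piezo_interpolation(x=0.8, phi=0.1))          # weak, negatively polarized element
print(fiber_stiffness([0, 45, 90], [0.1, 0.7, 0.2], [1.0, 0.8, 0.6]))
```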

Relevance: 30.00%

Abstract:

Electrothermomechanical MEMS are essentially microactuators that operate based on the thermoelastic effect induced by Joule heating of the structure. They can be easily fabricated and require relatively low excitation voltages. However, the actuation time of an electrothermomechanical microdevice is longer than that of devices based on electrostatic and piezoelectric actuation principles. Thus, in this research, we propose an optimization framework, based on the topology optimization method applied to transient problems, to design electrothermomechanical microactuators for reduced response time. The objective is to maximize the time integral of the output displacement of the actuator. The finite element equations that govern the time response of the actuators are provided. Furthermore, the Solid Isotropic Material with Penalization (SIMP) model and Sequential Linear Programming (SLP) are employed, and a smoothing filter is implemented to control the solution. Results for two distinct applications suggest that the proposed approach can provide actuators that are more than 50% faster. (C) 2012 Elsevier B.V. All rights reserved.
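
As a hedged sketch of the smoothing step, the code below applies a standard linear density filter on a 2D grid of design variables, averaging each element's density over neighbours within a radius with linearly decaying weights. This is a common filter form in topology optimization, not necessarily the one implemented in the paper.

```python
# Minimal sketch of a standard linear density (smoothing) filter on a 2D grid
# of design variables: each filtered density is a distance-weighted average of
# its neighbours within a radius. A common form of such filters; the paper's
# exact filter is not reproduced here.
import numpy as np

def density_filter(x, radius=1.5):
    ny, nx = x.shape
    xf = np.zeros_like(x)
    r = int(np.ceil(radius))
    for i in range(ny):
        for j in range(nx):
            wsum, acc = 0.0, 0.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < ny and 0 <= jj < nx:
                        w = max(0.0, radius - np.hypot(di, dj))  # linear weight
                        wsum += w
                        acc += w * x[ii, jj]
            xf[i, j] = acc / wsum
    return xf

x = np.random.default_rng(0).uniform(0, 1, size=(6, 8))   # toy design field
print(np.round(density_filter(x), 2))
```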