937 results for ISE and ITSE optimization


Relevance: 100.00%

Abstract:

The potential energy surface for the first step of the alkaline hydrolysis of methyl acetate was explored by a variety of methods. The conformational search routine within SPARTAN was used to determine the lowest-energy AM1 and PM3 structures for the anionic tetrahedral intermediate. Ab initio single-point and geometry optimization calculations were performed to determine the lowest-energy conformer, and the linear synchronous transit (LST) method was used to provide an initial structure for transition state optimization. Transition states were obtained at the AM1, PM3, 3-21G, and 3-21+G levels of theory. These transition states were compared with the anionic tetrahedral intermediates to examine the assumption that the intermediate is a good model for the transition state. In addition, the Cramer/Truhlar SM3 solvation model was used at the semiempirical level to compare gas-phase and aqueous alkaline hydrolysis of methyl acetate.

Relevance: 100.00%

Abstract:

The intent of the work presented in this thesis is to show that relativistic perturbations should be considered in the same manner as the well-known perturbations currently taken into account in planet-satellite systems. It is also the aim of this research to show that relativistic perturbations are comparable to standard perturbations in specific force magnitude and effects. This work would have been regarded as little more than a curiosity by most engineers until recent advancements in space propulsion methods (e.g., artificial neutron stars, light sails, and continuous propulsion techniques). These cutting-edge technologies have the potential to thrust the human race into interstellar, and hopefully intergalactic, travel in the not-so-distant future. The relativistic perturbations were simulated for two orbit cases: (1) a general orbit and (2) a Molniya-type orbit. The simulations were completed using MATLAB's ode45 integration scheme. The methods used to organize, execute, and analyze these simulations are explained in detail. The results of the simulations are presented in graphical and statistical form. The simulation data reveal that the specific forces arising from the relativistic perturbations do manifest as variations in the classical orbital elements. It is also apparent from the simulated data that these specific forces exhibit magnitudes and effects similar to those of the commonly considered perturbations used in trajectory design, optimization, and maintenance. Given the similarities in behavior between relativistic and non-relativistic perturbations, a case is made for the development of a fully relativistic formulation of the trajectory design and trajectory optimization problems. This new framework would afford the possibility of illuminating new, more optimal solutions to the aforementioned problems that do not arise in current formulations. This type of reformulation has already shown promise: the previously unknown space superhighways arose as an optimal solution when classical astrodynamics was reformulated using geometric mechanics.
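The simulation setup described above can be sketched in a few lines. The fragment below is a minimal illustration, not the thesis's actual code: it propagates a point mass under Newtonian gravity plus a first-order Schwarzschild correction term, using SciPy's `solve_ivp` with the RK45 scheme (the analogue of MATLAB's ode45). The circular low-Earth initial orbit is an assumption chosen for simplicity.

```python
import numpy as np
from scipy.integrate import solve_ivp

GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
C = 299792458.0       # speed of light, m/s

def accel(t, y):
    """State derivative: Newtonian gravity plus a first-order
    Schwarzschild relativistic correction (IERS-style form)."""
    r, v = y[:3], y[3:]
    rn = np.linalg.norm(r)
    a_newton = -GM * r / rn**3
    a_rel = GM / (C**2 * rn**3) * (
        (4 * GM / rn - np.dot(v, v)) * r + 4 * np.dot(r, v) * v
    )
    return np.concatenate([v, a_newton + a_rel])

# Assumed circular LEO-like initial state
r0 = np.array([7000e3, 0.0, 0.0])
v0 = np.array([0.0, np.sqrt(GM / 7000e3), 0.0])
sol = solve_ivp(accel, (0.0, 5400.0), np.concatenate([r0, v0]),
                method="RK45", rtol=1e-10, atol=1e-9)
```

The relativistic term is many orders of magnitude smaller than the Newtonian one, which is why its effect appears only as slow variations in the orbital elements over many revolutions.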

Relevance: 100.00%

Abstract:

This thesis studies the minimization of fuel consumption for a Hybrid Electric Vehicle (HEV) using Model Predictive Control (MPC). The presented MPC-based controller calculates an optimal sequence of control inputs to a hybrid vehicle using the measured plant outputs, the current dynamic states, a system model, system constraints, and an optimization cost function. The MPC controller is developed using MATLAB's Model Predictive Control Toolbox. To evaluate the performance of the presented controller, a power-split hybrid vehicle, the 2004 Toyota Prius, is selected. The vehicle uses a planetary gear set to combine three power components, an engine, a motor, and a generator, and to transfer energy from these components to the vehicle wheels. The planetary gear model is developed based on the Willis formula. The dynamic models of the engine, the motor, and the generator are derived based on their dynamics at the planetary gear. The MPC controller for HEV energy management is validated in the MATLAB/Simulink environment. Both the step-response performance (a 0-60 mph step input) and the driving-cycle tracking performance are evaluated. Two standard driving cycles, the Urban Dynamometer Driving Schedule (UDDS) and the Highway Fuel Economy Driving Schedule (HWFET), are used in the evaluation tests. For the UDDS and HWFET driving cycles, the simulation results (the fuel consumption and the battery state of charge) using the MPC controller are compared with the simulation results using the original vehicle model in Autonomie. The MPC approach shows the feasibility of improving vehicle performance and minimizing fuel consumption.
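The receding-horizon idea behind MPC can be illustrated with a far simpler model than the thesis's Prius powertrain. The sketch below (assumptions: a kinematic longitudinal model, an unconstrained quadratic speed-tracking cost solved by least squares, and a crude clip standing in for input constraints) reproduces the 0-60 mph step-response experiment in miniature.

```python
import numpy as np

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # state x = [position, speed]
B = np.array([[0.0], [dt]])             # input u = commanded acceleration
N = 20                                  # prediction horizon
q_speed, r_u = 10.0, 0.1                # tracking vs. effort weights

def mpc_step(x, v_ref):
    """Solve the horizon-N quadratic tracking problem; apply first input."""
    F = np.zeros((2 * N, 2))            # predictions: X = F x + G U
    G = np.zeros((2 * N, N))
    for k in range(N):
        F[2*k:2*k+2] = np.linalg.matrix_power(A, k + 1)
        for j in range(k + 1):
            G[2*k:2*k+2, j:j+1] = np.linalg.matrix_power(A, k - j) @ B
    sel = np.zeros((N, 2 * N))          # selects the speed component
    for k in range(N):
        sel[k, 2*k + 1] = 1.0
    # Weighted least squares: speed-tracking error plus control effort
    H = np.vstack([np.sqrt(q_speed) * sel @ G, np.sqrt(r_u) * np.eye(N)])
    b = np.concatenate([np.sqrt(q_speed) * (v_ref - sel @ F @ x),
                        np.zeros(N)])
    U = np.linalg.lstsq(H, b, rcond=None)[0]
    return float(np.clip(U[0], -3.0, 3.0))   # crude actuator limit

x = np.zeros(2)                         # 0-60 mph (26.8 m/s) step response
for _ in range(300):
    u = mpc_step(x, 26.8)
    x = A @ x + (B * u).flatten()
```

Only the first input of each optimal sequence is applied before re-solving, which is what makes the controller reactive to disturbances and modeling error.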

Relevance: 100.00%

Abstract:

As microgrid power systems gain prevalence and renewable energy comprises greater and greater portions of distributed generation, energy storage becomes important to offset the higher variance of renewable energy sources and maximize their usefulness. One of the emerging techniques is to utilize a combination of lead-acid batteries and ultracapacitors to provide both short- and long-term stabilization to microgrid systems. The different energy and power characteristics of batteries and ultracapacitors imply that they ought to be utilized in different ways. Traditional linear controls can use these energy storage systems to stabilize a power grid, but cannot effect more complex interactions. This research explores a fuzzy logic approach to microgrid stabilization. The ability of a fuzzy logic controller to regulate a DC bus in the presence of source and load fluctuations, in a manner comparable to traditional linear control systems, is explored and demonstrated. Furthermore, the expanded capabilities (such as storage balancing, self-protection, and battery optimization) of a fuzzy logic system over a traditional linear control system are shown. System simulation results are presented and validated through hardware-based experiments. These experiments confirm the capabilities of the fuzzy logic control system to regulate bus voltage, balance storage elements, optimize battery usage, and effect self-protection.
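The fuzzy-logic mechanism can be shown with a toy single-input regulator, not the controller developed in this work: triangular (and shoulder) membership functions on the DC-bus voltage error, singleton rule consequents, and a weighted-average defuzzification. All breakpoints and output levels below are invented for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function; a == b or b == c makes a shoulder."""
    left = (x - a) / (b - a) if b > a else 1.0
    right = (c - x) / (c - b) if c > b else 1.0
    return max(0.0, min(left, right))

def fuzzy_current_command(v_error):
    """Map DC-bus voltage error (V) to a battery current command (A)."""
    # Membership degrees: error negative / near zero / positive
    mu = {
        "neg":  tri(v_error, -10.0, -10.0, 0.0),   # left shoulder
        "zero": tri(v_error, -10.0, 0.0, 10.0),
        "pos":  tri(v_error, 0.0, 10.0, 10.0),     # right shoulder
    }
    # Singleton consequents: discharge / idle / charge
    out = {"neg": -50.0, "zero": 0.0, "pos": 50.0}
    num = sum(mu[k] * out[k] for k in mu)
    den = sum(mu.values())
    return num / den if den > 0 else 0.0
```

Because overlapping rules blend smoothly, intermediate errors produce intermediate commands (e.g., a 5 V error yields half the full charging current), which is the nonlinearity that lets a fuzzy controller encode behaviors such as storage balancing that a single linear gain cannot.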

Relevance: 100.00%

Abstract:

The assessment of ERα, PgR and HER2 status is routinely performed today to determine the endocrine responsiveness of breast cancer samples. Such determination is usually accomplished by means of immunohistochemistry and, in the case of HER2 amplification, by means of fluorescence in situ hybridization (FISH). The analysis of these markers can be improved by simultaneous measurements using quantitative real-time PCR (qRT-PCR). In this study we compared qRT-PCR results for the assessment of mRNA levels of ERα, PgR, and the members of the human epidermal growth factor receptor family, HER1, HER2, HER3 and HER4. The results were obtained in two independent laboratories using two different methods, SYBR Green I and TaqMan probes, and different primers. By linear regression we demonstrated a good concordance for all six markers. The quantitative mRNA expression levels of ERα, PgR and HER2 also strongly correlated with the respective quantitative protein expression levels prospectively detected by EIA in both laboratories. In addition, HER2 mRNA expression levels correlated well with gene amplification detected by FISH in the same biopsies. Our results indicate that both qRT-PCR methods are robust and sensitive tools for routine diagnostics and are consistent with standard methodologies. The developed simultaneous assessment of several biomarkers is fast and labor-saving and allows optimization of the clinical decision-making process for breast cancer tissue and/or core biopsies.

Relevance: 100.00%

Abstract:

This paper introduces an area- and power-efficient approach for the compressive recording of cortical signals in an implantable system prior to transmission. Recent research on compressive sensing has shown promising results for sub-Nyquist sampling of sparse biological signals. Still, any large-scale implementation of this technique faces critical issues caused by the increased hardware intensity. The cost of implementing compressive sensing in a multichannel system, in terms of area usage, can be significantly higher than that of a conventional data acquisition system without compression. To tackle this issue, a new multichannel compressive sensing scheme is proposed which exploits the spatial sparsity of the signals recorded from the electrodes of the sensor array. The analysis shows that, using this method, the power efficiency is preserved to a great extent while the area overhead is significantly reduced, resulting in an improved power-area product. The proposed circuit architecture is implemented in a UMC 0.18 µm CMOS technology. Extensive performance analysis and design optimization have been done, resulting in a low-noise, compact and power-efficient implementation. The results of simulations and subsequent reconstructions show the possibility of recovering fourfold-compressed intracranial EEG signals with an SNR as high as 21.8 dB, while consuming 10.5 µW of power within an effective area of 250 µm × 250 µm per channel.
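The core compressive-sensing step (sub-Nyquist measurement followed by sparse reconstruction) can be demonstrated in software. The sketch below is not the paper's circuit-level scheme: it uses an assumed random Gaussian sensing matrix and orthogonal matching pursuit to recover a synthetic sparse signal from fourfold-compressed measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 32, 3       # signal length, measurements (4x compression), sparsity

# Synthetic k-sparse test signal
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)

# Random Gaussian sensing matrix with unit-norm columns
Phi = rng.normal(size=(m, n))
Phi /= np.linalg.norm(Phi, axis=0)
y = Phi @ x                # compressed measurements

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick the atom most
    correlated with the residual, then re-fit on the chosen support."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        residual = y - Phi[:, idx] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(Phi, y, k)
```

With the sparsity level well below the number of measurements, recovery is exact with overwhelming probability, which mirrors why fourfold compression can still yield the high reconstruction SNRs reported above.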

Relevance: 100.00%

Abstract:

INTRODUCTION Spinal disc herniation, lumbar spinal stenosis and spondylolisthesis are known to be leading causes of lumbar back pain. The cost of low back pain management and related operations is continuously increasing in the healthcare sector. There are many studies regarding complications after spine surgery, but little is known about the factors predicting the length of stay in hospital. The purpose of this study was to identify these factors in lumbar spine surgery in order to adapt the postoperative treatment. MATERIAL AND METHODS The current study was carried out as a post hoc analysis on the basis of the German spine registry. Patients who underwent lumbar spine surgery by posterior surgical access and with posterior fusion and/or rigid stabilization were included; procedures with dynamic stabilization were excluded. Patient characteristics were tested for association with length of stay (LOS) using bivariate and multivariate analyses. RESULTS A total of 356 patients met the inclusion criteria. The average age of all patients was 64.6 years and the mean LOS was 11.9 ± 6.0 days, with a range of 2-44 days. Independent factors influencing LOS were increased age at the time of surgery, higher body mass index, male gender, blood transfusion of 1-2 erythrocyte concentrates, and the presence of surgical complications. CONCLUSION Identification of predictive factors for prolonged LOS may allow for estimation of patient hospitalization time and for optimization of postoperative care. In individual cases this may result in a reduction of the LOS.
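The kind of multivariable analysis described can be sketched as an ordinary least-squares fit of LOS against candidate predictors. All data below are synthetic and the effect sizes are invented for illustration, since the registry data are not public; only the cohort size and mean age echo the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 356                                 # cohort size from the study

# Synthetic predictors (illustrative only, not the registry data)
age = rng.normal(64.6, 10.0, n)         # years
bmi = rng.normal(28.0, 4.0, n)          # kg/m^2
transfusion = rng.integers(0, 2, n)     # 1-2 units transfused (yes/no)
complication = rng.integers(0, 2, n)    # surgical complication (yes/no)

# Assumed ground-truth effects used to generate synthetic LOS (days)
los = (2.0 + 0.08 * age + 0.1 * bmi + 2.5 * transfusion
       + 3.0 * complication + rng.normal(0.0, 2.0, n))

# Multivariable OLS: LOS ~ intercept + age + BMI + transfusion + complication
X = np.column_stack([np.ones(n), age, bmi, transfusion, complication])
beta, *_ = np.linalg.lstsq(X, los, rcond=None)
```

With enough patients the fitted coefficients approximate the generating effects, which is the logic behind reporting "independent factors influencing LOS" after adjusting for the other covariates.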

Relevance: 100.00%

Abstract:

Development of interfaces for sample introduction from high pressures is important for real-time online hyphenation of chromatographic and other separation devices with mass spectrometry (MS) or accelerator mass spectrometry (AMS). Momentum separators can reduce unwanted low-density gases and introduce the analyte into the vacuum. In this work, the axial jet separator, a new momentum interface, is characterized by theory and empirical optimization. The mathematical model describes the different axial penetration of the components of a jet-gas mixture and explains the empirical results for injections of CO2 in helium into MS and AMS instruments. We show that the performance of the new interface is sensitive to the nozzle size, in good qualitative agreement with the mathematical model. Smaller nozzle sizes are preferable due to their higher inflow capacity. The CO2 transmission efficiency of the interface into an MS instrument is ~14% (CO2/helium separation factor of 2.7). The interface receives and delivers flows of ~17.5 mL/min and ~0.9 mL/min, respectively. For the interfaced AMS instrument, the ionization and overall efficiencies are 0.7-3% and 0.1-0.4%, respectively, for CO2 amounts of 4-0.6 µg C, which is only slightly lower than for conventional systems using intermediate trapping. The ionization efficiency depends on the carbon mass flow in the injected pulse and is suppressed at high CO2 flows. Relative to a conventional jet separator, the transmission efficiency of the axial jet separator is lower, but its performance is less sensitive to misalignments.

Relevance: 100.00%

Abstract:

A recent study by Rozvany and Sokół discussed an important topic in structural design: the allowance for support costs in the optimization process. This paper examines a frequently used kind of support (a simple foundation with horizontal reaction provided by friction) that appears not to be covered by the Authors' approach. A simple example is examined to illustrate the case and to apply both the Authors' method and the standard design method.

Relevance: 100.00%

Abstract:

Single-core processors have reached their maximum practical clock speeds; new multicore architectures provide an alternative way forward. The design of decoding applications running on top of these multicore platforms, and their optimization to exploit all of the system's computational power, is crucial to obtaining the best results. Since development at the integration level of printed circuit boards is increasingly difficult to optimize due to physical constraints and the inherent increase in power consumption, the development of multiprocessor architectures is becoming the new Holy Grail. In this sense, it is crucial to develop applications that can run on the new multi-core architectures and to find distributions that maximize the potential use of the system. Today most commercial electronic devices are built around embedded systems, and these devices have recently begun to incorporate multi-core processors. Task management across multiple cores/processors is not a trivial issue, and good task/actor scheduling can yield significant improvements in efficiency and in processor power consumption. Scheduling the data flows between the actors that implement an application aims to harness multi-core architectures for more types of applications, with an explicit expression of parallelism in the application. On the other hand, the recent development of the MPEG Reconfigurable Video Coding (RVC) standard allows the reconfiguration of video decoders. RVC is a flexible standard compatible with MPEG-developed codecs, making it the ideal tool to integrate into new multimedia terminals for decoding video sequences. With the new versions of the Open RVC-CAL Compiler (Orcc), a static mapping of the actors that implement the functionality of the application can be done once the application executable has been generated. This static mapping must be done for each core configuration available on the working platform.
An embedded system with a dual-core ARMv7 processor was chosen. This platform allows us to run the desired tests, measure the improvement over execution on a single core, and contrast both results with a PC-based multiprocessor system.
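The static-mapping problem can be illustrated with the classic longest-processing-time greedy heuristic: sort actors by estimated load and repeatedly assign each to the least-loaded core. The actor names and costs below are hypothetical, not taken from any Orcc-generated decoder.

```python
def map_actors(actor_loads, n_cores=2):
    """Greedily assign each actor (heaviest first) to the least-loaded core."""
    cores = [[] for _ in range(n_cores)]
    load = [0.0] * n_cores
    for name, cost in sorted(actor_loads.items(), key=lambda kv: -kv[1]):
        target = load.index(min(load))     # currently least-loaded core
        cores[target].append(name)
        load[target] += cost
    return cores, load

# Hypothetical decoder actors with estimated per-frame costs
actors = {"parser": 3.0, "idct": 5.0, "motion_comp": 4.0,
          "dequant": 2.0, "merger": 1.0}
cores, load = map_actors(actors, n_cores=2)
```

Because the mapping is fixed before execution, the same computation can be repeated per target platform (here, two ARMv7 cores versus a PC-class multiprocessor) to choose the distribution that balances the load best.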

Relevance: 100.00%

Abstract:

It is generally recognized that information about the runtime cost of computations can be useful for a variety of applications, including program transformation, granularity control during parallel execution, and query optimization in deductive databases. Most of the work to date on compile-time cost estimation of logic programs has focused on the estimation of upper bounds on costs. However, in many applications, such as parallel implementations on distributed-memory machines, one would prefer to work with lower bounds instead. The problem with estimating lower bounds is that, in general, it is necessary to account for the possibility of failure of head unification, leading to a trivial lower bound of 0. In this paper, we show how, given type and mode information about the procedures in a logic program, it is possible to (semi-automatically) derive nontrivial lower bounds on their computational costs. We also discuss the cost analysis for the special and frequent case of divide-and-conquer programs and show how, as a pragmatic short-term solution, it may be possible to obtain useful results simply by identifying and treating divide-and-conquer programs specially.
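As a concrete (and hypothetical) illustration of the divide-and-conquer case, consider a predicate that halves its input and does at least linear work recombining the results; its lower-bound cost recurrence can be tabulated directly:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def cost_lb(n):
    """Lower bound on cost (resolution steps) for a hypothetical
    divide-and-conquer predicate: two half-size calls plus linear work."""
    if n <= 1:
        return 1          # assumed base-case cost
    return 2 * cost_lb(n // 2) + n

print(cost_lb(1024))      # prints 11264, i.e. n*log2(n) + n for n = 2^10
```

For powers of two this recurrence solves exactly to n·log₂(n) + n, so recognizing the divide-and-conquer shape yields a Θ(n log n) lower bound where a naive failure-aware analysis would only report 0.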

Relevance: 100.00%

Abstract:

Information about the computational cost of programs is potentially useful for a variety of purposes, including selecting among different algorithms, guiding program transformations, granularity control and mapping decisions in parallelizing compilers, and query optimization in deductive databases. Cost analysis of logic programs is complicated by nondeterminism: on the one hand, procedures can return multiple solutions, making it necessary to estimate the number of solutions in order to give nontrivial upper-bound cost estimates; on the other hand, the possibility of failure has to be taken into account when estimating lower bounds. Here we discuss techniques to address these problems to some extent.

Relevance: 100.00%

Abstract:

Time series are proficiently converted into graphs via the horizontal visibility (HV) algorithm, which prompts interest in its capability for capturing the nature of different classes of series in a network context. We have recently shown [B. Luque et al., PLoS ONE 6, 9 (2011)] that dynamical systems can be studied from a novel perspective via the use of this method. Specifically, the period-doubling and band-splitting attractor cascades that characterize unimodal maps transform into families of graphs that turn out to be independent of map nonlinearity or other particulars. Here, we provide an in-depth description of the HV treatment of the Feigenbaum scenario, together with analytical derivations relating to the degree distributions, mean distances, clustering coefficients, etc., associated with the bifurcation cascades and their accumulation points. We describe how the resultant families of graphs can be framed into a renormalization group scheme in which fixed-point graphs reveal their scaling properties. These fixed points are then re-derived from an entropy optimization process defined for the graph sets, confirming a suggested connection between renormalization group and entropy optimization. Finally, we provide analytical and numerical results for the graph entropy and show that it emulates the Lyapunov exponent of the map independently of its sign.
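The HV criterion itself is simple to state: two data points are connected whenever every value strictly between them is lower than both. A direct quadratic-time sketch (illustrative, not the authors' implementation):

```python
def horizontal_visibility_graph(series):
    """Return edges (i, j), i < j, such that every value strictly
    between positions i and j lies below both endpoints."""
    n = len(series)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j])
                   for k in range(i + 1, j)):
                edges.append((i, j))
    return edges

edges = horizontal_visibility_graph([3, 1, 2, 4])
```

Adjacent points are always connected (the in-between condition is vacuous), so the HV graph of any series contains a Hamiltonian path, and higher-degree nodes correspond to local maxima of the series; it is these degree patterns that encode the period-doubling structure studied above.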

Relevance: 100.00%

Abstract:

Energy consumption in data centers is nowadays a critical concern because of its dramatic environmental and economic impact. Over the last years, several approaches have been proposed to tackle the energy/cost optimization problem, but most of them have failed to provide an analytical model targeting both the static and dynamic optimization domains of complex heterogeneous data centers. This paper proposes and solves an optimization problem for the energy-driven configuration of a heterogeneous data center. It also proposes a new mechanism for task allocation and workload distribution. The combination of both approaches outperforms previously published results in the field of energy minimization in heterogeneous data centers and opens a promising area of research.
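The task-allocation side can be illustrated with a greedy heuristic over an assumed two-type power model (idle power plus load-proportional dynamic power; all figures are invented, not from the paper): each task goes wherever its marginal power cost is lowest, powering on a new server only when that is cheaper than reusing one that is already on.

```python
def allocate(tasks, types):
    """Greedy energy-aware placement; returns total electrical power."""
    on = []                      # powered-on servers: {"type": name, "free": slots}
    power = 0.0
    for load in tasks:
        # Marginal cost of reusing an already-on server: dynamic power only
        fit = [(types[s["type"]]["p_dyn"] * load, s)
               for s in on if s["free"] >= load]
        # Marginal cost of powering on the best new server: idle + dynamic
        name = min(types, key=lambda t: types[t]["p_idle"]
                   + types[t]["p_dyn"] * load)
        new_cost = types[name]["p_idle"] + types[name]["p_dyn"] * load
        if fit and min(fit, key=lambda f: f[0])[0] <= new_cost:
            cost, srv = min(fit, key=lambda f: f[0])
            srv["free"] -= load
            power += cost
        else:
            on.append({"type": name, "free": types[name]["capacity"] - load})
            power += new_cost
    return power

# Hypothetical heterogeneous server types (power in watts, capacity in slots)
types = {
    "efficient": {"p_idle": 50.0, "p_dyn": 1.0, "capacity": 4},
    "fast":      {"p_idle": 120.0, "p_dyn": 0.5, "capacity": 8},
}
total = allocate([2, 2, 2, 2], types)
```

Consolidating load onto already-on machines amortizes their idle power, which is the basic lever any static/dynamic energy model for a heterogeneous data center must capture.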