945 results for Linear Multi-step Formulae
Abstract:
This project develops an electronic system to vary the intake geometry of the engine of a single-seater competing in Formula SAE. Formula SAE is a car-design competition for students organized by the Society of Automotive Engineers (SAE). The competition fosters technological innovation in the automotive field and lets students take part in a real engineering project whose objective is to achieve competitive results while complying with a set of requirements. Varying the engine's intake geometry improves the car's performance by raising the engine's output torque, and in a competition setting any improvement to the vehicle can be decisive. The goal of the project is to achieve this variation by controlling the length of the air intake pipes ("runners") of the combustion engine with a linear stepper motor. The length of the runners is adjusted using the information provided by the engine-speed sensor and the throttle-position sensor, and the system is integrated into the vehicle's CAN bus so that the measured information is shared with the other modules. A preliminary study clarifies the general aspects of the work, the implementation options, and the background knowledge needed for its development. The proposed solution is based on controlling the linear stepper motor with the PIC32MX795F512L, a 32-bit microcontroller from Microchip that provides an integrated CAN module and the peripherals used to read the sensors and to drive the stepper motor through the Texas Instruments DRV8805 driver.
The work therefore proceeds along two lines: a software part, in which the control system is programmed using Microchip's MPLAB X IDE, and a hardware part, in which a PCB and the signal-conditioning circuits connecting the microcontroller to the sensors, the driver, the stepper motor and the CAN bus are designed using OrCAD 9.2/Layout. The measurements obtained from the sensors and the CAN bus traffic are verified with the Microchip MCP2515 CAN Bus Monitor Demo Board development kit, which allows the frames on the CAN bus to be inspected and new frames to be injected.
Abstract:
We demonstrate the generation of complete and playable card games using evolutionary algorithms. Card games are represented in a previously devised card game description language, a context-free grammar whose syntax allows us to use grammar-guided genetic programming. Candidate card games are evaluated through a cascading evaluation function, a multi-step process in which games with undesired properties are progressively weeded out. Three representative examples of generated games are analysed. We observed that although these games are reasonably balanced and have skill elements, they are not yet entertaining for human players. The particular shortcomings of the examples are discussed with regard to how the generative process could be made to produce quality games.
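The cascading evaluation described above can be sketched as a chain of progressively stricter checks. The stage predicates, weights, and game fields below are invented placeholders, not the paper's actual fitness measures:

```python
# Hypothetical sketch of a cascading evaluation function: candidate games are
# scored by a sequence of checks, and a game is weeded out (its accumulated
# fitness frozen) as soon as it fails one stage.
def cascading_fitness(game, stages):
    """stages: ordered (check, weight) pairs; check(game) -> score in [0, 1]."""
    fitness = 0.0
    for check, weight in stages:
        score = check(game)
        if score == 0.0:            # undesired property found: weed the game out
            return fitness
        fitness += weight * score
    return fitness

# Toy stages standing in for the paper's checks (playability, balance, skill).
stages = [
    (lambda g: 1.0 if g["terminates"] else 0.0, 1.0),        # playable at all?
    (lambda g: 1.0 - 2 * abs(g["p1_win_rate"] - 0.5), 2.0),  # balance
    (lambda g: g["skill_vs_random"], 3.0),                   # skill element
]

good = {"terminates": True, "p1_win_rate": 0.5, "skill_vs_random": 0.75}
broken = {"terminates": False, "p1_win_rate": 0.5, "skill_vs_random": 0.75}
print(cascading_fitness(good, stages))    # 1.0 + 2.0 + 2.25 = 5.25
print(cascading_fitness(broken, stages))  # fails the first stage: 0.0
```

Ordering stages from cheapest to most expensive means hopeless candidates never reach the costly simulation-based checks.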
Abstract:
This paper proposes a novel combination of artificial intelligence planning and other techniques for improving decision-making in the context of multi-step multimedia content adaptation. In particular, it describes a method that allows decision-making (selecting the adaptation to perform) in situations where third-party pluggable multimedia conversion modules are involved and the multimedia adaptation planner does not know their exact adaptation capabilities. In this approach, the multimedia adaptation planner module is only responsible for a part of the required decisions; the pluggable modules make additional decisions based on different criteria. We demonstrate that partial decision-making is not only attainable, but also introduces advantages with respect to a system in which these conversion modules are not capable of providing additional decisions. This means that transferring decisions from the multi-step multimedia adaptation planner to the pluggable conversion modules increases the flexibility of the adaptation. Moreover, by allowing conversion modules to be only partially described, the range of problems that these modules can address increases, while significantly decreasing both the description length of the adaptation capabilities and the planning decision time. Finally, we specify the conditions under which knowing the partial adaptation capabilities of a set of conversion modules will be enough to compute a proper adaptation plan.
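A minimal sketch of this division of labour, with invented names and formats (not the paper's actual architecture): the planner decides only which pluggable converters to chain, based on their partially described input/output capabilities, while each module makes its remaining decisions itself when invoked:

```python
from collections import deque

# Illustrative sketch (names and formats invented, not the paper's API): the
# planner chains converters by their declared capabilities; each module's
# `decide` callable represents the additional decisions it makes on its own.
class Converter:
    def __init__(self, name, accepts, produces, decide):
        self.name, self.accepts, self.produces = name, accepts, produces
        self.decide = decide  # module-local decision, opaque to the planner

def plan(source_fmt, target_fmt, modules):
    """Breadth-first search over formats; returns a converter chain or None."""
    queue, seen = deque([(source_fmt, [])]), {source_fmt}
    while queue:
        fmt, path = queue.popleft()
        if fmt == target_fmt:
            return path
        for m in modules:
            if fmt in m.accepts and m.produces not in seen:
                seen.add(m.produces)
                queue.append((m.produces, path + [m]))
    return None

modules = [
    Converter("demux", {"mpeg2"}, "raw-video", lambda ctx: {}),
    Converter("encode", {"raw-video"}, "h264",
              # the module, not the planner, picks the bitrate
              lambda ctx: {"bitrate": min(ctx["bandwidth"], 2_000_000)}),
]
path = plan("mpeg2", "h264", modules)
print([m.name for m in path])                     # ['demux', 'encode']
print(path[-1].decide({"bandwidth": 1_500_000}))  # {'bitrate': 1500000}
```

Because the planner never inspects `decide`, a module's capability description stays short, which mirrors the paper's point about shorter descriptions and faster planning.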
Abstract:
Molecular analysis of invasive breast cancer and its precursors has furthered our understanding of breast cancer progression. In the past few years, new multi-step pathways of breast cancer progression have been delineated through genotypic-phenotypic correlations. Nuclear grade, more than any other pathological feature, is strongly associated with the number and pattern of molecular genetic abnormalities in breast cancer cells. Thus, there are two distinct major pathways to the evolution of low- and high-grade invasive carcinomas: whilst the former consistently show oestrogen receptor (ER) and progesterone receptor (PgR) positivity and 16q loss, the latter are usually ER/PgR-negative and show Her-2 over-expression/amplification and complex karyotypes. The boundaries between the evolutionary pathways of well-differentiated/low-grade ductal and lobular carcinomas have been blurred, with changes in E-cadherin expression being one of the few distinguishing features between the two. In addition, lesions long thought to be precursors of breast carcinomas, such as hyperplasia of usual type, are currently considered mere risk indicators, whilst columnar cell lesions are now implicated as non-obligate precursors of atypical ductal hyperplasia (ADH) and well-differentiated ductal carcinoma in situ (DCIS). However, only through the combination of comprehensive morphological analysis and cutting-edge molecular tools can this knowledge be translated into clinical practice and patient management. Copyright (C) 2005 Pathological Society of Great Britain and Ireland. Published by John Wiley & Sons, Ltd.
Abstract:
A novel one-pot process has been developed for the preparation of PbS nanocrystals in the conjugated polymer poly[2-methoxy-5-(2-ethylhexyloxy)-p-phenylenevinylene] (MEH-PPV). Current techniques for making such composite materials rely on synthesizing the nanocrystals and the conducting polymer separately and subsequently mixing them. This multi-step technique has two serious drawbacks: the templating surfactant must be removed before mixing, and co-solvent incompatibility causes aggregation. In our method, we eliminate the need for an initial surfactant by using the conducting polymer to terminate and template nanocrystal growth. Additionally, the final product is soluble in a single solvent. We present materials analysis showing that PbS nanocrystals can be grown directly in a conducting polymer, that the resulting composite is highly ordered, and that the nanocrystal size can be controlled.
Abstract:
Spatial data mining has recently emerged from a number of real applications, such as real-estate marketing, urban planning, weather forecasting, medical image analysis, and road traffic accident analysis. It demands efficient solutions for many new, expensive, and complicated problems. In this paper, we investigate the problem of evaluating the top k distinguished "features" for a "cluster" based on weighted proximity relationships between the cluster and the features. We measure proximity in an average fashion to address possibly nonuniform data distribution in a cluster. Combining a standard multi-step paradigm with new lower and upper proximity bounds, we present an efficient algorithm to solve the problem. The algorithm is implemented in several different modes. Our experimental results compare these modes and illustrate the efficiency of the algorithm.
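The multi-step (filter-and-refine) paradigm with lower bounds can be sketched as follows; the feature distances and the fixed 0.25 bound gap are toy stand-ins for the paper's weighted average proximity and its derived bounds:

```python
import heapq

# Sketch of the multi-step (filter-and-refine) paradigm with invented data:
# a cheap lower bound on each feature's average distance to the cluster
# prunes candidates before the expensive exact proximity is computed.
calls = {"exact": 0}
dist = {"school": 0.5, "station": 1.0, "park": 1.5, "mall": 3.0}

def exact(f):
    calls["exact"] += 1        # stands in for an expensive spatial aggregation
    return dist[f]

def lower(f):
    return dist[f] - 0.25      # stands in for a cheap lower proximity bound

def topk_features(features, k, lower, exact):
    best = []                                  # max-heap of (-distance, feature)
    for f in sorted(features, key=lower):      # filter step: cheap bounds first
        if len(best) == k and lower(f) >= -best[0][0]:
            break                              # every later candidate is pruned
        heapq.heappush(best, (-exact(f), f))   # refine step: exact proximity
        if len(best) > k:
            heapq.heappop(best)
    return sorted((-nd, f) for nd, f in best)

print(topk_features(dist, 2, lower, exact))  # [(0.5, 'school'), (1.0, 'station')]
print(calls["exact"])                        # only 2 of the 4 exact distances computed
```

The tighter the lower bound, the earlier the loop terminates, which is exactly why the paper's new proximity bounds matter.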
Abstract:
Algae are a potential new biomass for energy production, but there is limited information on their pyrolysis and kinetics. The main aim of this thesis is to investigate the pyrolytic behaviour and kinetics of Chlorella vulgaris, a green microalga. Under pyrolysis conditions, these microalgae show capabilities comparable to terrestrial biomass for energy and chemicals production, and the evidence from a preliminary pyrolysis in an intermediate pilot-scale reactor supports their applicability in existing pyrolysis reactors. Thermal decomposition of Chlorella vulgaris occurs over a wide temperature range (200-550°C) through multi-step reactions. To evaluate the kinetic parameters of the pyrolysis process, two approaches, isothermal and non-isothermal experiments, are applied in this work. A newly developed Pyrolysis-Mass Spectrometry (Py-MS) technique has the potential for isothermal measurements with a short run time and a small sample-size requirement; the equipment and procedure are assessed by kinetic evaluation of the thermal decomposition of polyethylene and lignocellulose-derived materials (cellulose, hemicellulose, and lignin). For the non-isothermal experiments, a Thermogravimetry-Mass Spectrometry (TG-MS) technique is used. Evolved gas analysis provides information on the evolution of volatiles, and these data lead to a multi-component model. The kinetic triplets (apparent activation energy, pre-exponential factor, and apparent reaction order) from the isothermal experiments are 57 kJ/mol, log A = 5.32 (A in min-1), 1.21-1.45 for the low-temperature region; 9 kJ/mol, log A = 1.75, 1.45 for the middle region; and 40 kJ/mol, log A = 3.88, 1.45-1.15 for the high-temperature region.
The kinetic parameters from the non-isothermal experiments vary depending on the different fractions in the algal biomass: apparent activation energies range over 73-207 kJ/mol, pre-exponential factors over 5-16 (log A, min-1), and apparent reaction orders over 1.32-2.00. The kinetic procedures reported in this thesis can be applied to other kinds of biomass and algae in future work.
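A kinetic triplet of this form defines an Arrhenius rate constant k(T) = A·exp(−Ea/(R·T)). The sketch below evaluates it for the low-temperature region using the isothermal values quoted above; the sample temperatures are our own choice within the reported 200-550°C decomposition range:

```python
import math

# Arrhenius rate constant from the reported low-temperature isothermal triplet:
# Ea = 57 kJ/mol, log10 A = 5.32 with A in min^-1 (values from the abstract).
R = 8.314           # gas constant, J/(mol*K)
Ea = 57e3           # apparent activation energy, J/mol
A = 10 ** 5.32      # pre-exponential factor, min^-1

def k(T_celsius):
    T = T_celsius + 273.15                 # convert to kelvin
    return A * math.exp(-Ea / (R * T))     # k(T) = A * exp(-Ea / RT)

for T in (250, 350):                       # illustrative temperatures only
    print(f"k({T} C) = {k(T):.3e} min^-1")
```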
Abstract:
Biomass-To-Liquid (BTL) is one of the most promising low-carbon processes available to support the expanding transportation sector. This multi-step process produces hydrocarbon fuels from biomass, the so-called "second generation biofuels" that, unlike first generation biofuels, can make use of a wider range of biomass feedstock than just plant oils and sugar/starch components. A BTL process based on gasification has yet to be commercialized. This work focuses on the techno-economic feasibility of nine BTL plants. The scope was limited to hydrocarbon products, as these can be readily incorporated and integrated into conventional markets and supply chains. The evaluated BTL systems were based on pressurised oxygen gasification of wood biomass or bio-oil and were characterised by different fuel synthesis processes: Fischer-Tropsch synthesis, the Methanol to Gasoline (MTG) process and the Topsoe Integrated Gasoline (TIGAS) synthesis. This was the first time that these three fuel synthesis technologies were compared in a single, consistent evaluation. The selected process concepts were modelled using the process simulation software IPSEpro to determine mass balances, energy balances and product distributions. For each BTL concept, a cost model was developed in MS Excel to estimate capital, operating and production costs. An uncertainty analysis based on the Monte Carlo statistical method was also carried out to examine how uncertainty in the input parameters of the cost model affects its output (i.e. the production cost). This was the first time that an uncertainty analysis was included in a published techno-economic assessment of BTL systems. It was found that bio-oil gasification cannot currently compete with solid biomass gasification, owing to the lower efficiencies and higher costs associated with the additional thermal conversion step of fast pyrolysis.
Fischer-Tropsch synthesis was the most promising fuel synthesis technology for commercial production of liquid hydrocarbon fuels, since it achieved higher efficiencies and lower costs than TIGAS and MTG. None of the BTL systems were competitive with conventional fossil fuel plants. However, if the government tax take were reduced by approximately 33%, or a subsidy of £55/t dry biomass were available, transport biofuels could be competitive with conventional fuels. Large-scale biofuel production may be possible in the long term through subsidies, fuel price rises and legislation.
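A Monte Carlo uncertainty analysis of this kind can be sketched as repeated sampling of the uncertain cost inputs; every number and the toy cost model below are invented placeholders, not the thesis's data:

```python
import random

# Monte Carlo uncertainty analysis in the spirit of the study: uncertain
# inputs are sampled repeatedly and the cost model is re-evaluated to obtain
# a production-cost distribution rather than a single point estimate.
random.seed(1)

def production_cost(capital, feedstock, efficiency):
    # toy model: annualised capital charge plus feedstock, per unit of fuel out
    return (0.1 * capital + feedstock) / efficiency

samples = []
for _ in range(10_000):
    capital = random.uniform(300, 500)       # uncertain capital cost
    feedstock = random.uniform(40, 80)       # uncertain feedstock price
    efficiency = random.uniform(0.45, 0.55)  # uncertain conversion efficiency
    samples.append(production_cost(capital, feedstock, efficiency))

samples.sort()
p5, p50, p95 = (samples[int(f * len(samples))] for f in (0.05, 0.50, 0.95))
print(f"median cost {p50:.1f}, 90% interval [{p5:.1f}, {p95:.1f}]")
```

Reporting the spread between the 5th and 95th percentiles, rather than a single cost figure, is what distinguishes this kind of assessment from a deterministic one.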
Abstract:
A continuous multi-step synthesis of 1,2-diphenylethane was performed sequentially in a structured compact reactor. This process involved a Heck C-C coupling reaction followed by the addition of hydrogen to reduce the intermediate obtained in the first step. Both reactions were catalysed by microspherical carbon-supported Pd catalysts. Owing to the integration of the micro-heat exchanger, the static mixer and the mesoscale packed-bed reaction channel, the compact reactor proved to be an intensified tool for promoting the reactions. In comparison with a batch reactor, the flow process in the compact reactor was more efficient because: (i) the reaction time was significantly reduced (ca. 7 min versus several hours), (ii) no additional ligands were used and (iii) the reaction was run at lower operational pressure and temperature. Pd leached in the Heck reaction step was shown to be effectively recovered in the following hydrogenation section, and the catalytic activity of the system could be mostly retained by reverse-flow operation. © 2009 Elsevier Inc. All rights reserved.
Abstract:
Cleavage by the proteasome is responsible for generating the C terminus of T-cell epitopes. Modeling the process of proteasome cleavage as part of a multi-step algorithm for T-cell epitope prediction will reduce the number of non-binders and increase the overall accuracy of the predictive algorithm. Quantitative matrix-based models for prediction of the proteasome cleavage sites in a protein were developed using a training set of 489 naturally processed T-cell epitopes (nonamer peptides) associated with HLA-A and HLA-B molecules. The models were validated using an external test set of 227 T-cell epitopes. The performance of the models was good, identifying 76% of the C-termini correctly. The best model of proteasome cleavage was incorporated as the first step in a three-step algorithm for T-cell epitope prediction, where subsequent steps predicted TAP affinity and MHC binding using previously derived models.
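A quantitative-matrix cleavage model of this general shape can be sketched as an additive position-specific scoring scheme; the weights below are random toys, not the trained matrices from the paper:

```python
import random

# Minimal illustration of a quantitative-matrix cleavage model with toy
# weights (random, NOT the paper's trained matrices): each residue at each
# position before a putative cleavage point contributes an additive weight,
# and the summed score is thresholded to call a C-terminal cleavage site.
AMINO = "ACDEFGHIKLMNPQRSTVWY"
random.seed(7)
# positions P4..P1 upstream of the cleavage point, one weight per residue
MATRIX = [{aa: random.uniform(-1, 1) for aa in AMINO} for _ in range(4)]

def cleavage_score(window):
    """window: 4-residue string ending at the candidate C terminus."""
    return sum(MATRIX[i][aa] for i, aa in enumerate(window))

def predict_cleavage_sites(protein, threshold=0.5):
    return [i + 4 for i in range(len(protein) - 3)
            if cleavage_score(protein[i:i + 4]) > threshold]

protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
print(predict_cleavage_sites(protein))  # positions whose preceding 4-mer scores high
```

In the real model the weights would be derived from the 489 naturally processed epitopes rather than drawn at random, and the window would cover the positions found informative in training.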
Abstract:
Background - The main processing pathway for MHC class I ligands involves degradation of proteins by the proteasome, followed by transport of products by the transporter associated with antigen processing (TAP) to the endoplasmic reticulum (ER), where peptides are bound by MHC class I molecules, and then presented on the cell surface by MHCs. The whole process is modeled here using an integrated approach, which we call EpiJen. EpiJen is based on quantitative matrices, derived by the additive method, and applied successively to select epitopes. EpiJen is available free online. Results - To identify epitopes, a source protein is passed through four steps: proteasome cleavage, TAP transport, MHC binding and epitope selection. At each stage, different proportions of non-epitopes are eliminated. The final set of peptides represents no more than 5% of the whole protein sequence and will contain 85% of the true epitopes, as indicated by external validation. Compared to other integrated methods (NetCTL, WAPP and SMM), EpiJen performs best, predicting 61 of the 99 HIV epitopes used in this study. Conclusion - EpiJen is a reliable multi-step algorithm for T cell epitope prediction, which belongs to the next generation of in silico T cell epitope identification methods. These methods aim to reduce subsequent experimental work by improving the success rate of epitope prediction.
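The successive selection steps can be sketched as a shrinking candidate set; the peptides and filter rules below are crude stand-ins, not EpiJen's actual matrices or thresholds:

```python
# Schematic of the four successive selection steps: each stage keeps only the
# peptides passing its test, so the candidate set shrinks at every step,
# mirroring how different proportions of non-epitopes are eliminated.
def epitope_pipeline(peptides, stages):
    for name, keep in stages:
        peptides = [p for p in peptides if keep(p)]
        print(f"after {name}: {len(peptides)} candidates")
    return peptides

peptides = ["SLYNTVATL", "GILGFVFTL", "AAAAAAAAA", "KLVALGINA", "PPPPPPPPP"]
stages = [
    ("proteasome cleavage", lambda p: p[-1] in "LVIF"),  # hydrophobic C terminus
    ("TAP transport",       lambda p: p[0] != "P"),      # toy TAP preference
    ("MHC binding",         lambda p: len(set(p)) > 3),  # toy diversity test
    ("epitope selection",   lambda p: True),             # final ranking placeholder
]
print(epitope_pipeline(peptides, stages))
```

The cascade ordering matters: cheap sequence-based filters run first so that only a small fraction of the protein ever reaches the final binding-affinity stage, which is how the method keeps the final set under 5% of the sequence.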
Abstract:
The nonlinear inverse synthesis (NIS) method, in which information is encoded directly onto the continuous part of the nonlinear signal spectrum, has recently been proposed as a promising digital signal processing technique for combating fiber nonlinearity impairments. However, because the NIS method relies on the integrability of the lossless nonlinear Schrödinger equation, the original approach can only be applied directly to optical links with ideal distributed Raman amplification. In this paper, we propose and assess a modified NIS scheme that can be used effectively in standard optical links with lumped amplifiers, such as erbium-doped fiber amplifiers (EDFAs). The proposed scheme takes into account the average effect of the fiber loss to obtain an integrable model (the lossless path-averaged model) to which the NIS technique is applicable. We found that the error between the lossless path-averaged and lossy models increases linearly with transmission distance and input power (measured in dB). We numerically demonstrate the feasibility of the proposed NIS scheme in a burst-mode orthogonal frequency division multiplexing (OFDM) transmission scheme with advanced modulation formats (e.g., QPSK, 16QAM, and 64QAM), showing a performance improvement of up to 3.5 dB; these results are comparable to those achievable with multi-step-per-span digital backpropagation.
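For reference, the lossless path-averaged model usually takes the following form (our notation, assumed rather than taken from the paper): over each amplifier span the nonlinear coefficient is replaced by its path average, yielding an integrable NLSE:

```latex
% Standard lossless path-averaged approximation: for a span of length L with
% loss coefficient alpha, the nonlinear coefficient gamma is replaced by its
% path average, and the resulting NLSE is integrable (so NIS applies):
\[
  \gamma_{\mathrm{eff}} \;=\; \gamma \,\frac{1 - e^{-\alpha L}}{\alpha L},
  \qquad
  i\,\frac{\partial q}{\partial z}
  \;=\; \frac{\beta_2}{2}\,\frac{\partial^2 q}{\partial t^2}
  \;-\; \gamma_{\mathrm{eff}}\,|q|^2 q ,
\]
% where q(z, t) is the path-averaged field envelope and beta_2 the group
% velocity dispersion. As alpha L -> 0 (ideal distributed amplification),
% gamma_eff -> gamma and the lossless model is recovered exactly.
```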
Abstract:
In this paper, we investigate the design of few-mode fibers (FMFs) guiding 2 to 12 linearly polarized (LP) modes with low differential mode delay (DMD) over the C-band, suitable for long-haul transmission. Two types of refractive index profile are considered: a graded-core with cladding trench (GCCT) profile and a multi-step-index (MSI) profile. The profile parameters are optimized to achieve the lowest possible DMD with macro-bend losses (MBL) below the ITU-T standard recommendation. The optimization results show that the MSI profiles present lower DMD than the minimum achieved with a GCCT profile. Moreover, it is shown that the optimum DMD and the MBL scale with the number of modes for both profiles. The optimum DMD obtained for 12 LP modes is lower than 3 ps/km using a GCCT profile and lower than 2.5 ps/km using an MSI profile. The optimization results reveal that the most influential parameter of the GCCT profile is the refractive index relative difference at the core center, Δnco: reducing Δnco reduces the DMD at the expense of increasing the MBL. Regarding the MSI profiles, it is shown that 64 steps are required to obtain a DMD improvement for 12 LP modes. Finally, the impact of fabrication margins on the optimum DMD is analyzed. The probability of a manufactured FMF with 12 LP modes having a DMD lower than 12 ps/km is approximately 68% using a GCCT profile and 16% using an MSI profile. © 2013 IEEE.
Abstract:
Rapidly rising world populations have sparked growing concerns over global food production. Figures released by The World Bank suggest that a 50% increase in worldwide cereal production is required by 2030. Primary amines are important intermediates in the synthesis of a wide variety of fine chemicals used in the agrochemical industry, and hence new 'greener' routes to their low-cost manufacture from sustainable resources would permit significantly enhanced crop yields. Early synthetic pathways to primary amines employed stoichiometric (and often toxic) reagents in multi-step protocols, resulting in a large number of by-products and correspondingly high environmental factors of 50-100 (compared with 1-5 for typical bulk chemical syntheses). Alternative catalytic routes to primary amines have proven fruitful; however, new issues relating to selectivity and deactivation have slowed commercialisation. The potential of heterogeneous catalysts for nitrile hydrogenation to amines has been demonstrated in a simplified reaction framework under benign conditions, but further work is required to improve the atom economy and energy efficiency by developing fundamental insight into the nature of the active species and the origin of on-stream deactivation. Supported palladium nanoparticles have been investigated for the hydrogenation of crotononitrile to butylamine (Figure 1) under favourable conditions, and the impact of reaction temperature, hydrogen pressure, support and loading upon activity and selectivity to C=C versus C≡N activation assessed.
Abstract:
Ingestion of arsenic from contaminated water is a serious problem that affects the health of more than 100 million people worldwide. Traditional water purification technologies are generally ineffective or cost-prohibitive for the removal of arsenic to acceptable levels (≤10 ppb). Current multi-step arsenic removal processes involve oxidation, precipitation and/or adsorption. Advanced Oxidation Technologies (AOTs) may be attractive alternatives to existing treatments. The reactions of inorganic and organic arsenic species with reactive oxygen species were studied to develop a fundamental mechanistic understanding of these reactions, which is critical in identifying an effective and economical technology for the treatment of arsenic-contaminated water. Detailed studies on the conversion of arsenite in aqueous media by ultrasonic irradiation and TiO2 photocatalytic oxidation (PCO) were conducted, focusing on the roles of the hydroxyl radical (•OH) and the superoxide anion radical (O2•−) formed during irradiation. •OH plays the key role, while O2•− has little or no role in the conversion of arsenite during ultrasonic irradiation, and the reaction of O2•− does not contribute to the rapid conversion of As(III) when compared with the reaction of As(III) with •OH during TiO2 PCO. Monomethylarsonic acid (MMA) and dimethylarsinic acid (DMA) are readily degraded upon TiO2 PCO; DMA is oxidized to MMA as the intermediate and to arsenate as the final product. For dilute solutions, TiO2 may also be applicable as an adsorbent for direct removal of arsenic species, namely As(III), As(V), MMA and DMA, all of which are strongly adsorbed, thus eliminating the need for a multi-step treatment process. Phenylarsonic acid (PA) was subjected to gamma radiolysis under hydroxyl-radical-generating conditions, which showed rapid degradation of PA. Product analysis and computational calculations both indicate that the arsenate group is an ortho, para director.
Our results indicate that •OH-radical-mediated processes should be effective for the remediation of phenyl-substituted arsonic acids. While hydroxyl-radical-generating methods, specifically AOTs, appear to be promising for the treatment of a variety of arsenic compounds in aqueous media, pilot studies and careful economic analyses will be required to establish the feasibility of AOT applications in the removal of arsenic.