901 results for two-stage sequential procedure
Abstract:
A two-stage approach to performing ab initio calculations on medium and large sized molecules is described. The first step is to perform SCF calculations on small molecules or molecular fragments using the OPIT program. This employs a small basis set of spherical and p-type Gaussian functions. The Gaussian functions can be identified very closely with atomic cores, bond pairs, lone pairs, etc. The position and exponent of any of the Gaussian functions can be varied by OPIT to produce a small but fully optimised basis set. The second stage is the molecular fragments method. As an example of this, Gaussian exponents and distances are taken from an OPIT calculation on ethylene and used unchanged in a single SCF calculation on benzene. Approximate ab initio calculations of this type give much useful information and are often preferable to semi-empirical approaches, since the nature of the approximations involved is much better defined.
Abstract:
In many studies of side-chain liquid crystalline polymers (SCLCPs) bearing azobenzene mesogens as pendant groups, obtaining the orientation of azobenzene mesogens at a macroscopic scale, as well as its control, is important because it impacts many properties related to the cooperative motion characteristic of liquid crystals and the trans-cis photoisomerization of the azobenzene molecules. Various means can be used to align the mesogens in the polymers, including rubbed surfaces, mechanical stretching or shearing, and electric or magnetic fields. In the case of azobenzene-containing SCLCPs, another method consists of using linearly polarized light (LPL) to induce orientation of azobenzene mesogens perpendicular to the polarization direction of the excitation light, and such photoinduced orientation has been the subject of numerous studies. In the first study of this thesis (Chapter 1), we carried out the first systematic investigation of the interplay of the mechanically and optically induced orientation of azobenzene mesogens, as well as the effect of thermal annealing, in a SCLCP and a diblock copolymer comprising two SCLCPs bearing azobenzene and biphenyl mesogens, respectively. Using a supporting-film approach previously developed by our group, a given polymer film can first be stretched in either the nematic or smectic phase to yield orientation of azobenzene mesogens either parallel or perpendicular to the strain direction, then exposed to unpolarized UV light to erase the mechanically induced orientation upon the trans–cis isomerization, followed by linearly polarized visible light for photoinduced reorientation as a result of the cis–trans back-isomerization, and finally heated to different LC phases for thermal annealing. 
Using infrared dichroism to monitor the change in orientation degree, the results of this study have unveiled complex and different orientational behavior and coupling effects for the homopolymer of poly{6-[4-(4-methoxyphenylazo)phenoxy]hexyl methacrylate} (PAzMA) and the diblock copolymer of PAzMA-block-poly{6-[4-(4-cyanophenyl)phenoxy]hexyl methacrylate} (PAzMA-PBiPh). Most notably for the homopolymer, the stretching-induced orientation exerts no memory effect on the photoinduced reorientation, the direction of which is determined by the polarization of the visible light regardless of the mechanically induced orientation direction in the stretched film. Moreover, subsequent thermal annealing in the nematic phase leads to parallel orientation independently of the initial mechanically or photoinduced orientation direction. By contrast, the diblock copolymer displays a strong orientation memory effect. Regardless of the condition used, either for photoinduced reorientation or thermal annealing in the liquid crystalline phase, only the initial stretching-induced perpendicular orientation of azobenzene mesogens can be recovered. The reported findings provide new insight into the different orientation mechanisms, and help understand the important issue of orientation induction and control in azobenzene-containing SCLCPs. The second study presented in this thesis (Chapter 2) deals with supramolecular side-chain liquid crystalline polymers (S-SCLCPs), in which side-group mesogens are linked to the chain backbone through non-covalent interactions such as hydrogen bonding. Little is known about the mechanically induced orientation of mesogens in S-SCLCPs. In contrast to covalent SCLCPs, free-standing, solution-cast thin films of a S-SCLCP, built up with 4-(4’-heptylphenyl) azophenol (7PAP) H-bonded to poly(4-vinyl pyridine) (P4VP), display excellent stretchability. 
Taking advantage of this finding, we investigated the stretching-induced orientation and the viscoelastic behavior of this S-SCLCP, and the results revealed major differences between supramolecular and covalent SCLCPs. For covalent SCLCPs, the strong coupling between chain backbone and side-group mesogens means that the two constituents can mutually influence each other; the lack of chain entanglements is a manifestation of this coupling effect, which accounts for the difficulty in obtaining freestanding and mechanically stretchable films. Upon elongation of a covalent SCLCP film cast on a supporting film, the mechanical force acts on the coupled polymer backbone and mesogenic side groups, and the latter orient cooperatively and efficiently (high orientation degree), which, in turn, imposes an anisotropic conformation on the chain backbone (low orientation degree). In the case of the S-SCLCP of P4VP-7PAP, the coupling between the side-group mesogens and the chain backbone is much weakened owing to the dynamic dissociation/association of the H-bonds linking the two constituents. The consequence of this decoupling is readily observable from the viscoelastic behavior. The average molecular weight between entanglements is basically unchanged in both the smectic and isotropic phases, and is similar to non-liquid crystalline samples. As a result, the S-SCLCP can easily form freestanding and stretchable films. Furthermore, the stretching-induced orientation behavior of P4VP-7PAP is totally different. Stretching in the smectic phase results in a very low degree of orientation of the side-group mesogens even at a large strain (500%), while the orientation of the main chain backbone develops steadily with increasing strain, in much the same way as in amorphous polymers. 
The results imply that upon stretching, the mechanical force is mostly coupled to the polymer backbone and leads to its orientation, while the main-chain orientation exerts little effect on orienting the H-bonded mesogenic side groups. This surprising finding is explained by the likelihood that during stretching in the smectic phase (at relatively higher temperatures) the dynamic dissociation of the H-bonds allows the side-group mesogens to be decoupled from the chain backbone and relax quickly. In the third project (Chapter 3), we investigated the shape memory properties of a S-SCLCP prepared by tethering two azobenzene mesogens, namely 7PAP and 4-(4'-ethoxyphenyl) azophenol (2OPAP), to P4VP through H-bonding. The results revealed that, despite the dynamic nature of the linking H-bonds, the supramolecular SCLCP behaves similarly to a covalent SCLCP by exhibiting a two-stage thermally triggered shape recovery process governed by both the glass transition and the LC-isotropic phase transition. The ability of the supramolecular SCLCP to store part of the strain energy above Tg in the LC phase enables the triple-shape memory property. Moreover, thanks to the azobenzene mesogens used, which can undergo trans-cis photoisomerization, exposing the supramolecular SCLCP to UV light can also trigger the shape recovery process, thus enabling remote activation and spatiotemporal control of the shape memory. By measuring the generated contractile force and its removal upon turning the UV light on and off, respectively, on an elongated film under constant strain, it appears that the optically triggered shape recovery stems from a combination of a photothermal effect and an effect of photoplasticization, or of an order-disorder phase transition, resulting from the trans-cis photoisomerization of azobenzene mesogens.
Abstract:
Centrifugal pumps are widely used in many industrial applications. Knowledge of how these components behave in several circumstances is crucial for the development of more efficient and, therefore, less expensive pumping installations. The combination of multiple impellers, vaned diffusers and a volute may introduce several complex flow characteristics that deviate largely from regular inviscid pump flow theory. Computational Fluid Dynamics can be very helpful for extracting information about which physical phenomena are involved in such flows. In this sense, this work performs a numerical study of the flow in a two-stage centrifugal pump (Imbil ITAP 65-330/2) with a vaned diffuser and a volute. The flow in the pump is modeled using the software Ansys CFX, by means of a multi-block, transient rotor-stator technique, with structured grids for all pump parts. The simulations were performed using water and a mixture of water and glycerin as working fluids. Several viscosities were considered, in a range between 87 and 720 cP. Comparisons between experimental data obtained by Amaral (2007) and numerical head curves showed good agreement, with an average deviation of 6.8% for water. The behavior of the velocity, pressure and turbulence kinetic energy fields was evaluated for several operational conditions. In general, the results obtained by this work achieved the proposed goals and are a significant contribution to the understanding of the flow studied.
Abstract:
An indirect genetic algorithm for the non-unicost set covering problem is presented. The algorithm is a two-stage meta-heuristic, which in the past was successfully applied to similar multiple-choice optimisation problems. The two stages of the algorithm are an ‘indirect’ genetic algorithm and a decoder routine. First, the solutions to the problem are encoded as permutations of the rows to be covered, which are subsequently ordered by the genetic algorithm. Fitness assignment is handled by the decoder, which transforms the permutations into actual solutions to the set covering problem. This is done by exploiting both problem structure and problem specific information. However, flexibility is retained by a self-adjusting element within the decoder, which allows adjustments to both the data and to stages within the search process. Computational results are presented.
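The decoder idea in the abstract above can be sketched in a few lines of Python. This is a minimal illustrative version, assuming a simple greedy cost-per-newly-covered-row rule and a toy instance; the actual decoder in the paper additionally exploits problem-specific information and a self-adjusting element, which are not modeled here.

```python
# Hypothetical sketch of a permutation decoder for set covering.
# covers[j] is the set of rows that column j covers; costs[j] is its cost.
def decode(permutation, covers, costs):
    """Turn a row permutation into a feasible cover, greedily.

    Rows are visited in the order given by the genetic algorithm's
    permutation; each still-uncovered row is covered by the column with
    the best cost per newly covered row.
    """
    uncovered = set(permutation)
    chosen = []
    for row in permutation:
        if row not in uncovered:
            continue  # already covered by an earlier column choice
        best = min(
            (j for j in range(len(covers)) if row in covers[j]),
            key=lambda j: costs[j] / len(covers[j] & uncovered),
        )
        chosen.append(best)
        uncovered -= covers[best]
    return chosen

# Toy instance: 4 rows, 3 columns.
covers = [{0, 1}, {1, 2}, {2, 3}]
costs = [2.0, 1.0, 2.0]
solution = decode([0, 1, 2, 3], covers, costs)
```

In the full meta-heuristic, the genetic algorithm would evolve the permutation while the decoder supplies the fitness of each permutation via the cost of the cover it produces.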
Abstract:
Internship report presented to the Escola Superior de Educação de Paula Frassinetti to obtain the degree of Master in Pre-School Education and Teaching of the 1st Cycle of Basic Education
Abstract:
This dissertation mainly focuses on coordinated pricing and inventory management problems, where the related background is provided in Chapter 1. Several periodic-review models are then discussed in Chapters 2, 3, 4 and 5, respectively. Chapter 2 analyzes a deterministic single-product model, where a price adjustment cost is incurred if the current selling price is changed from the previous period. We develop exact algorithms for the problem under different conditions and find that computational complexity varies significantly with the cost structure. Moreover, our numerical study indicates that dynamic pricing strategies may outperform static pricing strategies even when the price adjustment cost accounts for a significant portion of the total profit. Chapter 3 develops a single-product model in which demand in a period depends not only on the current selling price but also on past prices through the so-called reference price. Strongly polynomial time algorithms are designed for the case without fixed ordering cost, and a heuristic is proposed for the general case together with an error bound estimation. Moreover, we illustrate through numerical studies that incorporating the reference price effect into coordinated pricing and inventory models can have a significant impact on firms' profits. Chapter 4 discusses the stochastic version of the model in Chapter 3 when customers are loss averse. It extends the associated results developed in the literature and proves that the reference-price-dependent base-stock policy is optimal under certain conditions. Instead of dealing with specific problems, Chapter 5 establishes the preservation of supermodularity in a class of optimization problems. 
This property and its extensions include several existing results in the literature as special cases, and provide powerful tools as we illustrate their applications to several operations problems: the stochastic two-product model with cross-price effects, the two-stage inventory control model, and the self-financing model.
Enzymatic hydrolysis and fermentation of ultradispersed wood particles after ultrasonic pretreatment
Abstract:
Background: A study of the correlation between the particle size of lignocellulosic substrates and ultrasound pretreatment on the efficiency of further enzymatic hydrolysis and fermentation to ethanol. Results: The maximum concentrations of glucose and, to a lesser extent, di- and trisaccharides were obtained in a series of experiments with 48-h enzymatic hydrolysis of pine raw materials ground at 380–400 rpm for 30 min. The highest glucose yield was observed at the end of the hydrolysis with a cellulase dosage of 10 mg of protein (204 ± 21 units CMCase per g of sawdust). The greatest enzymatic hydrolysis efficiency was observed in a sample that combined two-stage grinding at 400 rpm with ultrasonic treatment for 5–10 min at a power of 10 W per kg of sawdust. The glucose yield in this case (35.5 g glucose l−1) increased twofold compared to ground substrate without further preparation. Conclusions: Combining mechanical two-stage grinding of lignocellulosic raw materials with ultrasonication increases the efficiency of subsequent enzymatic hydrolysis and fermentation.
Abstract:
Successful implementation of fault-tolerant quantum computation on a system of qubits places severe demands on the hardware used to control the many-qubit state. It is known that an accuracy threshold Pa exists for any quantum gate that is to be used for such a computation to be able to continue for an unlimited number of steps. Specifically, the error probability Pe for such a gate must fall below the accuracy threshold: Pe < Pa. Estimates of Pa vary widely, though Pa ∼ 10−4 has emerged as a challenging target for hardware designers. I present a theoretical framework based on neighboring optimal control that takes as input a good quantum gate and returns a new gate with better performance. I illustrate this approach by applying it to a universal set of quantum gates produced using non-adiabatic rapid passage. Performance improvements are substantial compared to the original (unimproved) gates, both for ideal and non-ideal controls. Under suitable conditions detailed below, all gate error probabilities fall by 1 to 4 orders of magnitude below the target threshold of 10−4. After applying the neighboring optimal control theory to improve the performance of quantum gates in a universal set, I further apply the general control theory in a two-step procedure for fault-tolerant logical state preparation, and I illustrate this procedure by preparing a logical Bell state fault-tolerantly. The two-step preparation procedure is as follows: Step 1 provides a one-shot procedure using neighboring optimal control theory to prepare a physical qubit state which is a high-fidelity approximation to the Bell state |β01⟩ = 1/√2(|01⟩ + |10⟩). I show that for ideal (non-ideal) control, an approximate |β01⟩ state can be prepared with error probability ϵ ∼ 10−6 (10−5) with one-shot local operations. 
Step 2 then takes a block of p pairs of physical qubits, each pair prepared in the |β01⟩ state using Step 1, and fault-tolerantly prepares the logical Bell state for the C4 quantum error detection code.
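As a point of reference for the target state above, the textbook ideal-gate circuit for |β01⟩ can be checked with a small statevector calculation. This is only an illustration of the state being prepared; the thesis itself works at the pulse level via neighboring optimal control, which is not modeled here.

```python
import numpy as np

# Ideal-gate sketch of preparing |β01> = (|01> + |10>)/sqrt(2):
# X on qubit 1, then H on qubit 0, then CNOT with qubit 0 as control.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],   # basis order |q0 q1>: 00, 01, 10, 11
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                    # start in |00>
state = np.kron(I2, X) @ state    # |01>
state = np.kron(H, I2) @ state    # (|01> + |11>)/sqrt(2)
state = CNOT @ state              # (|01> + |10>)/sqrt(2) = |β01>
```

The final vector has equal amplitude 1/√2 on the |01⟩ and |10⟩ components, matching the target Bell state.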
Abstract:
Many different photovoltaic technologies are being developed for large-scale solar energy conversion, such as crystalline silicon solar cells and thin-film solar cells based on a-Si:H, CIGS and CdTe. As the demand for photovoltaics rapidly increases, there is a pressing need to identify new visible-light-absorbing materials for thin-film solar cells. Nowadays a wide range of earth-abundant absorber materials have been studied around the world by different research groups. The current thin-film photovoltaic market is dominated by technologies based on the use of CdTe and CIGS; these solar cells have been made with laboratory efficiencies of up to 19.6% and 20.8%, respectively. However, the scarcity and high cost of In, Ga and Te may limit large-scale production of photovoltaic devices in the long term. On the other hand, quaternary CZTSSe, which contains abundant and inexpensive elements such as Cu, Zn, Sn, S and Se, has been a potential candidate for PV technology, with solar cell efficiencies of up to 12.6%; however, some challenges remain to be overcome for this material. Therefore, there is an evident need to find alternative, inexpensive and earth-abundant materials for thin-film solar cells. One of these alternatives is copper antimony sulfide (CuSbS2), which contains abundant, non-toxic elements and has a direct optical band gap of 1.5 eV, the optimum value for an absorber material in solar cells, suggesting it as one of the new photovoltaic materials. This thesis work focuses on the preparation and characterization of In6Se7, CuSbS2 and CuSb(S1-xSex)2 thin films for their application as absorber materials in photovoltaic structures using a two-stage process combining chemical bath deposition and thermal evaporation.
Abstract:
Nonpoint source (NPS) pollution from agriculture is the leading source of water quality impairment in U.S. rivers and streams, and a major contributor to lakes, wetlands, estuaries and coastal waters (U.S. EPA 2016). Using data from a survey of farmers in Maryland, this dissertation examines the effects of a cost sharing policy designed to encourage adoption of conservation practices that reduce NPS pollution in the Chesapeake Bay watershed. This watershed is the site of the largest Total Maximum Daily Load (TMDL) implemented to date, making it an important setting in the U.S. for water quality policy. I study two main questions related to the reduction of NPS pollution from agriculture. First, I examine the issue of additionality of cost sharing payments by estimating the direct effect of cover crop cost sharing on the acres of cover crops, and the indirect effect of cover crop cost sharing on the acres of two other practices: conservation tillage and contour/strip cropping. A two-stage simultaneous equation approach is used to correct for voluntary self-selection into cost sharing programs and account for substitution effects among conservation practices. Quasi-random Halton sequences are employed to solve the system of equations for conservation practice acreage and to minimize the computational burden involved. By considering patterns of agronomic complementarity or substitution among conservation practices (Blum et al., 1997; USDA SARE, 2012), this analysis estimates water quality impacts of the crowding-in or crowding-out of private investment in conservation due to public incentive payments. Second, I connect the econometric behavioral results with model parameters from the EPA’s Chesapeake Bay Program to conduct a policy simulation on water quality effects. I expand the econometric model to also consider the potential loss of vegetative cover due to cropland incentive payments, or slippage (Lichtenberg and Smith-Ramirez, 2011). 
Econometric results are linked with the Chesapeake Bay Program watershed model to estimate the change in abatement levels and costs for nitrogen, phosphorus and sediment under various behavioral scenarios. Finally, I use inverse sampling weights to derive statewide abatement quantities and costs for each of these pollutants, comparing these with TMDL targets for agriculture in Maryland.
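The quasi-random Halton draws mentioned above are built from the radical-inverse function; a minimal sketch follows. This shows only the low-discrepancy point construction, not the dissertation's simultaneous-equation estimation that consumes the draws.

```python
# Minimal Halton sequence sketch (radical-inverse construction).
def halton(index, base):
    """Radical inverse of `index` (>= 1) in the given prime base.

    Writes `index` in the given base and mirrors its digits across the
    radix point, yielding a point in [0, 1).
    """
    result, f = 0.0, 1.0 / base
    while index > 0:
        index, digit = divmod(index, base)
        result += digit * f
        f /= base
    return result

# First few 2-D Halton points (coprime bases 2 and 3), which fill the
# unit square more evenly than pseudo-random draws.
points = [(halton(i, 2), halton(i, 3)) for i in range(1, 5)]
```

In practice one would use a vetted implementation such as `scipy.stats.qmc.Halton`; the hand-rolled version above is only to make the construction concrete.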
Abstract:
Matching theory and matching markets are a core component of modern economic theory and market design. This dissertation presents three original contributions to this area. The first essay constructs a matching mechanism in an incomplete information matching market in which the positive assortative match is the unique efficient and unique stable match. The mechanism asks each agent in the matching market to reveal her privately known type. Through its novel payment rule, truthful revelation forms an ex post Nash equilibrium in this setting. This mechanism works in one-, two- and many-sided matching markets, thus offering the first mechanism to unify these matching markets under a single mechanism design framework. The second essay confronts a problem of matching in an environment in which no efficient and incentive compatible matching mechanism exists due to matching externalities. I develop a two-stage matching game in which a contracting stage facilitates a subsequent conditionally efficient and incentive compatible Vickrey auction stage. Infinite repetition of this two-stage matching game enforces the contract in every period. This mechanism produces inequitably distributed social improvement: parties to the contract receive all of the gains and then some. The final essay demonstrates the existence of prices which stably and efficiently partition a single set of agents into firms and workers, and match those two sets to each other. This pricing system extends Kelso and Crawford's general equilibrium results in a labor market matching model and links one- and two-sided matching markets as well.
Abstract:
The objective of this paper is to obtain empirical evidence about the existence of asymmetric effects of monetary policy on economic activity, based on interest-rate behavior. Monetary policy shows an asymmetric effect when an interest rate above its fundamental level has an impact on economic activity that is significantly different from that of an interest rate below its fundamental level. Changes in the interest rate that reflect policy changes are identified using two-stage least squares. In the first stage, the fundamental level of the interest rate is estimated with a modified Taylor rule, and the residuals are used to identify the policy stance. The second stage consists of a regression of real output on a constant and lagged values of the positive and negative residuals obtained in the first stage. Asymmetry is determined by the statistical significance of the individual coefficients on the positive and negative residuals and of the difference between them. The empirical evidence, for the 1994:01-2002:11 period, suggests the existence of weak asymmetry of monetary policy: although increases and decreases in the interest rate affect the level of production significantly, the difference in impact is not significant.
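The two-stage procedure described above can be sketched numerically. The sketch below uses synthetic data and ordinary least squares at each stage; all series, coefficients and names are illustrative assumptions, not the paper's dataset or exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
inflation = rng.normal(size=T)
output_gap = rng.normal(size=T)

# Stage 1: fit a (modified) Taylor rule for the interest rate and keep
# the residuals, interpreted as the stance of monetary policy.
rate = 1.0 + 0.5 * inflation + 0.5 * output_gap + rng.normal(scale=0.3, size=T)
X1 = np.column_stack([np.ones(T), inflation, output_gap])
beta1, *_ = np.linalg.lstsq(X1, rate, rcond=None)
resid = rate - X1 @ beta1

# Split residuals into positive (rate above fundamental level) and
# negative (rate below fundamental level) components.
pos = np.where(resid > 0, resid, 0.0)
neg = np.where(resid < 0, resid, 0.0)

# Stage 2: regress real output on a constant and lagged pos/neg
# residuals (one lag here for brevity; the paper uses several).
output = rng.normal(size=T)  # placeholder output series
X2 = np.column_stack([np.ones(T - 1), pos[:-1], neg[:-1]])
beta2, *_ = np.linalg.lstsq(X2, output[1:], rcond=None)
# Asymmetry would be judged from the significance of beta2[1], beta2[2]
# and of their difference, via the usual coefficient standard errors.
```

A real application would add the full lag structure and hypothesis tests on the coefficient difference, e.g. with `statsmodels`.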
Abstract:
Background and aims: A gluten-free diet is to date the only treatment available to celiac disease sufferers. However, systematic reviews indicate that, depending on the method of evaluation used, only 42% to 91% of patients adhere to the diet strictly. Transculturally adapted tools that evaluate adherence beyond simple self-reported questions or invasive analyses are, therefore, of importance. The aim is to obtain a Spanish transcultural adaptation and validation of Leffler's Celiac Dietary Adherence Test. Methods: A two-stage observational cross-sectional study: translation and back-translation by four qualified translators, followed by a validation stage in which the questionnaire was administered to 306 celiac disease patients aged between 12 and 72 years and resident in Aragon. Factorial structure, criterion validity and internal consistency were evaluated. Results: The Spanish version maintained the 7 items in a 3-factor structure. Feasibility was very high in all the questions answered, and the floor and ceiling effects were very low (4.3% and 1%, respectively). The Spearman correlations with the self-efficacy and life quality scales and the self-reported question were statistically significant (p < 0.01). According to the questionnaire criteria, adherence was 72.3%. Conclusion: The Spanish version of the Celiac Dietary Adherence Test shows appropriate psychometric properties and is, therefore, suitable for studying adherence to a gluten-free diet in clinical and research environments.
Abstract:
The use of infrared burners in industrial applications has many technical and operational advantages, for example uniformity of heat supply in the form of radiation and convection, with greater control of emissions due to the passage of exhaust gases through a macro-porous ceramic bed. This paper presents a commercial infrared burner adapted with an experimental ejector capable of promoting a mixture of liquefied petroleum gas (LPG) and glycerin. By varying the proportions of the two fuels, the performance of the infrared burner was evaluated through an energy balance and atmospheric emission measurements. A two-stage (low heat / high heat) modulating temperature controller with a thermocouple was introduced, using solenoid valves for each fuel. The burner was tested while varying the amount of glycerin inserted by a gravity feed system. For the thermodynamic analysis, the load was estimated using an aluminum plate located at the exit of the combustion gases, with the temperature distribution measured by a data acquisition system that recorded real-time measurements from the attached thermocouples. The burner showed stable combustion at glycerin additions of 15, 20 and 25% by mass relative to the LPG, increasing the supply of heat to the plate. The data obtained showed an improvement in the first-law efficiency of the infrared burner with increasing glycerin addition. The emission levels of combustion gases (CO, NOx, SO2 and HC) met the environmental limits set by CONAMA Resolution No. 382/2006.
Abstract:
Medication reconciliation is the appropriate combination of knowledge and scientific evidence on reactions, interactions and patient needs, and is essential to the good use of medications. General objective: To establish medication reconciliation and identify the types of discrepancies existing at admission, during hospitalization and at discharge in patients of the gynecology area of the Hospital Vicente Corral Moscoso, Cuenca, during November and December 2015. Methodology: A descriptive study was designed with a population of 200 patients hospitalized in the gynecology area of the Hospital Vicente Corral Moscoso over 2 months of 2015. Data were collected using a two-stage reconciliation form, from the prescriptions in the clinical history and interviews with the patients, and were entered into SPSS 15.0 for tabulation, analysis and presentation in tables. Results: 161 reconciliation errors and 42 justified discrepancies were found, an average of 1.87 unjustified discrepancies per patient. The most frequent reconciliation error at admission corresponded to different dose, route and frequency of administration (84.6%); during hospitalization and at discharge, it corresponded to incomplete prescriptions (40% and 60.3%, respectively). Conclusions: The frequency with which medication reconciliation was performed at the Hospital Vicente Corral Moscoso was 15%. 52% of patients are exposed to risk from discrepancies in prescriptions; of these, 43% are reconciliation errors and 9% are justified discrepancies.