967 results for two-round scheme
Abstract:
This study evaluated the yield, yield components, and oil content of two castor bean cultivars under drip irrigation with different water depths. The research was conducted in 2009 on a clayey Oxisol at the experimental field in Dourados, Mato Grosso do Sul State. The experimental design was randomized blocks in a factorial scheme combining five water depths (0, 25, 50, 100 and 150% of evapotranspiration, applied by drip irrigation) and two castor bean cultivars (IAC 2028 and IAC 80), with four replications. The irrigation schedule was predetermined at up to two irrigations per week, except on rainy days. Increasing irrigation provided a significant increase in most yield components and in crop yield, without changing the oil content of the seeds. The application of the highest water depth increased yield by 80% relative to the treatment that received no supplemental irrigation.
Abstract:
This paper deals with the numerical solution of complex fluid dynamics problems using a new bounded high-resolution upwind scheme (called SDPUS-C1 henceforth) for the discretization of convection terms. The scheme is based on the TVD and CBC stability criteria and is implemented in the context of finite volume/finite difference methodologies, either in the CLAWPACK software package for compressible flows or in the Freeflow simulation system for incompressible viscous flows. The performance of the proposed non-oscillatory upwind scheme is demonstrated by solving two-dimensional compressible flow problems, such as shock wave propagation, and two-dimensional/axisymmetric incompressible moving free-surface flows. The numerical results demonstrate that this new cell-interface reconstruction technique works very well in several practical applications. (C) 2012 Elsevier Inc. All rights reserved.
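For orientation, the following is a minimal sketch of a generic limited upwind reconstruction of a convected face value on a uniform 1D grid; it is not the SDPUS-C1 scheme itself, and the minmod limiter and sample data are assumptions chosen only to illustrate the TVD-bounded upwinding idea.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: returns zero at extrema, which keeps the reconstruction TVD."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def limited_upwind_faces(phi, velocity_positive=True):
    """Second-order limited upwind values of phi at the faces downstream of interior cells."""
    d_minus = phi[1:-1] - phi[:-2]   # backward differences
    d_plus = phi[2:] - phi[1:-1]     # forward differences
    slope = minmod(d_minus, d_plus)
    # extrapolate from the upwind cell centre to the face
    return phi[1:-1] + 0.5 * slope if velocity_positive else phi[1:-1] - 0.5 * slope

phi = np.array([0.0, 0.0, 1.0, 1.0, 0.5, 0.0])
print(limited_upwind_faces(phi))
```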
Abstract:
Solvent effects on the one- and two-photon absorption (1PA and 2PA) of disperse orange 3 (DO3) in dimethyl sulfoxide (DMSO) are studied using a discrete polarizable embedding (PE) response theory. The scheme comprises a quantum region containing the chromophore and an atomically granulated classical region for the solvent, accounting for full interactions within and between the two regions. Either classical molecular dynamics (MD) or hybrid Car-Parrinello (CP) quantum/classical (QM/MM) molecular dynamics simulations are employed to describe the solvation of DO3 in DMSO, allowing for an analysis of the effect of the intermolecular short-range repulsion, long-range attraction, and electrostatic interactions on the conformational changes of the chromophore, and also of the effect of the solute-solvent polarization. PE linear response calculations are performed to verify the character, solvatochromic shift, and overlap of the two lowest-energy transitions responsible for the linear absorption spectrum of DO3 in DMSO in the visible spectral region. Results of the PE linear and quadratic response calculations, performed using uncorrelated solute-solvent configurations sampled from either the classical or the hybrid CP QM/MM MD simulations, are used to estimate the width of the line shape function of the two lowest-energy electronic excited states, which allows a prediction of the 2PA cross-sections without the use of empirical parameters. Appropriate exchange-correlation functionals have been employed in order to describe the charge-transfer process following the electronic transitions of the chromophore in solution.
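As an illustration of the sampling procedure described above, the inhomogeneous width of a transition can be estimated from the spread of excitation energies over uncorrelated snapshots; the numbers below are hypothetical placeholders, not the paper's data.

```python
import numpy as np

# Hypothetical excitation energies (eV) of one excited state, computed for
# uncorrelated solute-solvent snapshots extracted from an MD trajectory.
rng = np.random.default_rng(0)
excitation_energies = rng.normal(loc=2.95, scale=0.08, size=100)

mean_energy = excitation_energies.mean()
sigma = excitation_energies.std(ddof=1)            # Gaussian width of the line shape
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma    # full width at half maximum

print(f"mean = {mean_energy:.3f} eV, sigma = {sigma:.3f} eV, FWHM = {fwhm:.3f} eV")
```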
Abstract:
We use the star count model of Ortiz & Lépine to perform an unprecedented exploration of the most important Galactic parameters, comparing the predicted counts with the Two Micron All Sky Survey observed star counts in the J, H, and KS bands for a grid of positions covering the whole sky. The comparison is made using a grid of lines of sight given by the HEALPix pixelization scheme. The resulting best-fit values for the parameters are: 2120 ± 200 pc for the radial scale length and 205 ± 40 pc for the scale height of the thin disk, with a central hole of 2070$_{-800}^{+2000}$ pc for the same disk; 3050 ± 500 pc for the radial scale length and 640 ± 70 pc for the scale height of the thick disk; 400 ± 100 pc for the central dimension of the spheroid; 0.0082 ± 0.0030 for the spheroid-to-disk density ratio; and 0.57 ± 0.05 for the oblate spheroid parameter.
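A minimal sketch of this kind of whole-sky comparison, assuming a healpy grid of lines of sight and a placeholder star-count model (the model function, the NSIDE value, the candidate parameter values, and the chi-square metric are illustrative assumptions, not the Ortiz & Lépine model):

```python
import numpy as np
import healpy as hp

NSIDE = 16                                    # coarse grid of lines of sight
npix = hp.nside2npix(NSIDE)
lon, lat = hp.pix2ang(NSIDE, np.arange(npix), lonlat=True)   # Galactic lon/lat in degrees

def model_counts(lat, scale_length, scale_height):
    """Placeholder star-count model per line of sight (stand-in for a real disk model)."""
    z = np.abs(np.sin(np.radians(lat))) * 2500.0   # crude height above the plane, pc
    return 1.0e4 * np.exp(-z / scale_height) / scale_length

def chi_square(observed, predicted):
    """Poisson-weighted chi-square between observed and predicted counts."""
    return np.sum((observed - predicted) ** 2 / np.maximum(predicted, 1.0))

observed = model_counts(lat, 2120.0, 205.0)    # stand-in for the 2MASS counts
grid = ((sl, sh) for sl in (1800.0, 2120.0, 2500.0) for sh in (150.0, 205.0, 260.0))
best = min(grid, key=lambda p: chi_square(observed, model_counts(lat, *p)))
print("best-fit (radial scale length, scale height) in pc:", best)
```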
Abstract:
We have studied the possibility of affecting the entanglement measure of a 2-qubit system, consisting of two photons with different fixed frequencies but two arbitrary linear polarizations, moving in the same direction, with the help of an applied external magnetic field. The interaction between the magnetic field and the photons in our model is achieved through intermediate electrons that interact with both the photons and the magnetic field. The possibility of an exact theoretical analysis of this scheme is based on known exact solutions that describe the interaction of an electron subjected to an external magnetic field (or a medium of electrons not interacting with each other) with a quantized field of two photons. We adapt these exact solutions to the case under consideration. Using explicit wave functions for the resulting electromagnetic field, we calculate the entanglement measure of the photon beam as a function of the applied magnetic field and the parameters of the electron medium.
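For reference, the entanglement of a pure two-qubit state (here, the two photon polarizations) is commonly quantified by the von Neumann entropy of the reduced density matrix; the sketch below uses an illustrative partially entangled state, not the explicit wave functions of the paper.

```python
import numpy as np

def entanglement_entropy(psi):
    """Entropy of entanglement of qubit A for a pure two-qubit state psi (4 amplitudes)."""
    m = np.asarray(psi, dtype=complex).reshape(2, 2)
    m = m / np.linalg.norm(m)
    s = np.linalg.svd(m, compute_uv=False)     # Schmidt coefficients
    p = s ** 2
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())      # 0 = product state, 1 = maximally entangled

# Example: a partially entangled polarization state a|HH> + b|VV>
a, b = np.cos(0.3), np.sin(0.3)
print(entanglement_entropy([a, 0.0, 0.0, b]))
```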
Abstract:
This research seeks to explain variations in the "politics" of preference formation in international trade negotiations. Building on the "policy determines politics" argument, I hypothesize a causal relationship between variations in issue characteristics and variations in political dynamics. More specifically, this study seeks to integrate into a single analytical framework two dimensions along which variations in the "politics of preference formation" can be organized: the configurations of power relationships among the relevant actors in the structures within which they interact, and the logic and motivations of the actors involved in the policy-making process. To do so, I first construct a four-cell typology of the "politics of preference formation" and then specify that the type of state-society configuration and the type of actors' motivations depend, respectively, on the degree to which a policy issue is perceived as politically salient and on the extent to which the distributional implications of that issue can be calculated by the relevant stakeholders in the policy-making process. The validity of the theoretical argument is tested against evidence concerning the European Union's negotiating strategy in four negotiating areas of the WTO's Doha Development Round of multilateral trade negotiations: agriculture, competition, environment, and technical assistance and capacity building.
Abstract:
Different tools have been used to set up and apply the model to fulfil the objective of this research.
1. The model. The base model is the Analytic Hierarchy Process (AHP), adapted in order to perform a benefit-cost analysis. The AHP, developed by Thomas Saaty, is a multicriteria decision-making technique which decomposes a complex problem into a hierarchy. It is used to derive ratio scales from both discrete and continuous paired comparisons in multilevel hierarchic structures. These comparisons may be taken from actual measurements or from a fundamental scale that reflects the relative strength of preferences and feelings.
2. Tools and methods.
2.1. The Expert Choice software. The Expert Choice software is a tool that allows each operator to easily implement the AHP model at every stage of the problem.
2.2. Personal interviews at the farms. For this research, the EMAS-certified farms of the Emilia Romagna region were identified; the information was provided by the EMAS centre in Wien. Personal interviews were carried out at each farm in order to obtain a complete and realistic judgment for each criterion of the hierarchy.
2.3. Questionnaire. A supporting questionnaire was also delivered and used for the interviews.
3. Elaboration of the data. After data collection, the data were elaborated using the Expert Choice software.
4. Results of the analysis. The figures above (see other document) give a series of numbers which are fractions of the unit, to be interpreted as the relative contribution of each element to the fulfillment of the corresponding objective. Calculating the benefit/cost ratio for each alternative (see the sketch after this abstract) gives the following:
- Alternative one, implement EMAS: benefits ratio 0.877, costs ratio 0.815, benefit/cost ratio 0.877/0.815 = 1.08.
- Alternative two, do not implement EMAS: benefits ratio 0.123, costs ratio 0.185, benefit/cost ratio 0.123/0.185 = 0.66.
As stated above, the alternative with the highest ratio is the best solution for the organization. This means that the research carried out and the model implemented suggest that EMAS adoption is the best alternative for the agricultural sector. It has to be noted that the ratio of 1.08 is a relatively low positive value; this shows the fragility of the conclusion and suggests a careful examination of the benefits and costs of each farm before adopting the scheme. On the other hand, the result should be taken into consideration by policy makers in order to strengthen their interventions regarding adoption of the scheme in the agricultural sector.
According to the AHP elaboration of judgments, the main considerations on benefits are:
- Legal compliance appears to be the most important benefit for the agricultural sector, since its rank is 0.471.
- The next two most important benefits are improved internal organization (ranking 0.230), followed by competitive advantage (ranking 0.221), mostly due to the sub-element improved image (ranking 0.743).
Finally, even though incentives are not ranked among the most important elements, the financial ones seem to have been decisive in the decision-making process.
According to the AHP elaboration of judgments, the main considerations on costs are:
- External costs appear to be far more important than internal ones (ranking 0.857 versus 0.143), suggesting that EMAS consultancy and verification costs remain the biggest obstacle.
- The implementation of the EMS is the most challenging element among the internal costs (ranking 0.750).
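A compact sketch of the AHP machinery used above, with hypothetical pairwise judgments (the 3x3 matrix below is illustrative and not the thesis data); priorities are obtained from the principal eigenvector of the comparison matrix and then combined into a benefit/cost ratio.

```python
import numpy as np

def ahp_priorities(A):
    """Priority weights from a pairwise comparison matrix A (Saaty's eigenvector method)."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                     # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                                 # fractions of the unit
    ci = (eigvals[k].real - len(A)) / (len(A) - 1)  # consistency index
    return w, ci / 0.58                             # consistency ratio, Saaty's RI for n = 3

# Hypothetical judgments among three benefit criteria
# (legal compliance, improved internal organization, competitive advantage).
A = np.array([[1.0, 3.0, 2.0],
              [1/3, 1.0, 1.0],
              [1/2, 1.0, 1.0]])
weights, cr = ahp_priorities(A)
print("benefit priorities:", weights.round(3), "consistency ratio:", round(cr, 3))

# Benefit/cost ratio of an alternative from its aggregated benefit and cost priorities.
benefit, cost = 0.877, 0.815
print("B/C ratio:", round(benefit / cost, 2))       # -> 1.08
```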
A new double laser pulse pumping scheme for transient collisionally excited plasma soft X-ray lasers
Abstract:
Within this thesis a new double laser pulse pumping scheme for plasma-based, transient collisionally excited soft x-ray lasers (SXRL) was developed, characterized and utilized for applications. SXRL operation from ~50 up to ~200 electron volts was demonstrated applying this concept. As a central technical tool, a special Mach-Zehnder interferometer in the chirped pulse amplification (CPA) laser front-end was developed for the generation of fully controllable double pulses to optimally pump SXRLs. This Mach-Zehnder device is fully controllable and enables the creation of two CPA pulses of different pulse duration and variable energy balance with an adjustable time delay. Besides SXRL pumping, the double-pulse configuration was applied to determine the B-integral in the CPA laser system by amplifying short pulse replicas in the system, followed by an analysis in the time domain. The measurement of B-integral values in the 0.1 to 1.5 radian range, limited only by the reachable laser parameters, proved to be a promising tool to characterize nonlinear effects in CPA laser systems.
Contributing to the issue of SXRL pumping, the double pulse was configured to optimally produce the gain medium for SXRL amplification. The focusing geometry of the two collinear pulses under the same grazing incidence angle on the target significantly improved the generation of the active plasma medium. On one hand the effect was induced by the intrinsically guaranteed exact overlap of the two pulses on the target, and on the other hand by the grazing incidence pre-pulse plasma generation, which allows for SXRL operation at higher electron densities, enabling higher gain in longer wavelength SXRLs and higher efficiency in shorter wavelength SXRLs. The observed gain enhancement was confirmed by plasma hydrodynamic simulations.
The first introduction of double short-pulse single-beam grazing incidence pumping for SXRL pumping below 20 nanometers at the laser facility PHELIX in Darmstadt (Germany) resulted in reliable operation of a nickel-like palladium SXRL at 14.7 nanometers, with the pump energy threshold strongly reduced to less than 500 millijoules. With the adaptation of the concept, namely double-pulse single-beam grazing incidence pumping (DGRIP), and the transfer of this technology to the laser facility LASERIX in Palaiseau (France), improved efficiency and stability of table-top high-repetition-rate soft x-ray lasers in the wavelength region below 20 nanometers was demonstrated. With a total pump laser energy below 1 joule on the target, 2 microjoules of nickel-like molybdenum soft x-ray laser emission at 18.9 nanometers was obtained at 10 hertz repetition rate, proving the attractiveness for high average power operation. An easy and rapid alignment procedure fulfilled the requirements for a sophisticated installation, and the highly stable output satisfied the need for a reliable, strong SXRL source. The qualities of the DGRIP scheme were confirmed in an irradiation operation on user samples with over 50,000 shots, corresponding to a deposited energy of ~50 millijoules.
The generation of double pulses with high energies up to ~120 joules enabled the transfer to shorter wavelength SXRL operation at the laser facility PHELIX. The application of DGRIP proved to be a simple and efficient method for the generation of soft x-ray lasers below 10 nanometers. Nickel-like samarium soft x-ray lasing at 7.3 nanometers was achieved at a low total pump energy threshold of 36 joules, which confirmed the suitability of the applied pumping scheme. A reliable and stable SXRL operation was demonstrated, thanks to the single-beam pumping geometry, despite the large optical apertures. The soft x-ray lasing of nickel-like samarium was an important milestone for the feasibility of applying the pumping scheme also at higher pump pulse energies, which are necessary to obtain soft x-ray laser wavelengths in the water window. The reduction of the total pump energy below 40 joules for 7.3 nanometer lasing now fulfils the requirement for installation at the high-repetition-rate laser facility LASERIX.
Development of a biorefinery scheme for the valorization of olive mill wastewaters and grape pomaces
Abstract:
In the Mediterranean area, olive mill wastewater (OMW) and grape pomace (GP) are among the major agro-industrial wastes produced. These two wastes have a high organic load and high phytotoxicity, so their disposal in the environment can lead to negative effects. Second-generation biorefineries are dedicated to the valorization of biowaste through the production of goods from such residual biomasses; this approach can combine bioremediation with the generation of noble molecules, biomaterials and energy. The main aim of this thesis work was to study the anaerobic digestion of OMW and GP under different operational conditions to produce volatile fatty acids (VFAs) (first-stage aim) and CH4 (second-stage aim). To this end, a packed-bed biofilm reactor (PBBR) was set up to perform the anaerobic acidogenic digestion of the liquid dephenolized stream of OMW (OMWdeph). In parallel, the solid stream of OMW (OMWsolid), previously separated in order to allow the solid-phase extraction of polyphenols, was addressed to anaerobic methanogenic digestion to obtain CH4. The latter experiment was performed in 100 mL Pyrex bottles maintained at different temperatures (55, 45 and 37°C). Together with previous experiments, the anaerobic acidogenic digestion of fermented GP (GPfreshacid) and of dephenolized and fermented GP (GPdephacid) was performed in 100 mL Pyrex bottles to estimate the concentration of VFAs achievable from each of the aforementioned GPs. Finally, the same GP matrices and non-pre-treated GP (GPfresh) were digested under anaerobic methanogenic conditions to produce CH4. The anaerobic acidogenic and methanogenic digestion processes of GP lasted about 33 days, whereas the anaerobic acidogenic and methanogenic digestion processes of OMW lasted about 121 and 60 days, respectively. Each experiment was periodically monitored by analysing the volume and composition of the produced biogas and the VFA concentration. Results showed that VFAs were produced in higher concentrations from GP than from OMWdeph: the overall VFA concentration was approximately 39.5 gCOD L-1 from GPfreshacid, 29 gCOD L-1 from GPdephacid, and 8.7 gCOD L-1 from OMWdeph. Concerning CH4 production, OMWsolid reached a higher biochemical methane potential (BMP) at thermophilic temperature (55°C) than at mesophilic ones (37-45°C), with a value of about 358.7 mLCH4 gSVsub-1. In contrast, GPfresh reached its highest BMP at mesophilic temperature, about 207.3 mLCH4 gSVsub-1, followed by GPfreshacid with about 192.6 mLCH4 gSVsub-1 and lastly GPdephacid with about 102.2 mLCH4 gSVsub-1. In summary, based on the gathered results, GP seems to be a better carbon source than OMW for acidogenic and methanogenic microorganisms, because higher amounts of VFAs and CH4 were produced in the anaerobic digestion of GP than of OMW. In addition to these products, polyphenols were extracted by means of a solid-phase extraction (SPE) procedure by another research group, and the VFAs were utilised for the production of biopolymers, in particular polyhydroxyalkanoates (PHAs), by the research group in which I was involved.
Abstract:
The present work studies a km-scale data assimilation scheme based on a local ensemble transform Kalman filter (LETKF) developed for the COSMO model. The aim is to evaluate the impact of assimilating two different types of data: temperature, humidity, pressure and wind data from conventional networks (SYNOP, TEMP, AIREP reports), and 3D reflectivity from radar volumes. A 3-hourly continuous assimilation cycle has been implemented over an Italian domain, based on a 20-member ensemble with boundary conditions provided by the ECMWF ENS. Three experiments have been run to evaluate the performance of the assimilation over one week in October 2014, during which the Genova and Parma floods took place: a control run of the data assimilation cycle assimilating data from conventional networks only; a second run in which the SPPT scheme is activated in the COSMO model; and a third run in which reflectivity volumes from meteorological radars are also assimilated. Objective evaluation of the experiments has been carried out both on case studies and on the entire week: checking the analysis increments, computing the Desroziers statistics for SYNOP, TEMP, AIREP and RADAR observations over the Italian domain, verifying the analyses against data not assimilated (temperature at the lowest model level, objectively verified against SYNOP data), and objectively verifying the deterministic forecasts initialised with the KENDA analyses for each of the three experiments.
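For context, a bare-bones LETKF analysis step (Hunt et al. 2007 formulation) for a single local region can be written as below; the linear observation operator, the inflation handling and all array sizes are illustrative assumptions, not the KENDA/COSMO implementation.

```python
import numpy as np
from scipy.linalg import sqrtm

def letkf_analysis(Xb, y, H, R, rho=1.0):
    """One LETKF analysis step.

    Xb : (n, k) background ensemble (state dimension n, k members)
    y  : (p,)   observations
    H  : (p, n) linear observation operator (illustrative)
    R  : (p, p) observation-error covariance
    rho: multiplicative covariance inflation applied to the perturbations
    """
    n, k = Xb.shape
    xb_mean = Xb.mean(axis=1)
    Xp = (Xb - xb_mean[:, None]) * np.sqrt(rho)       # inflated perturbations

    Yb = H @ (xb_mean[:, None] + Xp)                  # ensemble mapped to observation space
    yb_mean = Yb.mean(axis=1)
    Yp = Yb - yb_mean[:, None]

    C = Yp.T @ np.linalg.inv(R)                       # (k, p)
    Pa = np.linalg.inv((k - 1) * np.eye(k) + C @ Yp)  # analysis covariance in ensemble space
    Wa = np.real(sqrtm((k - 1) * Pa))                 # symmetric square root: perturbation weights
    wa_mean = Pa @ C @ (y - yb_mean)                  # mean update weights

    W = Wa + wa_mean[:, None]
    return xb_mean[:, None] + Xp @ W                  # analysis ensemble, shape (n, k)
```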
Abstract:
The WHO scheme for prothrombin time (PT) standardization has had limited application because of difficulties in its implementation, particularly the need for mandatory manual PT testing and for local provision of thromboplastin international reference preparations (IRP).
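For reference, the quantity that this standardization scheme harmonizes is the international normalized ratio, obtained from the patient's PT, the mean normal PT (MNPT) and the international sensitivity index (ISI) assigned to the local thromboplastin by calibration against an IRP; this relation is standard background rather than a result of the study:

$$\mathrm{INR} = \left(\frac{\mathrm{PT}_{\mathrm{patient}}}{\mathrm{MNPT}}\right)^{\mathrm{ISI}}$$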
Abstract:
We report on clinicopathological findings in two cases of rosette-forming glioneuronal tumor of the fourth ventricle (RGNT) occurring in females aged 16 years (Case 1) and 30 years (Case 2). Symptoms included vertigo, nausea, cerebellar ataxia, and headaches, and had been present for 4 months and 1 week, respectively. Magnetic resonance imaging (MRI) indicated a cerebellar-based tumor of 1.8 cm (Case 1) and 5 cm (Case 2) diameter, bulging into the fourth ventricle; Case 2 involved a cyst-mural-nodule configuration. In both instances, the solid component appeared isointense on T1 sequences, hyperintense in the T2 mode, and enhanced moderately. Gross total resection was achieved via suboccipital craniotomy; however, functional recovery was disappointing in Case 1. On microscopy, both tumors comprised an admixture of low-grade astrocytoma interspersed with circular aggregates of synaptophysin-expressing round cells harboring oligodendrocyte-like nuclei. The astrocytic moiety was nondescript in Case 1 and overtly pilocytic in Case 2. The architecture of the neuronal elements variously consisted of neurocytic rosettes, of pseudorosettes centered on a capillary core, and of concentric ribbons along irregular lumina. Gangliocytic maturation, especially "floating neurons", or a corresponding immunoreactivity for neurofilament protein, was absent. Neither of these populations exhibited atypia, mitotic activity, or significant labeling for MIB-1. Cerebellar parenchyma included in the surgical specimen did not reveal any preexisting malformative anomaly. Despite sharing some overlapping histologic traits with dysembryoplastic neuroepithelial tumor (DNT), the presentation of RGNT with respect to both patient age and location is consistent enough for this lesion to be singled out as an autonomous entity.
Abstract:
This technical report discusses the application of the Lattice Boltzmann Method (LBM) to fluid flow simulation through the porous filter wall of a disordered medium. The diesel particulate filter (DPF) is an example of such a disordered medium: the DPF was developed as a cutting-edge technology to reduce harmful particulate matter in engine exhaust, and its porous filter wall traps soot particles during the after-treatment of the exhaust gas. To examine the phenomena inside the DPF, researchers are looking to the Lattice Boltzmann Method as a promising alternative simulation tool. The lattice Boltzmann method is a comparatively new numerical scheme and can be used to simulate single-component single-phase and single-component multi-phase fluid flow; it is also an excellent method for modelling flow through disordered media. The current work focuses on single-phase fluid flow simulation inside the porous micro-structure using LBM. First, the theory concerning the development of LBM is discussed. The evolution of LBM is usually related to lattice gas cellular automata (LGCA), but it is also shown that the method is a special discretized form of the continuous Boltzmann equation. Since all simulations are conducted in two dimensions, the equations are developed with reference to the D2Q9 (two-dimensional, nine-velocity) model. An artificially created porous micro-structure is used in this study, and the flow simulations are conducted considering air and CO2 gas as fluids. The numerical model used in this study is explained with a flowchart and the coding steps; the numerical code is written in MATLAB. Different types of boundary conditions and their importance are discussed separately, and the equations specific to each boundary condition are derived. The pressure and velocity contours over the porous domain are studied and recorded, and the results are compared with published work. The permeability values obtained in this study can be fitted to the relation proposed by Nabovati [8], and the results are in excellent agreement within the porosity range of 0.4 to 0.8.
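A minimal sketch of a D2Q9 BGK collision-and-streaming step with full-way bounce-back at solid nodes is given below (in Python rather than the report's MATLAB); the body-force treatment, relaxation time and periodic streaming are illustrative assumptions.

```python
import numpy as np

# D2Q9 lattice velocities, weights and opposite-direction indices
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
opp = [0, 3, 4, 1, 2, 7, 8, 5, 6]

def equilibrium(rho, ux, uy):
    """BGK equilibrium distributions for the D2Q9 model."""
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux ** 2 + uy ** 2)
    return w[:, None, None] * rho * (1.0 + cu + 0.5 * cu ** 2 - usq)

def lbm_step(f, solid, tau, force_x=1e-6):
    """One collision + streaming step; solid is a boolean mask of the porous structure."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho + force_x * tau / rho  # simple body force
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f_post = f - (f - equilibrium(rho, ux, uy)) / tau   # BGK collision
    f_post[:, solid] = f[opp][:, solid]                 # full-way bounce-back at solid nodes
    for i in range(9):                                  # periodic streaming
        f_post[i] = np.roll(f_post[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))
    return f_post
```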
Abstract:
The U.S. Renewable Fuel Standard mandates that by 2022, 36 billion gallons of renewable fuels must be produced on a yearly basis. Ethanol production is capped at 15 billion gallons, meaning 21 billion gallons must come from alternative fuel sources. A viable alternative for reaching the remainder of this mandate is iso-butanol. Unlike ethanol, iso-butanol does not phase-separate when mixed with water, meaning it can be transported using traditional pipeline methods. Iso-butanol also has a lower oxygen content by mass, meaning it can displace more petroleum while maintaining the same oxygen concentration in the fuel blend. This research focused on studying the effects of low-level alcohol fuels on marine engine emissions to assess the possibility of using iso-butanol as a replacement for ethanol. Three marine engines were used in this study, representing a wide range of what is currently in service in the United States. Boats powered by two four-stroke engines and one two-stroke engine were tested in the tributaries of the Chesapeake Bay, near Annapolis, Maryland, over the course of two rounds of week-long testing in May and September. The engines were tested using a standard test cycle, and emissions were sampled using constant volume sampling techniques. Specific emissions for the two-stroke and four-stroke engines were compared to the baseline indolene tests. Because of the nature of the field testing, only limited engine parameters were recorded; aside from emissions, the parameters analyzed were the operating relative air-to-fuel ratio and engine speed. Emissions trends from the baseline test to each alcohol fuel for the four-stroke engines were consistent when analyzing a single round of testing. The same trends were not consistent when comparing separate rounds, because of uncontrolled weather conditions and because the four-stroke engines operate without fuel control feedback during full-load conditions. Emissions trends from the baseline test to each alcohol fuel for the two-stroke engine were consistent for all rounds of testing, because that engine operates open-loop and does not compensate its fueling when the fuel composition changes. Changes in emissions with respect to the baseline for iso-butanol were consistent with the changes for ethanol. It was determined that iso-butanol would be a viable replacement for ethanol.
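A back-of-the-envelope check of the oxygen-content argument on a mass basis (standard molecular weights, ignoring blend density differences; these numbers are illustrative and not taken from the study):

```python
# Oxygen mass fractions of the two oxygenates from standard atomic weights.
O, H, C = 16.00, 1.008, 12.011
ethanol = 2 * C + 6 * H + O            # C2H5OH, ~46.07 g/mol
iso_butanol = 4 * C + 10 * H + O       # C4H9OH, ~74.12 g/mol

x_eth = O / ethanol                    # ~0.347 oxygen by mass
x_ib = O / iso_butanol                 # ~0.216 oxygen by mass

# Iso-butanol mass fraction matching the oxygen content of a 10 wt% ethanol blend:
equivalent_ib = 0.10 * x_eth / x_ib    # ~0.16, i.e. roughly 16 wt% iso-butanol
print(round(x_eth, 3), round(x_ib, 3), round(equivalent_ib, 3))
```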