896 results for laboratory instruments
Abstract:
Climate change is an important environmental problem and one whose economic implications are many and varied. This paper starts with the presumption that mitigation of greenhouse gases is a necessary policy that has to be designed in a cost-effective way. It is well known that market instruments are the best option for cost-effectiveness. But the discussion regarding which of the various market instruments should be used, how they may interact, and which combinations of policies should be implemented is still open and very lively. In this paper we propose a combination of instruments: the marketable emission permits already in place in Europe for major economic sectors, and a CO2 tax for the economic sectors not included in the emissions permit scheme. The study uses an applied general equilibrium model for the Spanish economy to compute the results obtained with the proposed mix of instruments. As the combination of the market for emission permits and the CO2 tax admits different possibilities depending on how mitigation is distributed among the economic sectors, we concentrate on four: cost-effective, equalitarian, proportional-to-emissions, and proportional-to-output distributions. Other alternatives to the CO2 tax are also analysed (taxes on energy, on oil and on electricity). Our findings suggest that careful, well-designed policies are needed, as any deviation imposes significant additional costs that increase more than proportionally with the level of emissions reduction targeted by the EU.
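For reference, the "cost-effective" distribution mentioned above corresponds to the standard condition (textbook reasoning, not taken from the paper's model) that marginal abatement costs be equalized across sectors, which is exactly what a uniform carbon price implements:

\[
\min_{a_1,\dots,a_n}\ \sum_{i=1}^{n} C_i(a_i)
\quad \text{s.t.} \quad \sum_{i=1}^{n} a_i \ge \bar{A}
\qquad\Longrightarrow\qquad
C_1'(a_1^{\ast}) = C_2'(a_2^{\ast}) = \dots = C_n'(a_n^{\ast}) = \lambda ,
\]

where a_i is sector i's abatement, C_i(·) its abatement cost function, \bar{A} the aggregate reduction target, and λ the common shadow price of carbon, implemented by a uniform permit price or CO2 tax. Any other distribution of the abatement effort across sectors, such as the equalitarian or proportional rules studied in the paper, violates this equality and therefore raises total abatement cost, consistent with the finding that deviations impose additional costs.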
Abstract:
During April 8th-10th, 2008, the Alliance for Coastal Technology (ACT) partner institutions, the University of Alaska Fairbanks (UAF), the Alaska SeaLife Center (ASLC), and the Oil Spill Recovery Institute (OSRI), hosted a workshop entitled "Hydrocarbon sensors for oil spill prevention and response" in Seward, Alaska. The main focus was to bring together the 29 workshop participants - representing resource managers, scientists, and technology developers - to discuss current and future hydrocarbon in-situ, laboratory, and remote sensors as they apply to oil spill prevention and response. [PDF contains 28 pages] Hydrocarbons and their derivatives remain among the most important energy sources in the world. To manage these energy sources effectively, proper protocols must be implemented to ensure prevention of, and response to, oil spills, as there are significant economic and environmental costs when spills occur. Hydrocarbon sensors provide the means to detect and monitor oil spills before, during, and after they occur. Capitalizing on the properties of oil, developers have designed in-situ, laboratory, and remote sensors that absorb or reflect electromagnetic energy at different spectral bands. Workshop participants identified current hydrocarbon sensors (in-situ, laboratory, and remote) and their overall performance. To achieve the most comprehensive understanding of oil spills, multiple sensors are needed to determine a spill's extent, location, movement, thickness, condition, and classification; no single hydrocarbon sensor can collect all of this information. Participants therefore suggested developing means of combining sensor equipment to establish an effective and rapid spill response. As oil exploration continues at polar latitudes, sensor equipment must be developed to withstand harsh arctic climates, detect oil under ice, and reduce the need for ground teams, because the ice extent is far too large an area to cover. Participants also recognized the need for the U.S. to adopt multi-agency cooperation for oil spill response, as the majority of issues surrounding oil spill response concern not the hydrocarbon sensors themselves but an effective contingency plan adopted by all agencies. It was recommended that the U.S. model its contingency planning on that of other nations such as Germany and Norway. Workshop participants were asked to make recommendations at the conclusion of the workshop; these are summarized below without prioritization:
*Outreach materials must be delivered to funding sources and Congressional delegates regarding the importance of oil spill prevention and response and the development of proper sensors to achieve effective response.
*Develop protocols for training resource managers as new sensors become available.
*Develop or adopt standard instrument specifications and testing protocols to assist manufacturers in further developing new sensor technology.
*As oil exploration continues at polar latitudes, more research and development should be allocated to developing a suite of instruments applicable to oil detection under ice.
Abstract:
This report describes FORTH software written for several instruments used in the NASA-sponsored project to design and build the Marine Optical Buoy System (MOBS) and in the NOAA-sponsored project "EOS MODIS Execution: Oceanographic Profiling, Data Acquisition and Management for the Marine Optical Buoy System". In the NOAA project, MLML and NOAA personnel will participate in quarterly cruises at the MOBS Hawaiian site to validate the performance of SeaWiFS and will participate in several extended "process" cruises to provide wide geographic surface-truthing investigations similar to those led by Dennis Clark (NOAA) following the launch of CZCS in 1979. In the NASA project we are designing and building MOBS, a high-resolution spectroradiometer that will operate autonomously in a buoy moored west of Lanai in the Hawaiian Islands. That instrument, the "Marine Optical System" (MOS), will transmit by cellular phone, in near real time, observations of upwelled radiance and downwelled irradiance from three depths. [PDF contains 90 pages]
Abstract:
At present both the wedge gauge and the ICES mesh gauge are used to examine the mesh opening of trawl cod-ends. A thorough investigation of both instruments by an EU project (MESH) has revealed a number of deficiencies. This has led to the conclusion not to recommend further use of either device but to develop a completely new measurement tool. This development is presently progressing in a further EU project named OMEGA. Prototypes of the new device have been produced and a number of introductory tests have already been made. This contribution reports on further laboratory tests carried out at the Institute for Fishing Technology and Fish Quality, with special emphasis on netting yarn diameters below 4 mm. The results demonstrate further yarn-size-dependent inaccuracies of the wedge gauge and support the rapid introduction of the newly developed measurement instrument.
Abstract:
The embryonic development in Clarias gariepinus was studied under laboratory conditions. The developmental stages of eggs, starting from the first cleavage, were examined microscopically. A photomicroscope was used to record the important stages of segmentation, blastulation, differentiation of the embryo, and hatching. The photographic films were developed and printed for each stage recorded. Each stage was accurately timed and described in detail. The results show that the blastodisc (polar cap) appeared about 35 minutes after fertilization and that the first cleavage, dividing the blastodisc into two blastomeres, occurred 15 minutes after polar cap formation. Details of the developmental stages of the embryos and the timing from one stage to the next are described. The larva shook off the shell and emerged completely from the egg case about 22 hours after fertilization at a water temperature of 25.1°C. Accurate determination of the time of initiation of the first mitosis is of great importance in fish culture and breeding, especially in the production of tetraploids.
Abstract:
The determination of the energy levels and the probabilities of transition between them, by the formal analysis of observed electronic, vibrational, and rotational band structures, forms the direct goal of all investigations of molecular spectra, but the significance of such data lies in the possibility of relating them theoretically to more concrete properties of molecules and the radiation field. From the well-developed electronic spectra of diatomic molecules, it has been possible, with the aid of the non-relativistic quantum mechanics, to obtain accurate moments of inertia, molecular potential functions, electronic structures, and detailed information concerning the coupling of spin and orbital angular momenta with the angular momentum of nuclear rotation. The silicon fluoride molecule has been investigated in this laboratory, and is found to emit bands whose vibrational and rotational structures can be analyzed in this detailed fashion.
Like silicon fluoride, however, the great majority of diatomic molecules are formed only under the unusual conditions of electrical discharge, or in high temperature furnaces, so that although their spectra are of great theoretical interest, the chemist is eager to proceed to a study of polyatomic molecules, in the hope that their more practically interesting structures might also be determined with the accuracy and assurance which characterize the spectroscopic determinations of the constants of diatomic molecules. Some progress has been made in the determination of molecular potential functions from the vibrational term values deduced from Raman and infrared spectra, but in no case can the calculations be carried out with great generality, since the number of known term values is always small compared with the total number of potential constants in even so restricted a potential function as the simple quadratic type. For the determination of nuclear configurations and bond distances, however, a knowledge of the rotational terms is required. The spectra of about twelve of the simpler polyatomic molecules have been subjected to rotational analyses, and a number of bond distances are known with considerable accuracy, yet the number of molecules whose rotational fine structure has been resolved even with the most powerful instruments is small. Consequently, it was felt desirable to investigate the spectra of a number of other promising polyatomic molecules, with the purpose of carrying out complete rotational analyses of all resolvable bands, and ascertaining the value of the unresolved band envelopes in determining the structures of such molecules, in the cases in which resolution is no longer possible. Although many of the compounds investigated absorbed too feebly to be photographed under high dispersion with the present infrared sensitizations, the location and relative intensities of their bands, determined by low dispersion measurements, will be reported in the hope that these compounds may be reinvestigated in the future with improved techniques.
Abstract:
Secondary organic aerosol (SOA) is produced in the atmosphere by oxidation of volatile organic compounds. Laboratory chambers are used to understand the formation mechanisms and evolution of SOA formed under controlled conditions. This thesis presents studies of SOA formed from anthropogenic and biogenic precursors and discusses the effects of chamber walls on suspended vapors and particles.
During a chamber experiment, suspended vapors and particles can interact with the chamber walls. Particle wall loss is relatively well-understood, but vapor wall losses have received little study. Vapor wall loss of 2,3-epoxy-1,4-butanediol (BEPOX) and glyoxal was identified, quantified, and found to depend on chamber age and relative humidity.
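In chamber studies, the simplest common way to quantify such a vapor loss is as a first-order process (a generic parameterization, not necessarily the exact treatment used in this work):

\[
\frac{dC_v}{dt} = -k_w\, C_v \quad\Longrightarrow\quad C_v(t) = C_v(0)\, e^{-k_w t},
\]

where C_v is the suspended vapor concentration and k_w is an effective wall-loss rate coefficient, which, per the results above, would itself vary with chamber age and relative humidity.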
Particles reside in the atmosphere for a week or more and can evolve chemically during that time, a process termed aging. Simulating aging in laboratory chambers has proven to be challenging. A protocol was developed to extend the duration of a chamber experiment to 36 h of oxidation and was used to evaluate aging of SOA produced from m-xylene. Total SOA mass concentration increased and then decreased with increasing photooxidation, suggesting a transition from functionalization to fragmentation chemistry driven by photochemical processes. SOA oxidation, measured as the bulk particle elemental oxygen-to-carbon ratio and the fraction of organic mass at m/z 44, increased continuously starting after 5 h of photooxidation.
The physical state and chemical composition of an organic aerosol affect the mixing of aerosol components and its interactions with condensing species. A laboratory chamber protocol was developed to evaluate the mixing of SOA produced sequentially from two different sources by heating the chamber to induce particle evaporation. Using this protocol, SOA produced from toluene was found to be less volatile than that produced from α-pinene. When the two types of SOA were formed sequentially, the evaporation behavior most closely resembled that of SOA from the second parent hydrocarbon, suggesting that the mixed SOA particles consist of a core of SOA from the first precursor coated by a layer of SOA from the second precursor, indicative of limited mixing.
Abstract:
4 p.
Abstract:
25 p.
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which yields orders-of-magnitude speedups over other methods.
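As a rough, minimal sketch of the test-selection step just described (noise-free, in Python; the hypothesis matrix, equivalence classes, and all names below are illustrative assumptions, not the thesis's actual BROAD implementation):

import itertools
import numpy as np

def ec2_score(prior, classes, predictions, test):
    """Expected weight of hypothesis-pair 'edges' cut by running `test`
    (noise-free sketch of the EC2 idea). An edge joins two hypotheses from
    different equivalence classes (theories) and carries weight
    prior[i] * prior[j]; observing an outcome cuts every edge that has an
    endpoint inconsistent with that outcome."""
    score = 0.0
    for outcome in (0, 1):                       # e.g. "chose lottery A" / "chose lottery B"
        consistent = predictions[:, test] == outcome
        p_outcome = prior[consistent].sum()      # probability of seeing this outcome
        if p_outcome == 0.0:
            continue
        cut = sum(prior[i] * prior[j]
                  for i, j in itertools.combinations(range(len(prior)), 2)
                  if classes[i] != classes[j]
                  and (not consistent[i] or not consistent[j]))
        score += p_outcome * cut
    return score

def run_adaptive_design(prior, classes, predictions, ask_subject, n_rounds):
    """Greedy loop: pick the highest-scoring test, observe the choice,
    and discard hypotheses that predicted the other choice."""
    prior = prior.copy()
    remaining = set(range(predictions.shape[1]))
    for _ in range(n_rounds):
        test = max(remaining, key=lambda t: ec2_score(prior, classes, predictions, t))
        remaining.remove(test)
        outcome = ask_subject(test)
        # Noise-free Bayesian update: assumes at least one hypothesis
        # predicted the observed choice.
        prior[predictions[:, test] != outcome] = 0.0
        prior /= prior.sum()
    return prior

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prior = np.full(6, 1 / 6)                       # 6 hypothetical hypotheses
    classes = [0, 0, 1, 1, 2, 2]                    # drawn from 3 theory classes
    predictions = rng.integers(0, 2, size=(6, 20))  # 0/1 predictions on 20 tests
    truth = 3                                       # pretend hypothesis 3 is the subject
    posterior = run_adaptive_design(prior, classes, predictions,
                                    ask_subject=lambda t: predictions[truth, t],
                                    n_rounds=5)
    print(posterior)

In the actual procedure, the update is a full Bayesian posterior that accommodates noisy responses, and the adaptive submodularity of EC2 is what justifies the greedy choice against the Bayes-optimal testing sequence; this sketch only conveys the structure of the loop.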
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of the CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
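For orientation, the standard textbook forms of these competing valuations of a lottery paying x_i with probability p_i are roughly as follows (the exact parameterizations used in the thesis may differ):

\[
\text{EV} = \sum_i p_i x_i, \qquad
U_{\text{CRRA}} = \sum_i p_i \frac{c_i^{\,1-\rho}}{1-\rho}, \qquad
V_{\text{PT}} = \sum_i w(p_i)\, v(x_i),
\qquad
v(x) = \begin{cases} x^{\alpha} & x \ge 0 \\ -\lambda\,(-x)^{\beta} & x < 0 \end{cases},
\]

where c_i is final wealth (endowment plus outcome), ρ is the coefficient of relative risk aversion, w(·) is a probability-weighting function, and λ > 1 captures loss aversion; moments models instead score a lottery by its mean, variance, and (sometimes) skewness.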
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for the present bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
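In their standard forms, the discount functions being compared look roughly like the following (the abstract's (α, β) corresponds to what is written here as (β, δ); the thesis's exact parameterization may differ):

\[
D_{\text{exp}}(t) = e^{-\rho t}, \qquad
D_{\text{hyp}}(t) = \frac{1}{1 + k t}, \qquad
D_{\text{quasi-hyp}}(t) = \begin{cases} 1 & t = 0 \\ \beta\,\delta^{t} & t > 0 \end{cases}, \qquad
D_{\text{gen-hyp}}(t) = (1 + k t)^{-\theta},
\]

while fixed-cost discounting, roughly speaking, subtracts a one-off utility cost from any delayed payoff rather than reshaping the curve.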
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
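One simple way to see how a nonlinear subjective clock can generate hyperbolic-type discounting (an illustrative special case, not the dependence-based argument proved in the thesis): if the agent discounts exponentially in subjective time and subjective time is logarithmically compressed (Weber-Fechner), then

\[
\tau(t) = \frac{\ln(1 + k t)}{k}, \qquad
D(t) = e^{-r\,\tau(t)} = (1 + k t)^{-r/k},
\]

which is the generalized-hyperbolic form and reduces to 1/(1+kt) when r = k; evaluating such a curve at different delays produces the preference reversals associated with temporal choice inconsistency.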
We also test the predictions of behavioural theories in the "wild". We pay particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in a distinctly different way than the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than can be explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute would increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
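A generic way to embed loss aversion in a discrete choice demand model (an illustrative specification; the paper's actual utility function and reference-price construction are not given in the abstract) is a logit model with a reference-dependent price term:

\[
U_{ij} = \alpha_j - \beta\, p_{j} + \eta\,\max(r_{ij} - p_{j},\, 0) - \lambda\,\eta\,\max(p_{j} - r_{ij},\, 0) + \varepsilon_{ij},
\qquad
\Pr(i \text{ chooses } j) = \frac{e^{V_{ij}}}{\sum_{k} e^{V_{ik}}},
\]

where r_ij is consumer i's reference price for item j, λ > 1 captures loss aversion (paying above the reference hurts more than an equivalent discount helps), and V_ij is the deterministic part of U_ij. Under such a specification, a discount (p_j < r_ij) boosts demand beyond the ordinary price effect, and removing the discount later depresses demand for j and shifts it to close substitutes, which is the pattern described above.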
In future work, BROAD could be applied to testing a wide range of behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, could be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
[no abstract]