975 results for Nontechnical losses
Abstract:
Nonlinear propagation of fs laser pulses in liquids and the dynamic processes of filamentation, such as self-focusing, intensity clamping, and the evolution of white-light production, have been analyzed using one- and two-photon fluorescence. The energy losses of laser pulses caused by multiphoton absorption and by conical emission have been measured using the z-scan technique. Numerical simulations of fs laser propagation in water have been performed to explain the evolution of white-light production, as well as the small-scale filaments in liquids that we have observed with a nonlinear fluorescence technique. (c) 2005 Optical Society of America.
Abstract:
Amorphous metals that form fully glassy parts over a few millimeters in thickness are still relatively new materials. Their glassy structure gives them particularly high strengths, high yield strains, high hardness values, high resilience, and low damping losses, but it can also result in an extremely low tolerance for flaws in the material. Since this glassy structure lacks the ordered crystal structure, it also lacks the crystalline defects (dislocations) that provide the micromechanism of toughening and flaw insensitivity in conventional metals. Without a sufficient and reliable toughness that confers a large damage tolerance, metallic glasses will struggle to be adopted commercially. Here, we identify the origin of toughness in metallic glass as the competition between the intrinsic toughening mechanism of shear banding ahead of a crack and crack propagation by cavitation of the liquid inside the shear bands. The first three chapters present a detailed study focused on the process of shear banding: its crucial role in giving rise to one of the most damage-tolerant materials known, its extreme sensitivity to the configurational state of a glass with moderate toughness, and how the configurational state can be changed by the addition of minor elements. The last chapter is a novel investigation of the cavitation barrier in glass-forming liquids, the process that competes with shear banding. Together, our results represent an increased understanding of the major influences on the fracture toughness of metallic glasses and thus provide a path for the improvement and development of tougher metallic glasses.
Abstract:
Secondary organic aerosol (SOA) is produced in the atmosphere by oxidation of volatile organic compounds. Laboratory chambers are used to understand the formation mechanisms and evolution of SOA formed under controlled conditions. This thesis presents studies of SOA formed from anthropogenic and biogenic precursors and discusses the effects of chamber walls on suspended vapors and particles.
During a chamber experiment, suspended vapors and particles can interact with the chamber walls. Particle wall loss is relatively well understood, but vapor wall losses have received little study. Vapor wall loss of 2,3-epoxy-1,4-butanediol (BEPOX) and glyoxal was identified, quantified, and found to depend on chamber age and relative humidity.
Particles reside in the atmosphere for a week or more and can evolve chemically during that time, a process termed aging. Simulating aging in laboratory chambers has proven to be challenging. A protocol was developed to extend the duration of a chamber experiment to 36 h of oxidation and was used to evaluate aging of SOA produced from m-xylene. Total SOA mass concentration increased and then decreased with increasing photooxidation, suggesting a transition from functionalization to fragmentation chemistry driven by photochemical processes. SOA oxidation, measured as the bulk particle elemental oxygen-to-carbon ratio and the fraction of organic mass at m/z 44, increased continuously beginning after 5 h of photooxidation.
The physical state and chemical composition of an organic aerosol affect the mixing of aerosol components and its interactions with condensing species. A laboratory chamber protocol was developed to evaluate the mixing of SOA produced sequentially from two different sources by heating the chamber to induce particle evaporation. Using this protocol, SOA produced from toluene was found to be less volatile than that produced from α-pinene. When the two types of SOA were formed sequentially, the evaporation behavior most closely resembled that of SOA from the second parent hydrocarbon, suggesting that the structure of the mixed SOA particles resembles a core of SOA from the first precursor coated by a layer of SOA from the second precursor, indicative of limited mixing.
Abstract:
This is the first report and record of a determination of whether mudfish Clarias were infected with larvae of the nematode Eustrongylides. It also documents an assessment of the socio-economic perceptions of three groups of fisher folk regarding the economics of fishing for infected mudfish Clarias. Fifty-six (67.5%) of 83 mudfish Clarias caught by artisanal fisher folk were examined for the presence of larvae of the nematode Eustrongylides. All 8 sampled fishing localities in the Bida floodplain of Nigeria had a mean intensity and abundance of at least 3 and 1 worm per fish per site, respectively. Two hundred and one (96.2%) of the 209 worms recovered came from the musculature at different depths, producing grub-like undulations on the skin surface. All three groups of fisher folk assessed suffered economic losses from nematode-infected mudfish Clarias, which attracted much debate or outright rejection during marketing owing to their aesthetically displeasing appearance, faster deterioration, and greater fragility in smoked form, coupled with a poorer taste compared to wholesome fish. Infected female mudfish Clarias had a higher worm burden than the males at each fishing locality.
Abstract:
This thesis presents an experimental investigation of the axisymmetric heat transfer from a small-scale fire and the resulting buoyant plume to a horizontal, unobstructed ceiling during the initial stages of development. A propane-air burner yielding a heat source strength between 1.0 kW and 1.6 kW was used to simulate the fire, and measurements confirmed that this heat source satisfactorily represented a source of buoyancy only. The ceiling consisted of a 1/16" steel plate, 0.91 m in diameter, insulated on the upper side. The ceiling height was adjustable between 0.5 m and 0.91 m. Temperature measurements were carried out in the plume, in the ceiling jet, and on the ceiling.
Heat transfer data were obtained by using the transient method and applying corrections for the radial conduction along the ceiling and losses through the insulation material. The ceiling heat transfer coefficient was based on the adiabatic ceiling jet temperature (recovery temperature) reached after a long time. A parameter involving the source strength Q and ceiling height H was found to correlate measurements of this temperature and its radial variation. A similar parameter for estimating the ceiling heat transfer coefficient was confirmed by the experimental results.
This investigation therefore provides reasonable estimates for the heat transfer from a buoyant gas plume to a ceiling in the axisymmetric case, for the stagnation region where such heat transfer is a maximum and for the ceiling jet region (r/H ≤ 0.7). A comparison with data from experiments which involved larger heat sources indicates that the predicted scaling of temperatures and heat transfer rates for larger scale fires is adequate.
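The parameter groupings described in this abstract, involving source strength Q and ceiling height H, have the same form as Alpert's widely cited unconfined-ceiling correlations. A minimal sketch using Alpert's published coefficients (illustrative values, not the fitted parameters of this thesis):

```python
def ceiling_jet_excess_temp(Q_kW: float, H_m: float, r_m: float) -> float:
    """Excess gas temperature (K above ambient) under an unobstructed
    ceiling above a fire plume, per Alpert's classic correlations.

    Q_kW : total heat release rate of the source (kW)
    H_m  : ceiling height above the source (m)
    r_m  : radial distance from the plume axis (m)
    """
    if r_m / H_m <= 0.18:
        # Stagnation region: heat transfer (and temperature) is a maximum.
        return 16.9 * Q_kW ** (2 / 3) / H_m ** (5 / 3)
    # Ceiling jet region: excess temperature decays with radius.
    return 5.38 * (Q_kW / r_m) ** (2 / 3) / H_m
```

The Q^(2/3)/H^(5/3) grouping in the stagnation branch is the kind of parameter the thesis reports as correlating the adiabatic ceiling jet temperature; for the 1.0-1.6 kW sources and 0.5-0.91 m ceiling heights studied here, the excess temperatures it predicts are modest, consistent with an initial-stage fire.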
Abstract:
The German trawl fishery for cod in the German Bight produces discards, whose amount has been estimated since 1983. The loss of millions of fish ultimately means the loss of thousands of tonnes of valuable landings. The fishermen should therefore improve the selectivity of their nets.
Abstract:
Spontaneous emission into the lasing mode fundamentally limits laser linewidths. Reducing cavity losses provides two benefits to linewidth: (1) fewer excited carriers are needed to reach threshold, resulting in less phase-corrupting spontaneous emission into the laser mode, and (2) more photons are stored in the laser cavity, such that each individual spontaneous emission event disturbs the phase of the field less. Strong optical absorption in III-V materials causes high losses, preventing currently available semiconductor lasers from achieving ultra-narrow linewidths. This absorption is a natural consequence of the compromise between efficient electrical and efficient optical performance in a semiconductor laser. Some of the III-V layers must be heavily doped in order to funnel excited carriers into the active region, which has the side effect of making the material strongly absorbing.
This thesis presents a new technique, called modal engineering, to remove modal energy from the lossy region and store it in an adjacent low-loss material, thereby reducing overall optical absorption. A quantum mechanical analysis of modal engineering shows that modal gain and spontaneous emission rate into the laser mode are both proportional to the normalized intensity of that mode at the active region. If optical absorption near the active region dominates the total losses of the laser cavity, shifting modal energy from the lossy region to the low-loss region will reduce modal gain, total loss, and the spontaneous emission rate into the mode by the same factor, so that linewidth decreases while the threshold inversion remains constant. The total spontaneous emission rate into all other modes is unchanged.
Modal engineering is demonstrated using the Si/III-V platform, in which light is generated in the III-V material and stored in the low-loss silicon material. The silicon is patterned as a high-Q resonator to minimize all sources of loss. Fabricated lasers employing modal engineering to concentrate light in silicon demonstrate linewidths at least 5 times smaller than lasers without modal engineering at the same pump level above threshold, while maintaining the same thresholds.
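The two linewidth benefits described in this abstract can be captured in a single back-of-the-envelope relation: a Schawlow-Townes-type linewidth is proportional to the spontaneous emission rate into the mode and inversely proportional to the intracavity photon number. A minimal sketch under that assumption (variable names and values are illustrative, not taken from the thesis):

```python
import math

def phase_diffusion_linewidth(R_sp: float, N_p: float, alpha_H: float = 0.0) -> float:
    """Schawlow-Townes-type laser linewidth (Hz).

    R_sp    : spontaneous emission rate into the lasing mode (events/s)
    N_p     : intracavity photon number
    alpha_H : Henry linewidth-enhancement factor (0 = ideal four-level laser)
    """
    return R_sp * (1 + alpha_H ** 2) / (4 * math.pi * N_p)

# Halving the cavity loss roughly halves R_sp (benefit 1) and doubles
# N_p at fixed pump (benefit 2), so the linewidth drops by about 4x.
broad = phase_diffusion_linewidth(1e9, 1e5)
narrow = phase_diffusion_linewidth(0.5e9, 2e5)
```

This makes the 5x linewidth reduction reported above plausible as the combined effect of both mechanisms rather than either one alone.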
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and the more ambitiously we extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments in which subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate these choices differently and make distinct predictions about the subjects' choices; theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests, which imposes computational and economic constraints on classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn determine the next test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
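A minimal, noise-free sketch of the adaptive loop described above. The hypothesis names, priors, and predictions are illustrative toys; the thesis's EC2 objective additionally handles noisy responses and restricts edges to hypothesis pairs in different equivalence classes, whereas here every distinct pair is treated as an edge:

```python
import itertools

def ec2_choose_test(priors, predictions, tests):
    """Pick the test expected to cut the most prior mass of edges, where an
    edge links two hypotheses whose predicted responses the test can
    distinguish (a simplified, deterministic EC2-style score)."""
    def score(t):
        return sum(priors[h1] * priors[h2]
                   for h1, h2 in itertools.combinations(priors, 2)
                   if predictions[h1][t] != predictions[h2][t])
    return max(tests, key=score)

def bayes_update(priors, predictions, test, observed):
    """Noise-free posterior: eliminate inconsistent hypotheses, renormalize."""
    post = {h: (p if predictions[h][test] == observed else 0.0)
            for h, p in priors.items()}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

# Three toy theories predicting binary choices on two candidate tests.
priors = {"EV": 0.5, "PT": 0.3, "CRRA": 0.2}
predictions = {"EV":   {"t1": 0, "t2": 0},
               "PT":   {"t1": 0, "t2": 1},
               "CRRA": {"t1": 1, "t2": 1}}
best = ec2_choose_test(priors, predictions, ["t1", "t2"])  # -> "t2"
posterior = bayes_update(priors, predictions, best, observed=1)
```

Running the loop, "t2" wins because it separates the highest-prior pairs; observing response 1 eliminates "EV" and renormalizes the remaining mass, which would then inform the next test.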
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility of strategic manipulation: subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation can be ruled out because it is infeasible in practice and because we find no signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models of quasi-hyperbolic (α, β) discounting and fixed-cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild", paying particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity alone explains. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion, and strategies for competitive pricing.
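A minimal sketch of the kind of model described: a multinomial logit in which utility is reference-dependent, with prices above the reference weighted more heavily (losses) than prices below it (gains). The function names, λ = 2.25, and the toy prices are illustrative assumptions, not the thesis's estimates:

```python
import math

def gain_loss(price: float, ref: float, eta: float = 1.0, lam: float = 2.25) -> float:
    """Reference-dependent utility component: paying below the reference
    price is a gain; paying above it is a loss amplified by lam > 1."""
    diff = ref - price  # positive -> gain, negative -> loss
    return eta * diff if diff >= 0 else lam * eta * diff

def choice_probs(prices, refs, beta: float = 1.0):
    """Multinomial-logit choice probabilities with loss-averse utility."""
    utils = [-beta * p + gain_loss(p, r) for p, r in zip(prices, refs)]
    z = sum(math.exp(u) for u in utils)
    return [math.exp(u) / z for u in utils]

# A $1 discount off a $10 reference boosts demand for item 1 beyond what
# the lower price alone explains (baseline: the sale price *is* the reference).
p_discounted = choice_probs([9.0, 10.0], refs=[10.0, 10.0])[0]
p_baseline = choice_probs([9.0, 10.0], refs=[9.0, 10.0])[0]
```

The gap between `p_discounted` and `p_baseline` is the "excess" demand that a purely price-elastic model cannot reproduce, which is the signature tested against the retailer data.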
In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data and encourage combined lab-field experiments.