968 results for Aggregate ichthyofauna
Abstract:
Ichthyofauna of the coastal (<10 m depth) habitat of the South Atlantic Bight was investigated between Cape Fear, North Carolina, and the St. Johns River, Florida. Trawl collections from four nonconsecutive seasons in the period July 1980 to December 1982 indicated that the fish community is dominated by the family Sciaenidae, particularly juvenile forms. Spot (Leiostomus xanthurus) and Atlantic croaker (Micropogonias undulatus) were the two most abundant species and dominated catches during all seasons. Atlantic menhaden (Brevoortia tyrannus) was also very abundant, but only seasonally (winter and spring) dominant in the catches. Elasmobranch fishes, especially rajiforms and carcharhinids, contributed much of the biomass of fishes collected. Total fish abundance was greatest in winter and lowest in summer and was influenced by the seasonality of Atlantic menhaden and Atlantic croaker in the catches. Biomass was highest in spring and lowest in summer and was influenced by the biomass of spot. Fish density ranged from 321 individuals and 12.2 kg per hectare to 746 individuals and 25.2 kg per hectare. Most species ranged widely throughout the bight and showed some evidence of seasonal migration. Species assemblages were dominated by ubiquitous year-round residents of the coastal waters of the bight. Diversity (H') was highest in summer and appeared to be influenced by the evenness of the distribution of individuals among species. (PDF file contains 56 pages.)
Abstract:
This resource can be particularly helpful to students taking the Intermediate Macroeconomics course, which corresponds to the second year of the current Degree in Economics at the University of the Basque Country (UPV/EHU). The resource consists of eight chapters of multiple-choice questions. For each question, the user is asked to select the correct answer. The tool then returns all the correct answers for the whole test, allowing the user to check the validity of his or her answers. A remarkable feature of the tool is that it has been produced in three versions, one for each of the three languages (Spanish, Basque, and English) in which the subject is taught at the UPV/EHU.
Abstract:
The San Francisco Bay Conservation and Development Commission (BCDC), in continued partnership with the San Francisco Bay Long-Term Management Strategy (LTMS) agencies, is undertaking the development of a Regional Sediment Management Plan for the San Francisco Bay estuary and its watershed (estuary). Regional sediment management (RSM) is the integrated management of littoral, estuarine, and riverine sediments to achieve balanced and sustainable solutions to sediment-related needs. Regional sediment management recognizes sediment as a resource: sediment processes are important components of coastal and riverine systems that are integral to environmental and economic vitality. It relies on understanding the sediment system as a whole and forecasting the long-range effects of management actions when making local project decisions. In the San Francisco Bay estuary, the sediment system includes the Sacramento-San Joaquin Delta, the bay, its local tributaries, and the nearshore coastal littoral cell. Sediment flows from the top of the watershed, much like water, to the coast, passing through rivers, marshes, and embayments on its way to the ocean. Like water, sediment is vital to these habitats and their inhabitants, providing nutrients and the building material for the habitat itself. When sediment erodes excessively or is impounded behind structures, the sediment system becomes imbalanced: rivers become clogged or, conversely, shorelines, wetlands, and subtidal habitats erode. The sediment system continues to change in response to both natural processes and human activities, including climate change and shoreline development. Human activities that influence the sediment system include flood protection programs, watershed management, navigational dredging, aggregate mining, shoreline development, terrestrial, riverine, wetland, and subtidal habitat restoration, and beach nourishment. Recent scientific analysis indicates that the San Francisco Bay estuary system is changing from one that was sediment rich to one that is erosional. Such changes, in conjunction with increasing sea level rise due to climate change, require that the estuary sediment and sediment transport system be managed as a single unit. To better manage the system, its components, and human uses of the system, additional research and knowledge of the system are needed. Fortunately, new sediment science and modeling tools provide opportunities for a vastly improved understanding of the sediment system, predictive capabilities, and analysis of the potential individual and cumulative impacts of projects. As science informs management decisions, human activities and management strategies may need to be modified to protect and provide for existing and future infrastructure and ecosystem needs. (PDF contains 3 pages)
Resumo:
Part I.
We have developed a technique for measuring the depth-time history of rigid-body penetration into brittle materials (hard rocks and concretes) under a deceleration of ~10^5 g. The technique includes bar-coded projectile, sabot-projectile separation, detection, and recording systems. Because the technique can give very dense data on the penetration depth-time history, the penetration velocity can be deduced. Error analysis shows that the technique has a small intrinsic error of ~3-4% in time during penetration, and 0.3 to 0.7 mm in penetration depth. A series of 4140 steel projectile penetration experiments into G-mixture mortar targets was conducted using the Caltech 40 mm gas/powder gun in the velocity range of 100 to 500 m/s.
We report, for the first time, the whole depth-time history of rigid-body penetration into brittle materials (the G-mixture mortar) under 10^5 g deceleration. Based on the experimental results, including the penetration depth-time history, damage of recovered target and projectile materials, and theoretical analysis, we find:
1. Target materials are damaged via compaction in the region in front of a projectile and via brittle radial and lateral crack propagation in the region surrounding the penetration path. The results suggest that expected cracks in front of penetrators may be stopped by a comminuted region that is induced by wave propagation. Aggregate erosion on the projectile lateral surface is < 20% of the final penetration depth. This result suggests that the effect of lateral friction on the penetration process can be ignored.
2. Final penetration depth, P_max, scales linearly with the initial projectile energy per unit cross-sectional area, e_s, when targets are intact after impact. Based on the experimental data on the mortar targets, the relation is P_max (mm) = 1.15 e_s (J/mm^2) + 16.39 (see the sketch after this list).
3. Estimation of the energy needed to create a unit of penetration volume suggests that the average pressure acting on the target material during penetration is ~10 to 20 times higher than the unconfined strength of the target material under quasi-static loading, and 3 to 4 times higher than the highest possible pressure due to friction and material strength and its rate dependence. In addition, the experimental data show that the interaction between cracks and the target free surface significantly affects the penetration process.
4. Based on the fact that the penetration duration, t_max, increases slowly with e_s and is approximately independent of the projectile radius, the dependence of t_max on projectile length is suggested to be described by t_max (μs) = 2.08 e_s (J/mm^2) + 349.0 m/(πR^2), in which m is the projectile mass in grams and R is the projectile radius in mm. The prediction from this relation is in reasonable agreement with the experimental data for different projectile lengths.
5. Deduced penetration velocity-time histories suggest that the whole penetration history is divided into three stages: (1) an initial stage, in which the projectile velocity change is small due to the very small contact area between the projectile and target materials; (2) a steady penetration stage, in which the projectile velocity continues to decrease smoothly; (3) a penetration stop stage, in which the projectile deceleration jumps up as the velocity approaches a critical value of ~35 m/s.
6. The deduced average deceleration, a, in the steady penetration stage for projectiles with the same dimensions is found to be a (g) = 192.4 v + 1.89 × 10^4, where v is the initial projectile velocity in m/s. The average pressure acting on target materials during penetration is estimated to be very comparable to the shock wave pressure.
7. A similarity of the penetration process is found, described by a relation between normalized penetration depth, P/P_max, and normalized penetration time, t/t_max: P/P_max = f(t/t_max), where f is a function of t/t_max. After f(t/t_max) is determined using experimental data for projectiles of 150 mm length, the penetration depth-time history for projectiles of 100 mm length predicted by this relation is in good agreement with the experimental data. This similarity also predicts that the average deceleration increases with decreasing projectile length, which is verified by the experimental data.
8. Based on the penetration process analysis and the present data, a first-principles model for rigid-body penetration is suggested. The model incorporates models for the contact area between projectile and target materials, the friction coefficient, a penetration stop criterion, and the normal stress on the projectile surface. The most important assumptions used in the model are: (1) the penetration process can be treated as a series of impact events, so the pressure normal to the projectile surface is estimated using the Hugoniot relation of the target material; (2) the necessary condition for penetration is that the pressure acting on target materials is not lower than the Hugoniot elastic limit; (3) the friction force on the projectile lateral surface can be ignored due to cavitation during penetration. All parameters involved in the model are determined from independent experimental data. The penetration depth-time histories predicted from the model are in good agreement with the experimental data.
9. Based on planar impact and previous quasi-static experimental data, the strain-rate dependence of the mortar compressive strength is described by σ_f/σ_f0 = exp(0.0905 (log(ε̇/ε̇_0))^1.14) in the strain-rate range of 10^-7/s to 10^3/s (σ_f0 and ε̇_0 are the reference compressive strength and strain rate, respectively). The non-dispersive Hugoniot elastic wave in the G-mixture has an amplitude of ~0.14 GPa and a velocity of ~4.3 km/s.
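As a quick worked example, the empirical relations in items 2 and 4 above can be evaluated numerically. The following Python sketch uses only the constants quoted in the abstract; the function names and the example projectile are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch: evaluating the empirical penetration relations in items 2 and 4.
# Constants come from the abstract; names and the example projectile are invented.
import math

def final_penetration_depth_mm(es):
    """P_max (mm) = 1.15 * e_s + 16.39, with e_s in J/mm^2 (intact targets)."""
    return 1.15 * es + 16.39

def penetration_duration_us(es, mass_g, radius_mm):
    """t_max (us) = 2.08 * e_s + 349.0 * m / (pi * R^2)."""
    return 2.08 * es + 349.0 * mass_g / (math.pi * radius_mm ** 2)

# Hypothetical 100 g, 10 mm radius projectile at 300 m/s:
mass_g, radius_mm, v_m_s = 100.0, 10.0, 300.0
energy_J = 0.5 * (mass_g / 1000.0) * v_m_s ** 2          # kinetic energy
es = energy_J / (math.pi * radius_mm ** 2)               # J/mm^2 over the cross-section
print(final_penetration_depth_mm(es))                    # ~32.9 mm
print(penetration_duration_us(es, mass_g, radius_mm))    # ~141 us
```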
Part II.
Stress wave profiles in vitreous GeO2 were measured using piezoresistance gauges in the pressure range of 5 to 18 GPa under planar plate and spherical projectile impact. Experimental data show that the response of vitreous GeO2 to planar shock loading can be divided into three stages: (1) a ramp elastic precursor with a peak amplitude of 4 GPa and a peak particle velocity of 333 m/s, in which the wave velocity decreases from the initial longitudinal elastic wave velocity of 3.5 km/s to 2.9 km/s at 4 GPa; (2) a ramp wave with an amplitude of 2.11 GPa that follows the precursor when the peak loading pressure is 8.4 GPa, in which the wave velocity drops below the bulk wave velocity; (3) a shock wave achieving the final shock state, which forms when the peak pressure is > 6 GPa. The Hugoniot relation is D = 0.917 + 1.711u (km/s), using the present data and the data of Jackson and Ahrens [1979], when the shock wave pressure is between 6 and 40 GPa for ρ_0 = 3.655 g/cm^3. Based on the present data, the phase change from 4-fold to 6-fold coordination of Ge^4+ with O^2- in vitreous GeO2 occurs in the pressure range of 4 to 15 ± 1 GPa under planar shock loading. Comparison of the shock loading data for fused SiO2 with those for vitreous GeO2 demonstrates that the transformations to the rutile structure in the two media are similar. The Hugoniots of vitreous GeO2 and fused SiO2 are found to coincide approximately if the pressure in fused SiO2 is scaled by the ratio of fused SiO2 to vitreous GeO2 density. This result, as well as the shared structure, provides the basis for considering vitreous GeO2 as an analogous material to fused SiO2 under shock loading. Experimental results from the spherical projectile impact demonstrate: (1) the supported elastic shock in fused SiO2 decays less rapidly than a linear elastic wave when the elastic wave stress amplitude is higher than 4 GPa, whereas the supported elastic shock in vitreous GeO2 decays faster than a linear elastic wave; (2) in vitreous GeO2, unsupported shock waves with peak pressures in the phase transition range (4-15 GPa) decay with propagation distance, x, as ∝ x^-3.35, close to the prediction of Chen et al. [1998]. Based on a simple analysis of spherical wave propagation, we find that the different decay rates of a spherical elastic wave in fused SiO2 and vitreous GeO2 are predictable on the basis of the compressibility variation with stress under one-dimensional strain conditions in the two materials.
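As an illustration, the quoted Hugoniot relation can be combined with the standard Rankine-Hugoniot momentum jump condition, P = ρ_0 D u, to estimate shock pressure at a given particle velocity. The sketch below assumes that standard jump condition; function names are illustrative.

```python
# Minimal sketch: shock pressure along the vitreous GeO2 Hugoniot quoted above,
# D = 0.917 + 1.711*u (km/s), via the standard jump condition P = rho0 * D * u.
# Units: g/cm^3 * (km/s)^2 = GPa, which keeps the arithmetic simple.
RHO0 = 3.655  # g/cm^3, initial density of vitreous GeO2 (from the abstract)

def shock_velocity(u_km_s):
    return 0.917 + 1.711 * u_km_s

def shock_pressure_gpa(u_km_s):
    return RHO0 * shock_velocity(u_km_s) * u_km_s

# Particle velocities chosen so pressures fall in the quoted 6-40 GPa range:
for u in (0.8, 1.0, 2.0):                        # km/s
    print(u, round(shock_pressure_gpa(u), 1))    # ~6.7, 9.6, 31.7 GPa
```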
Abstract:
Microbial sulfur-cycling communities were investigated in two methane-rich ecosystems, terrestrial mud volcanoes (TMVs) and marine methane seeps, in order to investigate the niches and processes that are likely central to the functioning of these crucial ecosystems. Terrestrial mud volcanoes represent geochemically diverse habitats with varying sulfur sources, and yet sulfur cycling in these environments remains largely unexplored. Here we characterized the sulfur-metabolizing microorganisms and their activity in four TMVs in Azerbaijan, supporting the presence of active sulfur-oxidizing and sulfate-reducing guilds in all four TMVs across a range of physicochemical conditions, with the diversity of these guilds being unique to each TMV. We also found evidence for the anaerobic oxidation of methane coupled to sulfate reduction, a process which we explored further in the more tractable marine methane seeps. Diverse associations between methanotrophic archaea (ANME) and sulfate-reducing bacterial groups (SRB) often co-occur in marine methane seeps; however, the ecophysiology of these different symbiotic associations had not been examined. Using a combination of molecular, geochemical, and fluorescence in situ hybridization coupled to nano-scale secondary ion mass spectrometry (FISH-NanoSIMS) analyses of in situ seep sediments and methane-amended sediment incubations from diverse locations, we show that the unexplained diversity in SRB associated with ANME cells can be at least partially explained by the preferential nitrate utilization of one particular partner, the seepDBB. This discovery reveals that nitrate is likely an important factor in community structuring and diversity in marine methane seep ecosystems. The thesis concludes with a study of the dynamics between ANME and their associated SRB partners. We inhibited sulfate reduction and followed the metabolic processes of the community, as well as the effect on ANME/SRB aggregate composition and growth at the cellular level, by tracking 15N substrate incorporation into biomass using FISH-NanoSIMS. We revealed that while sulfate-reducing bacteria gradually disappeared over time in incubations with an SRB inhibitor, the ANME archaea persisted in the form of ANME-only aggregates, which are capable of little to no growth when sulfate reduction is inhibited. These data suggest that ANME are not able to synthesize new proteins when sulfate reduction is inhibited.
Abstract:
A unique chloroplast Signal Recognition Particle (SRP) in green plants is primarily dedicated to the post-translational targeting of light-harvesting chlorophyll-a/b binding (LHC) proteins. Our study of the thermodynamics and kinetics of the GTPases of the system demonstrates that GTPase complex assembly and activation are highly coupled in the chloroplast GTPases, suggesting that they may forego the GTPase activation step as a key regulatory point. This reflects adaptations of the chloroplast SRP to the delivery of its unique substrate protein. Devotion to one highly hydrophobic family of proteins may also have allowed the chloroplast SRP system to evolve an efficient chaperone in the cpSRP43 subunit. To understand the mechanism of disaggregation, we showed that LHC proteins form micellar, disc-shaped aggregates that present a recognition motif (L18) on the aggregate surface. Further molecular genetic and structure-activity analyses reveal that the action of cpSRP43 can be dissected into two steps: (i) initial recognition of L18 on the aggregate surface; and (ii) aggregate remodeling, during which highly adaptable binding interactions of cpSRP43 with hydrophobic transmembrane domains of the substrate protein compete with the packing interactions within the aggregate. We also tested the adaptability of cpSRP43 for alternative substrates, specifically in attempts to improve membrane protein expression and to inhibit amyloid beta fibrillization. These preliminary results attest to cpSRP43's potential as a molecular chaperone and provide the impetus for further engineering endeavors to address problems that stem from protein aggregation.
Abstract:
Since 1991, the aggregate biomass of fish stocks inhabiting the West Greenland shelf has stagnated at its lowest level. The latest survey results of cruise no. 152, conducted by FRV 'Walther Herwig III', do not indicate any improvement in the state of the stocks, although no fishing effort has recently been directed towards groundfish. The cod stock again showed a record low and is presently dominated by recruits of the 1991 and 1993 year classes. Both year classes are considered to be weak, and the cod stock is below the 'minimum biologically acceptable level'. Consequently, an increase in stock abundance is not expected in either the short or the long term. Other ecologically or economically important fish species, American plaice, redfish, wolffish, and starry skate, were also found to have minimum stock abundances. By-catch estimates of juvenile groundfish taken by the shrimp fishery, which operates on traditional grounds of the cod and redfish fisheries, are indispensable. Analysis of climatological data from Nuuk/West Greenland indicates that the climate during the past forty years was characterized by two decades of anomalously warm conditions, followed by cooling that has dominated the climate since 1969. Anomalously cold events were encountered during 1983, 1984 and during 1992, 1993. Similar to the air temperature anomalies, autumn temperatures of the ocean surface layer indicate cold and warm periods during the past thirty years. In contrast to the colder-than-normal atmospheric conditions during the early nineties, however, the ocean conditions indicate intermediate warming.
Abstract:
For some time now, the Latino voice has been gradually gaining strength in American politics, particularly in such states as California, Florida, Illinois, New York, and Texas, where large numbers of Latino immigrants have settled and large numbers of electoral votes are at stake. Yet the issues public officials in these states espouse and the laws they enact often do not coincide with the interests and preferences of Latinos. The fact that Latinos in California and elsewhere have not been able to influence the political agenda in a way that is commensurate with their numbers may reflect their failure to participate fully in the political process by first registering to vote and then consistently turning out on election day to cast their ballots.
To understand Latino voting behavior, I first examine Latino political participation in California during the ten general elections of the 1980s and 1990s, seeking to understand what percentage of the eligible Latino population registers to vote, with what political party they register, how many registered Latinos go to the polls on election day, and what factors might increase their participation in politics. To ensure that my findings are not unique to California, I also consider Latino voter registration and turnout in Texas for the five general elections of the 1990s and compare these results with my California findings.
I offer a new approach to studying Latino political participation in which I rely on county-level aggregate data, rather than on individual survey data, and employ the ecological inference method of generalized bounds. I calculate and compare Latino and white voting-age populations, registration rates, turnout rates, and party affiliation rates for California's fifty-eight counties. Then, in a secondary grouped logit analysis, I consider the factors that influence these Latino and white registration, turnout, and party affiliation rates.
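As a rough illustration of the grouped-logit step, the sketch below fits county-level registration shares with a binomial GLM. The data file and covariate names are hypothetical; the thesis's actual variables and the generalized-bounds stage are not reproduced here.

```python
# Hedged sketch of a grouped (binomial) logit on county-level aggregates.
# File and column names are invented for illustration.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("county_data.csv")  # hypothetical: one row per county

# Grouped response: counts of successes and failures per county.
y = df[["latino_registered", "latino_unregistered"]]

# Illustrative covariates for registration rates.
X = sm.add_constant(df[["median_age", "pct_college", "median_income"]])

result = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(result.summary())
```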
I find that California Latinos register and turn out at substantially lower rates than whites do and that these rates are more volatile than those of whites. I find that Latino registration is motivated predominantly by age and education, with older and more educated Latinos being more likely to register. Motor voter legislation, which was passed to ease and simplify the registration process, has not encouraged Latino registration. I find that turnout among California's Latino voters is influenced primarily by issues, income, educational attainment, and the size of the Spanish-speaking communities in which they reside. Although language skills may be an obstacle to political participation for an individual, the number of Spanish-speaking households in a community does not encourage or discourage registration but may encourage turnout, suggesting that cultural and linguistic assimilation may not be the entire answer.
With regard to party identification, I find that Democrats can expect a steady Latino political identification rate between 50 and 60 percent, while Republicans attract 20 to 30 percent of Latino registrants. I find that education and income are the dominant factors in determining Latino political party identification, which appears to be no more volatile than that of the larger electorate.
Next, when I consider registration and turnout in Texas, I find that Latino registration rates are nearly equal to those of whites but that Texas Latino turnout rates are volatile and substantially lower than those of whites.
Low turnout rates among Latinos and the volatility of these rates may explain why Latinos in California and Texas have had little influence on the political agenda even though their numbers are large and increasing. Simply put, the voices of Latinos are little heard in the halls of government because they do not turn out consistently to cast their votes on election day.
While these findings suggest that there may not be any short-term or quick fixes to Latino participation, they also suggest that Latinos should be encouraged to participate more fully in the political process and that additional education may be one means of achieving this goal. Candidates should speak more directly to the issues that concern Latinos. Political parties should view Latinos as crossover voters rather than as potential converts. In other words, if Latinos were "a sleeping giant," they may now be a still-drowsy leviathan waiting to be wooed by either party's persuasive political messages and relevant issues.
Abstract:
A comprehensive study was made of the flocculation of dispersed E. coli bacterial cells by the cationic polymer polyethyleneimine (PEI). The three objectives of this study were to determine the primary mechanism involved in the flocculation of a colloid with an oppositely charged polymer, to determine quantitative correlations between four commonly-used measurements of the extent of flocculation, and to record the effect of varying selected system parameters on the degree of flocculation. The quantitative relationships derived for the four measurements of the extent of flocculation should be of direct assistance to the sanitary engineer in evaluating the effectiveness of specific coagulation processes.
A review of prior statistical mechanical treatments of adsorbed polymer configuration revealed that at low degrees of surface site coverage, an oppositely-charged polymer molecule is strongly adsorbed to the colloidal surface, with only short loops or end sequences extending into the solution phase. Even for high-molecular-weight PEI species, these extensions from the surface are theorized to be less than 50 Å in length. Although the radii of gyration of the five PEI species investigated were found to be large enough to form interparticle bridges, the low surface site coverage at optimum flocculation doses indicates that the predominant mechanism of flocculation is adsorption coagulation.
The effectiveness of the high-molecular-weight PEI species in producing rapid flocculation at small doses is attributed to the formation of a charge mosaic on the oppositely-charged E. coli surfaces. The large adsorbed PEI molecules not only neutralize the surface charge at the adsorption sites, but also cause charge reversal with excess cationic segments. The alignment of these positive surface patches with negative patches on approaching cells results in strong electrostatic attraction in addition to a reduction of the double-layer interaction energies. The comparative ineffectiveness of low-molecular-weight PEI species in producing E. coli flocculation is caused by the size of the individual molecules, which is insufficient to both neutralize and reverse the negative E. coli surface charge. Consequently, coagulation produced by low-molecular-weight species is attributed solely to the reduction of double-layer interaction energies via adsorption.
Electrophoretic mobility experiments supported the above conclusions, since only the high-molecular-weight species were able to reverse the mobility of the E. coli cells. In addition, electron microscope examination of the seam of agglutination between E. coli cells flocculated by PEI revealed tightly bound cells, with intercellular separation distances of less than 100-200 Å in most instances. This intercellular separation is partially due to cell shrinkage during preparation of the electron micrographs.
The extent of flocculation was measured as a function of PEI molecular weight, PEI dose, and the intensity of reactor chamber mixing. Neither the intensity of mixing, within the common treatment practice limits, nor the time of mixing for up to four hours appeared to play any significant role in either the size or number of E. coli aggregates formed. The extent of flocculation was highly molecular-weight dependent: the high-molecular-weight PEI species produced the larger aggregates, the greater turbidity reductions, and the higher filtration flow rates. The PEI dose required for optimum flocculation decreased as the species molecular weight increased. At large doses of high-molecular-weight species, redispersion of the macroflocs occurred, caused by excess adsorption of cationic molecules. The excess adsorption reversed the surface charge on the E. coli cells, as recorded by electrophoretic mobility measurements.
Successful quantitative comparisons were made between changes in suspension turbidity with flocculation and the corresponding changes in aggregate size distribution. E. coli aggregates were treated as coalesced spheres, with Mie scattering coefficients determined for spheres in the anomalous diffraction regime. Good quantitative comparisons were also found to exist between the reduction in refiltration time and the reduction of the total colloid surface area caused by flocculation. As with the turbidity measurements, a coalesced-sphere model was used, since the equivalent spherical volume is the only information available from the Coulter particle counter. However, the coalesced-sphere model was not applicable to electrophoretic mobility measurements. The aggregates produced at each PEI dose moved at approximately the same velocity, almost independently of particle size.
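For reference, the anomalous diffraction regime mentioned above admits a closed-form extinction efficiency for spheres (van de Hulst): Q_ext = 2 - (4/ρ) sin ρ + (4/ρ^2)(1 - cos ρ), with ρ = 2x(m - 1). The sketch below evaluates it; the example inputs are illustrative assumptions, not the study's optical constants.

```python
# Hedged sketch: van de Hulst anomalous-diffraction extinction efficiency for a
# sphere with relative refractive index m close to 1. Example values are invented.
import math

def q_ext_anomalous(diameter_um, wavelength_um, m_rel):
    x = math.pi * diameter_um / wavelength_um   # size parameter
    rho = 2.0 * x * (m_rel - 1.0)               # phase-shift parameter
    if rho == 0.0:
        return 0.0
    return 2.0 - (4.0 / rho) * math.sin(rho) + (4.0 / rho ** 2) * (1.0 - math.cos(rho))

# Hypothetical 2 um floc at 550 nm with m = 1.05:
print(q_ext_anomalous(2.0, 0.55, 1.05))
```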
PEI was found to be an effective flocculant of E. coli cells at weight ratios of 1 mg PEI : 100 mg E. coli. While PEI itself is toxic to E. coli at these levels, similar cationic polymers could be effectively applied in water and wastewater treatment facilities to enhance sedimentation and filtration characteristics.
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. Now there are a plethora of models, based on different assumptions, applicable in differing contextual settings, and selecting the right model to use tends to be an ad-hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioral theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
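To make the adaptive loop concrete, here is a minimal sketch of sequential Bayesian test selection. Note the hedges: a plain expected-entropy-reduction (information gain) score stands in for the EC2 objective, which the abstract reports works better under noise, and all names are illustrative.

```python
# Simplified sketch of an adaptive design loop: maintain a posterior over
# candidate theories, greedily pick the test whose outcome is expected to be
# most informative, observe, update. Stand-in scoring; not the EC2 objective.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def expected_posterior_entropy(prior, lik):
    # lik[h, o] = P(outcome o | theory h) for one candidate test
    p_out = prior @ lik                      # marginal outcome probabilities
    h = 0.0
    for o, po in enumerate(p_out):
        if po > 0:
            h += po * entropy(prior * lik[:, o] / po)
    return h

def choose_test(prior, liks):
    # liks[t] is the outcome model of candidate test t
    scores = [expected_posterior_entropy(prior, lik) for lik in liks]
    return int(np.argmin(scores))            # greedy: lowest expected entropy

def update_posterior(prior, lik, outcome):
    post = prior * lik[:, outcome]
    return post / post.sum()
```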
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, demand for it will be greater than its price elasticity alone explains. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
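A minimal sketch of how a loss-averse, reference-dependent utility can sit inside a multinomial logit demand model is shown below. The loss-aversion coefficient, the reference-price rule, and all names are illustrative assumptions, not the retailer's model or the thesis's estimates.

```python
# Hedged sketch: reference-dependent utility with loss aversion inside a logit
# choice model. Parameter values and the reference-price rule are invented.
import numpy as np

def utility(price, ref_price, value, eta=1.0, lam=2.25):
    # Consumption utility (value - price) plus a gain-loss term around the
    # reference price; losses are weighted lam times more than gains.
    gain_loss = np.where(price <= ref_price,
                         eta * (ref_price - price),
                         -eta * lam * (price - ref_price))
    return value - price + gain_loss

def choice_probs(prices, ref_prices, values):
    u = utility(np.asarray(prices), np.asarray(ref_prices), np.asarray(values))
    e = np.exp(u - u.max())                 # stable softmax
    return e / e.sum()

# Item 0 just came off a discount (reference price below current price),
# item 1 is a close substitute at its usual price:
print(choice_probs([10.0, 10.0], [8.0, 10.0], [12.0, 11.5]))
```

With these invented numbers, the off-discount item's perceived loss shifts demand sharply toward the substitute, which is the qualitative pattern the field test looks for.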
In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
We report a method for the selective introduction of fluorescent Ag nanoclusters in glass. Extinction and photoluminescence spectra show that a fraction of the Ag atoms are generated through femtosecond-laser-induced multiphoton reduction and then aggregate to form Ag nanoclusters after heat treatment. Red luminescence from the irradiated region is observed under blue or green laser excitation. The fluorescence can be attributed to interband transitions within the Ag nanoclusters. This method provides a novel route to fabricating fluorescent nanomaterials in 3D transparent materials.
Abstract:
47 p.
Abstract:
40 p.
Abstract:
Real-time demand response is essential for handling the uncertainties of renewable generation. Traditionally, demand response has focused on large industrial and commercial loads; however, it is expected that a large number of small residential loads, such as air conditioners, dishwashers, and electric vehicles, will also participate in the coming years. The electricity consumption of these smaller loads, which we call deferrable loads, can be shifted over time and thus be used (in aggregate) to compensate for the random fluctuations in renewable generation.
In this thesis, we propose a real-time distributed deferrable load control algorithm to reduce the variance of the aggregate load (load minus renewable generation) by shifting the power consumption of deferrable loads to periods with high renewable generation. The algorithm is model-predictive in nature: at every time step, it minimizes the expected variance-to-go with updated predictions. We prove that the suboptimality of this model-predictive algorithm vanishes as the time horizon expands in an average-case analysis. Further, we prove strong concentration results on the distribution of the load variance obtained by model-predictive deferrable load control. These concentration results highlight that the typical performance of model-predictive deferrable load control is tightly concentrated around the average-case performance. Finally, we evaluate the algorithm via trace-based simulations.
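The receding-horizon step described above can be sketched as a small water-filling routine: flatten the forecast aggregate load with the deferrable energy that remains, apply only the first slot's decision, then re-solve as forecasts update. This is a hedged simplification under assumed energy and rate constraints, not the thesis's algorithm or analysis.

```python
# Minimal sketch of one model-predictive step for deferrable load control:
# schedule remaining deferrable energy to flatten the forecast aggregate load
# (base load minus renewables), then apply only the first slot's decision.
import numpy as np

def mpc_step(base_load, remaining_energy, max_rate):
    """base_load: forecast non-deferrable load minus renewables, per slot.
    remaining_energy: energy the deferrable loads still need this horizon.
    max_rate: per-slot cap on deferrable consumption."""
    base = np.asarray(base_load, dtype=float)
    # Water-filling: find the flat target level by bisection.
    lo, hi = base.min(), base.max() + remaining_energy
    for _ in range(60):
        level = 0.5 * (lo + hi)
        used = np.clip(level - base, 0.0, max_rate).sum()
        lo, hi = (level, hi) if used < remaining_energy else (lo, level)
    schedule = np.clip(level - base, 0.0, max_rate)
    return schedule[0]   # re-solve at the next step with updated forecasts

print(mpc_step([2.0, 3.0, 1.0, 4.0], remaining_energy=3.0, max_rate=2.0))  # ~1.0
```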
Abstract:
Apart from a couple of early papers in the 1600s, the development of freshwater biology as a science in Mexico began in the last century. Taxonomic studies were made especially on algae, aquatic insects, crustaceans, annelid worms and aquatic plants. The great impetus acquired by limnology in Europe and America in the first half of the 20th Century stimulated foreign researchers to come and work in Mexico. During this period the Instituto de Biologia, belonging to the Universidad Nacional Autonoma de Mexico, was created in 1930. The Institute had a section of Hydrobiology that contributed to the limnological characterization of Mexican lakes and ponds. In 1962, the Instituto Nacional de Investigaciones Biologico-Pesqueras was created to bring together the work of several institutes working on the native ichthyofauna, the restocking of reservoirs, and aquaculture.