925 results for Large-group methods


Relevance: 30.00%

Abstract:

Executive Summary: The western National Coastal Assessment (NCA-West) program of EPA, in conjunction with the NOAA National Ocean Service (NOS), conducted an assessment of the status of ecological condition of soft-sediment habitats and overlying waters along the western U.S. continental shelf, between the target depths of 30 and 120 m, during June 2003. NCA-West and NOAA/NOS partnered with the West Coast states (Washington (WA), Oregon (OR), and California (CA)) and the Southern California Coastal Water Research Project (SCCWRP) Bight ’03 program to conduct the survey. A total of 257 stations were sampled from Cape Flattery, WA, to the Mexican border using standard methods and indicators applied in previous coastal NCA projects. A key study feature was the incorporation of a stratified-random sampling design, with stations stratified by state and National Marine Sanctuary (NMS) status. Each of the three states was represented by at least 50 random stations. There were also a total of 84 random stations located within NOAA’s five NMSs along the West Coast: the Olympic Coast NMS (OCNMS), Cordell Bank NMS (CBNMS), Gulf of the Farallones NMS (GFNMS), Monterey Bay NMS (MBNMS), and Channel Islands NMS (CINMS). Collection of flatfish via hook-and-line for fish-tissue contaminant analysis was successful at 50 EMAP/NCA-West stations. Through a collaboration developed with the FRAM Division of the Northwest Fisheries Science Center, fish from an additional 63 stations in the same region and depth range were also analyzed for fish-tissue contaminants. Bottom depth throughout the region ranged from 28 m to 125 m for most stations. Two slightly deeper stations from the Southern California Bight (SCB) (131, 134 m) were included in the data set. About 44% of the survey area had sediments composed of sands (< 20% silt-clay), about 47% was composed of intermediate muddy sands (20-80% silt-clay), and about 9% was composed of muds (> 80% silt-clay). The majority of the survey area (97%) had relatively low percent total organic carbon (TOC) levels of < 2%, while a small portion (< 1%) had high TOC levels (> 5%), in a range potentially harmful to benthic fauna. Salinity of surface waters for 92% of the survey area was > 31 psu, with most stations < 31 psu associated with the Columbia River plume. Bottom salinities ranged only between 31.6 and 34.4 psu. There was virtually no difference in mean bottom salinities among states or between NMS and non-NMS stations. Temperatures of surface water (range 8.5-19.9 °C) and bottom water (range 5.8-14.7 °C) averaged several degrees higher in CA in comparison to WA and OR. The Δσt index of water-column stratification indicated that about 31% of the survey area had strong vertical stratification of the water column. The index was greatest for waters off WA and lowest for CA waters. Only about 2.6% of the survey area had surface dissolved oxygen (DO) concentrations ≤ 4.8 mg/L, and there were no values below the lower threshold (2.3 mg/L) considered harmful to the survival and growth of marine animals. Surface DO concentrations were higher in WA and OR waters than in CA, and higher in the OCNMS than in the CA sanctuaries. An estimated 94.3% of the area had bottom-water DO concentrations ≤ 4.8 mg/L, and 6.6% had concentrations ≤ 2.3 mg/L. The high prevalence of DO from 2.3 to 4.8 mg/L (85% of the survey area) is believed to be associated with the upwelling of naturally low-DO water across the West Coast shelf.
Mean TSS and transmissivity in surface waters (excluding OR due to sample problems) were slightly higher and lower, respectively, for stations in WA than for those in CA. There was little difference in mean TSS or transmissivity between NMS and non-NMS locations. Mean transmissivity in bottom waters, though higher in comparison to surface waters, showed little difference among geographic regions or between NMS and non-NMS locations. Concentrations of nitrate + nitrite, ammonium, total dissolved inorganic nitrogen (DIN), and orthophosphate (P) in surface waters tended to be highest in CA compared to WA and OR, and higher at the CA NMS stations compared to CA non-sanctuary stations. Measurements of silicate in surface waters were limited to WA and CA (exclusive of the SCB) and showed that concentrations were similar between the two states and approximately twice as high in CA sanctuaries compared to the OCNMS or non-sanctuary locations in either state. The elevated nutrient concentrations observed at CA NMS stations are consistent with the presence of strong upwelling at these sites at the time of sampling. Approximately 93% of the area had DIN/P values ≤ 16, indicative of nitrogen limitation. Mean DIN/P ratios were similar among the three states, although the mean for the OCNMS was less than half that of the CA sanctuaries or non-sanctuary locations. Concentrations of chlorophyll a in surface waters ranged from 0 to 28 μg L⁻¹, with 50% of the area having values < 3.9 μg L⁻¹ and 10% having values > 14.5 μg L⁻¹. The mean concentration of chlorophyll a for CA was less than half that of WA and OR locations, and concentrations were lowest at non-sanctuary sites in CA and highest at the OCNMS. Shelf sediments throughout the survey area were relatively uncontaminated, with the exception of a group of stations within the SCB. Overall, about 99% of the total survey area was rated in good condition (< 5 chemicals measured above corresponding effects range-low (ERL) concentrations). Only the pesticides 4,4′-DDE and total DDT exceeded corresponding effects range-median (ERM) values, all at stations in CA near Los Angeles. Ten other contaminants, including seven metals (As, Cd, Cr, Cu, Hg, Ag, Zn), 2-methylnaphthalene, low molecular weight PAHs, and total PCBs, exceeded corresponding ERLs. The most prevalent in terms of area were chromium (31%), arsenic (8%), 2-methylnaphthalene (6%), cadmium (5%), and mercury (4%). The chromium contamination may be related to natural background sources common to the region. The 2-methylnaphthalene exceedances were conspicuously grouped around the CINMS. The mercury exceedances were all at non-sanctuary sites in CA, particularly in the Los Angeles area. Concentrations of cadmium in fish tissues exceeded the lower end of EPA’s non-cancer human-health-risk range at nine of 50 EMAP/NCA-West and nine of 60 FRAM groundfish-survey stations, including a total of seven NMS stations in CA and two in the OCNMS. The human-health guidelines for all other contaminants were exceeded only for total PCBs, at one station located in WA near the mouth of the Columbia River. Benthic species richness was relatively high in these offshore assemblages, ranging from 19 to 190 taxa per 0.1-m² grab and averaging 79 taxa/grab. The high species richness was reflected over large areas of the shelf and was nearly three times greater than levels observed in estuarine samples along the West Coast (e.g., the NCA-West estuarine mean of 26 taxa/grab).
Mean species richness was highest off CA (94 taxa/grab) and lower off OR and WA (55 and 56 taxa/grab, respectively). Mean species richness was very similar between sanctuary and non-sanctuary stations for both the CA and OR/WA regions. Mean diversity index H′ was highest in CA (5.36) and lowest in WA (4.27). There were no major differences in mean H′ between sanctuary and non-sanctuary stations for either the CA or OR/WA regions. A total of 1,482 taxa (1,108 to species) and 99,135 individuals were identified region-wide. Polychaetes, crustaceans, and molluscs were the dominant taxa, both by percent abundance (59%, 17%, 12%, respectively) and percent species (44%, 25%, 17%, respectively). There were no major differences in the percent composition of benthic communities among states or between NMSs and corresponding non-sanctuary sites. Densities averaged 3,788 m⁻², about 30% of the average density for West Coast estuaries. Mean density of benthic fauna in the present offshore survey, averaged by state, was highest in CA (4,351 m⁻²) and lowest in OR (2,310 m⁻²). Mean densities were slightly higher at NMS stations than at non-sanctuary stations for both the CA and OR/WA regions. The 10 most abundant taxa were the polychaetes Mediomastus spp., Magelona longicornis, Spiophanes berkeleyorum, Spiophanes bombyx, Spiophanes duplex, and Prionospio jubata; the bivalve Axinopsida serricata; the ophiuroid Amphiodia urtica; the decapod Pinnixa occidentalis; and the ostracod Euphilomedes carcharodonta. Mediomastus spp. and A. serricata were the two most abundant taxa overall. Although many of these taxa have broad geographic distributions throughout the region, the same species were not ranked among the 10 most abundant taxa consistently across states. The closest similarities among states were between OR and WA. At least half of the 10 most abundant taxa in NMSs were also dominant in corresponding non-sanctuary waters. Many of the abundant benthic species have wide latitudinal distributions along the West Coast shelf, with some species ranging from southern CA into the Gulf of Alaska or even the Aleutians. Of the 39 taxa on the list of 50 most abundant taxa that could be identified to species level, 85% have been reported at least once from estuaries of CA, OR, or WA exclusive of Puget Sound. Such broad latitudinal and estuarine distributions are suggestive of wide habitat tolerances. Thirteen (1.2%) of the 1,108 identified species are nonindigenous, with another 121 species classified as cryptogenic (of uncertain origin) and 208 species unclassified with respect to potential invasiveness. Despite uncertainties of classification, the number and densities of nonindigenous species appear to be much lower on the shelf than in the estuarine ecosystems of the Pacific Coast. Spionid polychaetes and the ampharetid polychaete Anobothrus gracilis were a major component of the nonindigenous species collected on the shelf. NOAA’s five NMSs along the West Coast of the U.S. appeared to be in good ecological condition, based on the measured indicators, with no evidence of major anthropogenic impacts or unusual environmental qualities compared to nearby non-sanctuary waters. Benthic communities in sanctuaries resembled those in corresponding non-sanctuary waters, with similarly high levels of species richness and diversity and a low incidence of nonindigenous species. Most oceanographic features were also similar between sanctuary and non-sanctuary locations.
Exceptions (e.g., higher concentrations of some nutrients in sanctuaries along the CA coast) appeared to be attributable to natural upwelling events in the area at the time of sampling. In addition, sediments within the sanctuaries were relatively uncontaminated, with none of the samples having any measured chemical in excess of ERM values. The ERL value for chromium was exceeded in sediments at the OCNMS, but at a much lower percentage of stations (four of 30) compared to WA and OR non-sanctuary areas (31 of 70 stations). ERL values were exceeded for arsenic, cadmium, chromium, 2-methylnaphthalene, low molecular weight PAHs, total DDT, and 4,4′-DDE at multiple sites within the CINMS. However, cases where total DDT, 4,4′-DDE, and chromium exceeded the ERL values were notably less prevalent at the CINMS than in non-sanctuary waters of CA. In contrast, 2-methylnaphthalene above the ERL was much more prevalent in sediments at the CINMS compared to non-sanctuary waters off the coast of CA. While there are natural background sources of PAHs from oil seeps throughout the SCB, this does not explain the higher incidence of 2-methylnaphthalene contamination around the CINMS. Two stations in the CINMS also had levels of TOC (> 5%) potentially harmful to benthic fauna, though none of these sites exhibited symptoms of impaired benthic condition. This study showed no major evidence of extensive biological impacts linked to measured stressors. There were only two stations, both in CA, where low numbers of benthic species, diversity, or total faunal abundance co-occurred with high sediment contamination or low DO in bottom water. Such general lack of concordance suggests that these offshore waters are currently in good condition, with the lower-end values of the various biological attributes representing parts of a normal reference range controlled by natural factors. Results of multiple linear regression, performed using full-model procedures to test for effects of combined abiotic environmental factors, suggested that latitude and depth had significant influences region-wide on three benthic variables: species richness, diversity, and density. Latitude had a significant inverse influence on all three (i.e., values increased as latitude decreased; p < 0.01), while depth had a significant direct influence on diversity (p < 0.001) and an inverse effect on density (p < 0.01). None of these variables varied significantly in relation to sediment % fines (at p < 0.1), although in general there was a tendency for muddier sediments (higher % fines) to have lower species richness and diversity and higher densities than coarser sediments. Alternatively, it is possible that for some of these sites the lower values of benthic variables reflect symptoms of disturbance induced by other, unmeasured stressors. The indicators in this study included measures of stressors (e.g., chemical contaminants, eutrophication) that are often associated with adverse biological impacts in shallower estuarine and inland ecosystems. However, there may be other sources of human-induced stress in these offshore systems (e.g., bottom trawling) that pose greater risks to ambient living resources and which have not been captured. Future monitoring efforts in these offshore areas should include indicators of such alternative sources of disturbance. (137 pp.) (PDF contains 167 pages)
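The full-model regression step described above can be sketched in a few lines. This is a hedged illustration only: the station values below are synthetic stand-ins (not the survey data), and the use of statsmodels OLS is an assumption about tooling, not the report's actual software.

```python
# Hedged sketch of the full-model multiple linear regression described above:
# a benthic variable (here species richness) regressed on latitude, depth, and
# percent fines. All station values are synthetic placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 257  # number of stations in the survey
latitude = rng.uniform(32.5, 48.4, n)   # Mexican border to Cape Flattery, WA
depth = rng.uniform(30.0, 120.0, n)     # target depth range (m)
pct_fines = rng.uniform(0.0, 100.0, n)  # percent silt-clay

# Synthetic response with an inverse latitude effect, as reported region-wide.
richness = 200.0 - 3.0 * latitude + 0.1 * depth + rng.normal(0, 10, n)

X = sm.add_constant(np.column_stack([latitude, depth, pct_fines]))
fit = sm.OLS(richness, X).fit()
print(fit.params)    # coefficient estimates
print(fit.pvalues)   # significance of the latitude, depth, and % fines terms
```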

Relevance: 30.00%

Abstract:

Over the last few decades, quantum chemistry has progressed through the development of computational methods based on modern digital computers. However, these methods cannot keep pace with the exponentially growing resource requirements of large quantum systems. As pointed out by Feynman, this restriction is intrinsic to all computational models based on classical physics. Recently, the rapid advancement of trapped-ion technologies has opened new possibilities for quantum control and quantum simulation. Here, we present an efficient toolkit that exploits both the internal and motional degrees of freedom of trapped ions for solving problems in quantum chemistry, including molecular electronic structure, molecular dynamics, and vibronic coupling. We focus on applications that go beyond the capacity of classical computers but may be realizable on state-of-the-art trapped-ion systems. These results allow us to envision a new paradigm of quantum chemistry that shifts from current transistor-based technology to near-future trapped-ion technology.

Relevance: 30.00%

Abstract:

We conduct experiments to investigate the effects of different majority requirements on bargaining outcomes in small and large groups. In particular, we use a Baron-Ferejohn protocol and investigate the effects of decision rules on delay (the number of bargaining rounds needed to reach agreement) and on measures of "fairness" (inclusiveness of coalitions, equality of the distribution within a coalition). We find that larger groups and unanimity rule are associated with significantly larger decision-making costs, in the sense that first-round proposals fail more often, leading to more costly delay. The higher rate of failure under unanimity rule and in large groups is a combination of three facts: (1) in these conditions, a larger number of individuals must agree; (2) an important fraction of individuals reject offers below the equal share; and (3) proposers demand more (relative to the equal share) in large groups.
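The mechanics behind that failure rate can be made concrete with a toy simulation of the protocol. The behavioral rules below (discount factor, the fraction of "picky" responders who reject sub-equal-share offers, the proposer's offer level) are illustrative assumptions drawn loosely from the abstract, not the paper's estimated parameters.

```python
# A minimal sketch of Baron-Ferejohn bargaining under different voting rules,
# assuming stylized behavior: some responders reject offers below the equal
# share, so proposals fail more often under unanimity and in large groups.
import random

def bargain(n_players, votes_needed, delta=0.9, picky_prob=0.5, max_rounds=50):
    """Return the round in which a proposal first passes."""
    pie = 1.0
    for rnd in range(1, max_rounds + 1):
        proposer = random.randrange(n_players)
        equal_share = pie / n_players
        # Proposer offers a minimal coalition slightly less than the equal share.
        coalition = random.sample([i for i in range(n_players) if i != proposer],
                                  votes_needed - 1)
        offer = 0.8 * equal_share
        votes = 1  # proposer votes yes
        for _member in coalition:
            picky = random.random() < picky_prob
            # A "picky" responder rejects anything below the equal share.
            if (not picky) or offer >= equal_share:
                votes += 1
        if votes >= votes_needed:
            return rnd
        pie *= delta  # costly delay shrinks the pie
    return max_rounds

rounds = [bargain(n_players=5, votes_needed=5) for _ in range(1000)]  # unanimity
print("mean rounds under unanimity:", sum(rounds) / len(rounds))
rounds = [bargain(n_players=5, votes_needed=3) for _ in range(1000)]  # majority
print("mean rounds under majority: ", sum(rounds) / len(rounds))
```

Under these assumptions, unanimity requires every coalition member to accept, so the per-round pass probability collapses and average delay grows, which is the qualitative pattern the experiments report.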

Relevance: 30.00%

Abstract:

A two-stage sampling design is used to estimate the variances of the numbers of yellowfin in different age groups caught in the eastern Pacific Ocean. For purse seiners, the primary sampling unit (n) is a brine well containing fish from a month-area stratum; the fish lengths (m) measured from each well are the secondary units. The fish cannot be selected at random from the wells because of practical limitations. The effects of different sampling methods and other factors on the reliability and precision of statistics derived from the length-frequency data were therefore examined. Modifications are recommended where necessary. Lengths of fish measured during the unloading of six test wells revealed two forms of inherent size stratification: 1) short-term disruptions of the existing pattern of sizes, and 2) transition zones between long-term trends in sizes. To some degree, all wells exhibited cyclic changes in mean size and variance during unloading. In half of the wells, it was observed that size selection by the unloaders induced a change in mean size. As a result of stratification, the sequence of sizes removed from all wells was non-random, regardless of whether a well contained fish from a single set or from more than one set. The number of modal sizes in a well was not related to the number of sets. In an additional well composed of fish from several sets, an experiment on vertical mixing indicated that a representative sample of the contents may be restricted to the bottom half of the well. The contents of the test wells were used to generate 25 simulated wells and to compare the results of three sampling methods applied to them. The methods were: (1) random sampling (also used as a standard), (2) protracted sampling, in which the selection process was extended over a large portion of a well, and (3) measuring fish consecutively during removal from the well. Repeated sampling by each method and different combinations of n and m indicated that, because the principal source of size variation occurred among primary units, increasing n was the most effective way to reduce the variance estimates of both the age-group sizes and the total number of fish in the landings. Protracted sampling largely circumvented the effects of size stratification, and its performance was essentially comparable to that of random sampling. Sampling by this method is recommended. Consecutive-fish sampling produced more biased estimates with greater variances. Analysis of the 1988 length-frequency samples indicated that, for age groups that appear most frequently in the catch, a minimum sampling frequency of one primary unit in six for each month-area stratum would reduce the coefficients of variation (CV) of their size estimates to approximately 10 percent or less. Additional stratification of samples by set type, rather than by month-area alone, further reduced the CVs of scarce age groups, such as the recruits, and potentially improved their accuracy. The CVs of recruitment estimates for completely-fished cohorts during the 1981-1984 period were in the vicinity of 3 to 8 percent. Recruitment estimates and their variances were also relatively insensitive to changes in the individual quarterly catches and variances, respectively, of which they were composed. (PDF contains 70 pages)
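The central finding, that increasing the number of primary units n reduces variance faster than increasing measurements m per well, can be illustrated with a small simulation. The numbers below are synthetic assumptions, not the report's data; the estimator shown is the standard first-stage variance of a two-stage design.

```python
# Hedged sketch of two-stage sampling: wells are primary units (n of them),
# measured fish lengths are secondary units (m per well). When most size
# variation lies among wells, raising n tightens the estimate faster than
# raising m.
import numpy as np

rng = np.random.default_rng(1)

def mean_length_variance(n_wells, m_fish, among_sd=8.0, within_sd=3.0):
    """Estimated variance of the mean length under two-stage sampling."""
    well_means = rng.normal(100.0, among_sd, n_wells)          # stage 1: wells
    lengths = rng.normal(well_means[:, None], within_sd,       # stage 2: fish
                         (n_wells, m_fish))
    well_avgs = lengths.mean(axis=1)
    return well_avgs.var(ddof=1) / n_wells  # among-well variance dominates

reps = 2000  # average many replicates so the comparison is stable
v_few_wells = np.mean([mean_length_variance(6, 50) for _ in range(reps)])
v_more_wells = np.mean([mean_length_variance(12, 25) for _ in range(reps)])
print("n=6 wells,  m=50 fish:", round(v_few_wells, 2))
print("n=12 wells, m=25 fish:", round(v_more_wells, 2))  # same total fish
```

With the same total number of fish measured, doubling the wells roughly halves the variance here, because the among-well component divides by n while the within-well component is already small.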

Relevance: 30.00%

Abstract:

Singular Value Decomposition (SVD) is a key linear algebraic operation in many scientific and engineering applications. In particular, many computational intelligence systems rely on machine learning methods involving high-dimensionality datasets that must be processed quickly for real-time adaptability. In this paper we describe a practical FPGA (Field Programmable Gate Array) implementation of an SVD processor for accelerating the solution of large LSE problems. The design approach has been comprehensive, from algorithmic refinement to numerical analysis to customization for an efficient hardware realization. The processing scheme rests on an adaptive vector-rotation evaluator for error regularization that enhances convergence speed with no penalty on solution accuracy. The proposed architecture, which follows a data-transfer scheme, is scalable and based on the interconnection of simple rotation units, which allows for a trade-off between occupied area and processing acceleration in the final implementation. This permits the SVD processor to be implemented on both low-cost and high-end FPGAs, according to the final application requirements.
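Rotation-based SVD schemes of this kind are typically variants of Jacobi iteration built from plane rotations. The following is a software analogue of that idea, assuming a textbook one-sided Jacobi formulation; it is an illustration of the rotation-unit concept, not the paper's FPGA design.

```python
# A minimal one-sided Jacobi SVD sketch: orthogonalize the columns of A by
# repeated pairwise plane rotations (the "simple rotation units" idea).
import numpy as np

def jacobi_svd(A, sweeps=30, eps=1e-12):
    A = A.astype(float).copy()
    n = A.shape[1]
    V = np.eye(n)
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                apq = A[:, p] @ A[:, q]
                if abs(apq) < eps:
                    continue
                app = A[:, p] @ A[:, p]
                aqq = A[:, q] @ A[:, q]
                # Rotation angle that zeroes the (p, q) column inner product.
                tau = (aqq - app) / (2.0 * apq)
                t = np.sign(tau) / (abs(tau) + np.sqrt(1.0 + tau * tau))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                R = np.array([[c, s], [-s, c]])
                A[:, [p, q]] = A[:, [p, q]] @ R
                V[:, [p, q]] = V[:, [p, q]] @ R
    sigma = np.linalg.norm(A, axis=0)  # singular values
    U = A / sigma                      # left singular vectors
    return U, sigma, V

A = np.random.default_rng(2).normal(size=(6, 4))
U, s, V = jacobi_svd(A)
print(np.allclose(U * s @ V.T, A))  # reconstruction check: True
```

Each inner-loop rotation is independent of the data outside its two columns, which is what makes arrays of such units attractive for parallel hardware.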

Relevance: 30.00%

Abstract:

Employee-owned businesses have recently enjoyed a resurgence of interest as possible ‘alternatives’ to the somewhat tarnished image of conventional investor-owned capitalist firms. Within the context of global economic crisis, such alternatives seem newly attractive. This is somewhat ironic because, for more than a century, academic literature on employee-owned businesses has been dominated by the ‘degeneration thesis’. This suggested that these businesses tend towards failure: they either fail commercially, or they relinquish their democratic characters. Bucking this trend and offering a beacon, especially in the UK, has been the commercially successful, co-owned enterprise of the John Lewis Partnership (JLP), whose virtues have seemingly been rewarded with favourable and sustainable outcomes. This paper makes comparisons between JLP and its Spanish equivalent, Eroski, the supermarket group which is part of the Mondragon cooperatives. The contribution of this paper is to examine in a comparative way how the managers in JLP and Eroski have constructed and accomplished their alternative scenarios. Using longitudinal data and detailed interviews with senior managers in both enterprises, it explores the ways in which two large, employee-owned enterprises reconcile apparently conflicting principles and objectives. The paper thus puts some new flesh on the ‘regeneration thesis’.

Relevance: 30.00%

Abstract:

This thesis focuses mainly on linear algebraic aspects of combinatorics. Let N_t(H) be the incidence matrix of edges versus all subhypergraphs of a complete hypergraph that are isomorphic to H. Richard M. Wilson and the author find the general formula for the Smith normal form or diagonal form of N_t(H) for all simple graphs H and for a very general class of t-uniform hypergraphs H.
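For readers unfamiliar with these diagonal forms, a tiny computable example may help. The matrix below is the vertex-edge incidence matrix of the triangle K_3 (the simplest analogue of an N_t(H) matrix, with H a single edge); it is an illustration chosen by us, not one of the thesis's computations.

```python
# Smith normal form of a small incidence matrix, via SymPy.
from sympy import Matrix
from sympy.matrices.normalforms import smith_normal_form

# Vertex-edge incidence matrix of K_3: rows = vertices, columns = edges.
N = Matrix([[1, 1, 0],
            [1, 0, 1],
            [0, 1, 1]])

# Unimodular row/column operations reduce N to diag(d1, d2, d3) with
# d1 | d2 | d3; for this matrix the result should be diag(1, 1, 2).
print(smith_normal_form(N))
```

The diagonal entries (here 1, 1, 2) are invariants of the matrix over the integers, which is exactly what makes them usable in the zero-sum Ramsey applications described below.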

As a continuation, the author determines the formula for diagonal forms of integer matrices obtained from other combinatorial structures, including incidence matrices for subgraphs of a complete bipartite graph and inclusion matrices for multisets.

One major application of diagonal forms is in zero-sum Ramsey theory. For instance, Caro's results in zero-sum Ramsey numbers for graphs and Caro and Yuster's results in zero-sum bipartite Ramsey numbers can be reproduced. These results are further generalized to t-uniform hypergraphs. Other applications include signed bipartite graph designs.

Research results on some other problems are also included in this thesis, such as a Ramsey-type problem on equipartitions, Hartman's conjecture on large sets of designs and a matroid theory problem proposed by Welsh.

Relevance: 30.00%

Abstract:

Galaxy clusters are the largest gravitationally bound objects in the observable universe, and they are formed from the largest perturbations of the primordial matter power spectrum. During initial cluster collapse, matter is accelerated to supersonic velocities, and the baryonic component is heated as it passes through accretion shocks. This process stabilizes when the pressure of the bound matter prevents further gravitational collapse. Galaxy clusters are useful cosmological probes, because their formation progressively freezes out at the epoch when dark energy begins to dominate the expansion and energy density of the universe. A diverse set of observables, from radio through X-ray wavelengths, are sourced from galaxy clusters, and this is useful for self-calibration. The distributions of these observables trace a cluster's dark matter halo, which represents more than 80% of the cluster's gravitational potential. One such observable is the Sunyaev-Zel'dovich effect (SZE), which results when the ionized intercluster medium blueshifts the cosmic microwave background via inverse Compton scattering. Great technical advances in the last several decades have made regular observation of the SZE possible. Resolved SZE science, such as is explored in this analysis, has benefitted from the construction of large-format camera arrays consisting of highly sensitive millimeter-wave detectors, such as Bolocam. Bolocam is a submillimeter camera, sensitive to 140 GHz and 268 GHz radiation, located at one of the best observing sites in the world: the Caltech Submillimeter Observatory on Mauna Kea in Hawaii. Bolocam fielded 144 of the original spider-web NTD bolometers used in an entire generation of ground-based, balloon-borne, and satellite-borne millimeter-wave instrumentation. Over approximately six years, our group at Caltech has developed a mature galaxy cluster observational program with Bolocam. This thesis describes the construction of the instrument's full cluster catalog: BOXSZ. Using this catalog, I have scaled the Bolocam SZE measurements with X-ray mass approximations in an effort to characterize the SZE signal as a viable mass probe for cosmology. This work has confirmed the SZE to be a low-scatter tracer of cluster mass. The analysis has also revealed how sensitive the SZE-mass scaling is to small biases in the adopted mass approximation. Future Bolocam analysis efforts are set on resolving these discrepancies by approximating cluster mass jointly with different observational probes.
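A scaling analysis of this kind usually amounts to fitting a power law between the SZE observable and mass in log space. The sketch below uses invented cluster values and a simple least-squares fit; it illustrates the shape of the analysis only, not the BOXSZ pipeline or its results.

```python
# Hedged sketch of an SZE-mass scaling fit: Y ~ A * (M / M0)^B in log space.
# The cluster masses and signals here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
M = rng.uniform(2e14, 2e15, 45)                  # X-ray mass proxies (M_sun)
scatter = rng.normal(0.0, 0.10, M.size)          # a "low-scatter" tracer
logY = -4.5 + 1.6 * np.log10(M / 1e15) + scatter # synthetic SZE signal

coeffs = np.polyfit(np.log10(M / 1e15), logY, 1)  # [slope B, log amplitude]
print(f"slope B ~ {coeffs[0]:.2f}, log amplitude ~ {coeffs[1]:.2f}")
resid = logY - np.polyval(coeffs, np.log10(M / 1e15))
print(f"log-space intrinsic scatter ~ {resid.std():.2f}")
```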

Relevance: 30.00%

Abstract:

A long-standing challenge in transition metal catalysis is selective C–C bond coupling of simple feedstocks, such as carbon monoxide, ethylene or propylene, to yield value-added products. This work describes efforts toward selective C–C bond formation using early- and late-transition metals, which may have important implications for the production of fuels and plastics, as well as many other commodity chemicals.

The industrial Fischer-Tropsch (F-T) process converts synthesis gas (syngas, a mixture of CO + H2) into a complex mixture of hydrocarbons and oxygenates. Well-defined homogeneous catalysts for F-T may provide greater product selectivity for fuel-range liquid hydrocarbons compared to traditional heterogeneous catalysts. The first part of this work involved the preparation of late-transition metal complexes for use in syngas conversion. We investigated C–C bond forming reactions via carbene coupling using bis(carbene)platinum(II) compounds, which are models for putative metal–carbene intermediates in F-T chemistry. It was found that C–C bond formation could be induced by either (1) chemical reduction of or (2) exogenous phosphine coordination to the platinum(II) starting complexes. These two mild methods afforded different products, constitutional isomers, suggesting that at least two different mechanisms are possible for C–C bond formation from carbene intermediates. These results are encouraging for the development of a multicomponent homogeneous catalysis system for the generation of higher hydrocarbons.

A second avenue of research focused on the design and synthesis of post-metallocene catalysts for olefin polymerization. The polymerization chemistry of a new class of group 4 complexes supported by asymmetric anilide(pyridine)phenolate (NNO) pincer ligands was explored. Unlike typical early transition metal polymerization catalysts, NNO-ligated catalysts produce nearly regiorandom polypropylene, with as many as 30-40 mol % of insertions being 2,1-inserted (versus 1,2-inserted), compared to <1 mol % in most metallocene systems. A survey of model Ti polymerization catalysts suggests that catalyst modification pathways that could affect regioselectivity, such as C–H activation of the anilide ring, cleavage of the amine R-group, or monomer insertion into metal–ligand bonds, are unlikely. A parallel investigation of a Ti–amido(pyridine)phenolate polymerization catalyst, which features a five- rather than a six-membered Ti–N chelate ring while maintaining a dianionic NNO motif, revealed that simply maintaining this motif was not enough to produce regioirregular polypropylene; in fact, these experiments seem to indicate that only an intact anilide(pyridine)phenolate-ligated complex will lead to regioirregular polypropylene. As yet, the underlying causes of the unique regioselectivity of anilide(pyridine)phenolate polymerization catalysts remain unknown. Further exploration of NNO-ligated polymerization catalysts could lead to the controlled synthesis of new types of polymer architectures.

Finally, we investigated the reactivity of a known Ti–phenoxy(imine) (Ti-FI) catalyst that has been shown to be very active for ethylene homotrimerization in an effort to upgrade simple feedstocks to liquid hydrocarbon fuels through co-oligomerization of heavy and light olefins. We demonstrated that the Ti-FI catalyst can homo-oligomerize 1-hexene to C12 and C18 alkenes through olefin dimerization and trimerization, respectively. Future work will include kinetic studies to determine monomer selectivity by investigating the relative rates of insertion of light olefins (e.g., ethylene) vs. higher α-olefins, as well as a more detailed mechanistic study of olefin trimerization. Our ultimate goal is to exploit this catalyst in a multi-catalyst system for conversion of simple alkenes into hydrocarbon fuels.

Relevance: 30.00%

Abstract:

Preface: The main goal of this work is to give an introductory account of sieve methods that would be understandable with only a slight knowledge of analytic number theory. These notes are based to a large extent on lectures on sieve methods given by Professor Van Lint and the author in a number theory seminar during the 1970-71 academic year, but rather extensive changes have been made in both the content and the presentation...
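As a pointer to the kind of statement such notes build from, here is the Legendre (Eratosthenes-Legendre) sieve identity, quoted from standard textbook material rather than from these notes:

```latex
% S(A, P, z) counts elements of the sequence A with no prime factor p in P
% below z; A_d is the subsequence of A divisible by d, mu the Mobius function.
\[
  S(\mathcal{A}, \mathcal{P}, z)
  \;=\; \sum_{d \mid P(z)} \mu(d)\,\lvert \mathcal{A}_d \rvert,
  \qquad
  P(z) \;=\; \prod_{\substack{p \in \mathcal{P} \\ p < z}} p.
\]
% Sieve methods replace this exact but unwieldy sum with upper and lower
% bounds built from truncated, better-behaved weights.
```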

Relevance: 30.00%

Abstract:

The properties of noncollinear optical parametric amplification (NOPA) based on quasi-phase matching in periodically poled crystals are investigated, under the condition that group velocity matching (GVM) of the signal and idler pulses is satisfied. Our study focuses on the dependence of the gain spectrum on the noncollinear angle, crystal temperature, and crystal angle for periodically poled KTiOPO4 (PPKTP), periodically poled LiNbO3 (PPLN), and periodically poled LiTaO3 (PPLT), and the NOPA gain properties of the three crystals are compared. A gain bandwidth broader than 85 nm exists at a signal wavelength of 800 nm with a 532 nm pump pulse, given a proper noncollinear angle and grating period at a fixed temperature for GVM. Deviation from the group-velocity-matched noncollinear angle can be compensated by accurately tuning the crystal angle or temperature with a fixed grating period for phase matching. Moreover, the crystal angle provides a large tuning capability.
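For context, the two conditions at work here can be written compactly. This is a standard scalar simplification (first-order quasi-phase matching, signal-idler noncollinear angle Ω), stated for orientation rather than quoted from the paper:

```latex
% Quasi-phase matching with poling period Lambda (odd order m), plus the
% group-velocity-matching condition for broadband noncollinear gain.
\[
  \Delta k \;=\; k_p - k_s - k_i - \frac{2\pi m}{\Lambda} \;=\; 0,
  \qquad m = 1, 3, 5, \dots
\]
\[
  v_{g,s} \;=\; v_{g,i}\cos\Omega
\]
% The second relation (signal group velocity equal to the projection of the
% idler group velocity onto the signal direction) cancels the phase mismatch
% to first order in signal frequency, which is what broadens the gain band.
```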

Relevance: 30.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodular property of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
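The EC2 greedy step can be sketched on a toy problem. Everything below is an illustrative assumption: the hypotheses, their classes and priors, and the prediction tables are invented, and responses are taken as noiseless for simplicity (the thesis's guarantees also cover the noisy case).

```python
# Toy sketch of an EC2 (Equivalence Class Edge Cutting) greedy choice:
# pick the test whose expected outcome cuts the most prior-weighted edges
# between hypotheses belonging to different theory classes.
from itertools import combinations

# Hypothesis name -> (theory class, prior probability).
hypotheses = {"EV": ("EV", 0.25), "PT-a": ("PT", 0.25),
              "PT-b": ("PT", 0.25), "CRRA": ("CRRA", 0.25)}
# Predicted choice (0/1) of each hypothesis on each candidate lottery pair.
predictions = {"test1": {"EV": 0, "PT-a": 1, "PT-b": 1, "CRRA": 0},
               "test2": {"EV": 0, "PT-a": 0, "PT-b": 1, "CRRA": 1}}

def edge_weight(live):
    """Total weight of edges joining live hypotheses in different classes."""
    return sum(hypotheses[a][1] * hypotheses[b][1]
               for a, b in combinations(live, 2)
               if hypotheses[a][0] != hypotheses[b][0])

def expected_cut(test, live):
    """Expected edge weight removed by observing the outcome of `test`."""
    total, before = 0.0, edge_weight(live)
    for outcome in (0, 1):
        consistent = [h for h in live if predictions[test][h] == outcome]
        p_outcome = sum(hypotheses[h][1] for h in consistent)
        total += p_outcome * (before - edge_weight(consistent))
    return total

live = list(hypotheses)
best = max(predictions, key=lambda t: expected_cut(t, live))
print("most informative next test:", best)
```

Here test1 separates the prospect theory hypotheses from the rest in one shot, so the greedy rule prefers it; repeating the loop after each observed response gives the adaptive sequence.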

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinctly different from those the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
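The qualitative mechanism can be reproduced with a minimal reference-dependent logit. The functional form and all parameter values below (price sensitivity, a loss-aversion coefficient of 2.25) are illustrative assumptions in the spirit of the analysis, not the estimated model.

```python
# Hedged sketch: multinomial logit demand with gains/losses measured against
# reference prices, so discounts and their withdrawal shift choice shares
# asymmetrically.
import numpy as np

def choice_prob(prices, ref_prices, beta=1.0, lam=2.25):
    gain = np.maximum(ref_prices - prices, 0.0)   # price below reference
    loss = np.maximum(prices - ref_prices, 0.0)   # price above reference
    utility = -beta * prices + beta * gain - lam * beta * loss
    expu = np.exp(utility - utility.max())
    return expu / expu.sum()

prices = np.array([4.0, 4.2])   # two close substitutes
ref = prices.copy()
print("baseline shares:    ", choice_prob(prices, ref))
# Item 0 on discount: reference unchanged, so the cut registers as a gain.
print("item 0 discounted:  ", choice_prob(np.array([3.5, 4.2]), ref))
# Discount withdrawn after references adapt: 4.0 now registers as a loss,
# pushing demand excessively toward the substitute.
print("discount withdrawn: ", choice_prob(prices, np.array([3.5, 4.2])))
```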

In future work, BROAD can be widely applied to testing different behavioural models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 30.00%

Abstract:

In this work we chiefly deal with two broad classes of problems in computational materials science: determining the doping mechanism in a semiconductor and developing an extreme-condition equation of state. While solving certain aspects of these questions is well-trodden ground, both require extending the reach of existing methods to answer them fully. Here we choose to build upon the framework of density functional theory (DFT), which provides an efficient means to investigate a system from a quantum-mechanical description.

Zinc phosphide (Zn3P2) could be the basis for cheap and highly efficient solar cells. Its use in this regard is limited by the difficulty of n-type doping the material. In an effort to understand the mechanism behind this, the energetics and electronic structure of intrinsic point defects in zinc phosphide are studied using generalized Kohn-Sham theory, utilizing the Heyd, Scuseria, and Ernzerhof (HSE) hybrid functional for exchange and correlation. Novel 'perturbation extrapolation' is utilized to extend the use of the computationally expensive HSE functional to this large-scale defect system. According to the calculations, the formation energies of charged phosphorus interstitial defects are very low in n-type Zn3P2, and these defects act as 'electron sinks', nullifying the desired doping and lowering the Fermi level back towards the p-type regime. Going forward, this insight provides clues for fabricating useful zinc phosphide based devices. In addition, the methodology developed for this work can be applied to further doping studies in other systems.
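Analyses of this kind rest on the standard formation-energy convention for charged defects, stated here for context (this is the general textbook expression, not a formula quoted from the thesis):

```latex
% Formation energy of defect X in charge state q.
\[
  E_f[X^q] \;=\; E_{\mathrm{tot}}[X^q] \;-\; E_{\mathrm{tot}}[\mathrm{bulk}]
  \;-\; \sum_i n_i \mu_i \;+\; q\,\bigl(E_F + E_{\mathrm{VBM}} + \Delta V\bigr)
  \;+\; E_{\mathrm{corr}},
\]
% where n_i atoms with chemical potential mu_i are added (n_i > 0) or removed
% (n_i < 0), E_F is the Fermi level referenced to the valence-band maximum
% E_VBM, Delta V aligns electrostatic potentials, and E_corr is a finite-size
% charge correction. A compensating defect whose E_f drops as E_F rises is
% precisely the "electron sink" behavior described above.
```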

Accurate determination of high pressure and temperature equations of state is fundamental in a variety of fields. However, it is often very difficult to cover a wide range of temperatures and pressures in a laboratory setting. Here we develop methods to determine a multi-phase equation of state for Ta through computation. The typical means of investigating thermodynamic properties is via 'classical' molecular dynamics, where the atomic motion is calculated from Newtonian mechanics with the electronic effects abstracted away into an interatomic potential function. For our purposes, a 'first principles' approach such as DFT is useful, as a classical potential is typically valid for only a portion of the phase diagram (i.e., whatever part it has been fit to). Furthermore, at extremes of temperature and pressure, quantum effects become critical to an accurate equation of state and are very hard to capture in even complex model potentials. This requires extending the inherently zero-temperature DFT to predict the finite-temperature response of the system. Statistical modelling and thermodynamic integration are used to extend our results over all phases, as well as phase-coexistence regions, which are at the limits of typical DFT validity. We deliver the most comprehensive and accurate equation of state that has been produced for Ta. This work also lends insights that can be applied to further equation-of-state work in many other materials.
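Thermodynamic integration, the workhorse identity invoked here, is worth stating explicitly; this is the standard textbook relation rather than a thesis-specific result:

```latex
% Free-energy difference along a coupling path U(lambda) that interpolates
% between a reference system with known free energy (lambda = 0) and the
% system of interest (lambda = 1).
\[
  F_{\lambda=1} \;=\; F_{\lambda=0} \;+\;
  \int_0^1 \left\langle \frac{\partial U(\lambda)}{\partial \lambda}
  \right\rangle_{\lambda} \mathrm{d}\lambda,
\]
% where the angle brackets denote an ensemble average over configurations
% sampled with the potential U(lambda) held at fixed lambda.
```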

Relevance: 30.00%

Abstract:

This paper summarises a meeting which discussed the ecology and conservation of Llangorse Lake in South Wales. The meeting was organised by the British Ecological Society (Aquatic Ecology Group), in association with the Countryside Council for Wales (CCW), Brecon Beacon National Park Authority (BBNPA) and Environment Agency Wales. It took place on 22 October 1998.

Relevance: 30.00%

Abstract:

Melting temperature calculation has important applications in the theoretical study of phase diagrams and in computational materials screening. In this thesis, we present two new methods, i.e., the improved Widom's particle insertion method and the small-cell coexistence method, which we developed in order to obtain melting temperatures both accurately and quickly.

We propose a scheme that drastically improves the efficiency of Widom's particle insertion method by efficiently sampling cavities while calculating the integrals providing the chemical potentials of a physical system. This idea enables us to calculate chemical potentials of liquids directly from first principles, without the help of any reference system, which is necessary in the commonly used thermodynamic integration method. As an example, we apply our scheme, combined with the density functional formalism, to the calculation of the chemical potential of liquid copper. The calculated chemical potential is further used to locate the melting temperature. The calculated results closely agree with experiment.
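For orientation, Widom's original insertion formula for the excess chemical potential is the following standard statistical-mechanics relation (stated for context, not quoted from the thesis):

```latex
% Excess chemical potential from test-particle insertion.
\[
  \mu_{\mathrm{ex}} \;=\; -\,k_B T \,
  \ln \bigl\langle e^{-\Delta U / k_B T} \bigr\rangle_N,
\]
% where Delta U is the energy of inserting a test particle at a random point
% in an N-particle configuration. In dense liquids almost every random
% insertion lands on an atomic overlap with negligible Boltzmann weight,
% which is why biasing the sampling toward cavities, as described above,
% dramatically improves the efficiency of evaluating this average.
```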

We propose the small-cell coexistence method based on the statistical analysis of small-size coexistence MD simulations. It eliminates the risk of a metastable superheated solid in the fast-heating method, while also significantly reducing the computational cost relative to the traditional large-scale coexistence method. Using empirical potentials, we validate the method and systematically study the finite-size effect on the calculated melting points. The method converges to the exact result in the limit of a large system size. An accuracy within 100 K in melting temperature is usually achieved when the simulation contains more than 100 atoms. DFT examples of tantalum, high-pressure sodium, and the ionic material NaCl are shown to demonstrate the accuracy and flexibility of the method in its practical applications. The method serves as a promising approach for large-scale automated materials screening in which the melting temperature is a design criterion.

We present in detail two examples of refractory materials. First, we demonstrate how key material properties that provide guidance in the design of refractory materials can be accurately determined via ab initio thermodynamic calculations in conjunction with experimental techniques based on synchrotron X-ray diffraction and thermal analysis under laser-heated aerodynamic levitation. The properties considered include melting point, heat of fusion, heat capacity, thermal expansion coefficients, thermal stability, and sublattice disordering, as illustrated in a motivating example of lanthanum zirconate (La2Zr2O7). The close agreement with experiment in the known but structurally complex compound La2Zr2O7 provides a good indication that the computational methods described can be used within a computational screening framework to identify novel refractory materials. Second, we report an extensive investigation into the melting temperatures of the Hf-C and Hf-Ta-C systems using ab initio calculations. With melting points above 4000 K, hafnium carbide (HfC) and tantalum carbide (TaC) are among the most refractory binary compounds known to date. Their mixture, with the general formula TaxHf1-xCy, is known to have a melting point of 4215 K at the composition Ta4HfC5, which has long been considered the highest melting temperature of any solid. Very few measurements of melting point in tantalum and hafnium carbides have been documented, because of the obvious experimental difficulties at extreme temperatures. The investigation allows us to identify three major chemical factors that contribute to the high melting temperatures. Based on these three factors, we propose and explore a new class of materials which, according to our ab initio calculations, may possess even higher melting temperatures than Ta-Hf-C. This example also demonstrates the feasibility of materials screening and discovery via ab initio calculations for the optimization of "higher-level" properties whose determination requires extensive sampling of atomic configuration space.