957 results for 090604 Microelectronics and Integrated Circuits
Abstract:
Silicon microlenses are an important tool for coupling terahertz (THz) radiation into antennas and detectors in integrated circuits. They can be used in large array structures at this frequency range, considerably reducing the crosstalk between pixels. Drops of photoresist were deposited and their shape transferred into the silicon by means of a Reactive Ion Etching (RIE) process. Large silicon lenses with diameters of a few mm (between 1.5 and 4.5 mm) and heights of hundreds of μm (between 50 and 350 μm) have been fabricated. The surface of these lenses has been characterized using Scanning Electron Microscopy (SEM) and Atomic Force Microscopy (AFM), yielding a surface roughness of approximately 3 μm, good enough for any THz application. The beam profile at the focal plane of these lenses has been measured at a wavelength of 10.6 μm using a tomographic knife-edge technique and a CO2 laser.
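A knife-edge scan of a Gaussian beam produces an error-function power profile from which the beam radius can be fitted. A minimal sketch of that fit is given below; the synthetic data, waist value, and function names are illustrative assumptions, not details from the paper.

import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

def knife_edge(x, p0, x0, w):
    """Transmitted power as a knife edge at position x occludes a Gaussian
    beam of 1/e^2 radius w centred at x0 (total power p0)."""
    return 0.5 * p0 * (1.0 - erf(np.sqrt(2.0) * (x - x0) / w))

# Synthetic scan: 40 knife positions across a hypothetical 100 um waist.
rng = np.random.default_rng(0)
x = np.linspace(-300e-6, 300e-6, 40)              # knife positions (m)
data = knife_edge(x, 1.0, 5e-6, 100e-6) + rng.normal(0, 0.01, x.size)

# The fit recovers total power, beam centre, and 1/e^2 radius.
popt, _ = curve_fit(knife_edge, x, data, p0=(1.0, 0.0, 50e-6))
print(f"fitted beam radius: {popt[2]*1e6:.1f} um")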
Abstract:
Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved. Acknowledgements This review is one of a series of systematic reviews for the ROMEO project (Review Of MEn and Obesity), funded by the National Institute for Health Research, Health Technology Assessment Programme (NIHR HTA Project 09/127/01; Systematic reviews and integrated report on the quantitative and qualitative evidence base for the management of obesity in men http://www.hta.ac.uk/2545). The views and opinions expressed therein are those of the authors and do not necessarily reflect those of the Department of Health. HERU, HSRU and NMAHP are funded by the Chief Scientist Office of the Scottish Government Health and Social Care Directorates. The authors accept full responsibility for this publication. We would also like to thank the Men's Health Forums of Scotland, Ireland, England and Wales: Tim Street, Paula Carroll, Colin Fowler and David Wilkins. We also thank Kate Jolly for further information about the Lighten Up trial.
Abstract:
This work examines the effect on mid-gap interface state defect density estimates for In0.53Ga0.47As semiconductor capacitors when different AC voltage amplitudes are selected for a fixed voltage bias step size (100 mV) during electrical characterization at room temperature only. Results are presented for Au/Ni/Al2O3/In0.53Ga0.47As/InP metal–oxide–semiconductor capacitors with (1) n-type and p-type semiconductors, (2) different Al2O3 thicknesses, (3) different ammonium sulphide concentrations for In0.53Ga0.47As surface passivation, and (4) different transfer times to the atomic layer deposition chamber after passivation treatment of the semiconductor surface—thereby demonstrating a cross-section of device characteristics. The authors set out to determine the importance of the AC voltage amplitude selection on the interface state defect density extractions and whether this selection has a combined effect with the oxide capacitance. These capacitors are prototypical of the type of gate oxide material stacks that could form equivalent metal–oxide–semiconductor field-effect transistors beyond the 32 nm technology node. The authors do not attempt to achieve the best scaled equivalent oxide thickness in this work, as the focus is on accurately extracting device properties that will allow the investigation and reduction of interface state defect densities at the high-k/III–V semiconductor interface. The operating voltage for future devices will be reduced, potentially leading to an associated reduction in the AC voltage amplitude, which will decrease the signal-to-noise ratio of electrical responses and could therefore result in less accurate impedance measurements. A concern thus arises regarding the accuracy of the electrical property extractions using such impedance measurements for future devices, particularly in relation to the mid-gap interface state defect density estimated from the conductance method and from the combined high–low frequency capacitance–voltage method. The authors apply a fixed voltage step of 100 mV for all voltage sweep measurements at each AC frequency. Each of these measurements is repeated for 15 equidistant AC voltage amplitudes between 10 mV and 150 mV, providing AC voltage amplitude to step size ratios from 1:10 to 3:2. The results indicate that, although the selection of the oxide capacitance is important to both the success and the accuracy of the extraction method, the mid-gap interface state defect density extractions are not overly sensitive to the AC voltage amplitude employed, regardless of which oxide capacitance is used in the extractions, particularly in the range from 50% below the voltage sweep step size to 50% above it. Therefore, the use of larger AC voltage amplitudes in this range to achieve a better signal-to-noise ratio during impedance measurements for future low operating voltage devices will not distort the extracted interface state defect density.
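For context, the conductance method referenced above is conventionally applied at the peak of the normalized parallel conductance. A standard textbook form (after Nicollian and Brews), given here for orientation rather than quoted from this work, is:

% Textbook conductance-method approximation; not a formula from this paper.
\[
  D_{it} \;\approx\; \frac{2.5}{q} \left( \frac{G_p}{\omega} \right)_{\max},
  \qquad
  \frac{G_p}{\omega} \;=\; \frac{\omega\, G_m\, C_{ox}^{2}}{G_m^{2} + \omega^{2} \left( C_{ox} - C_m \right)^{2}}
\]

where G_m and C_m are the measured conductance and capacitance, C_ox is the oxide capacitance whose selection the authors examine, ω is the angular frequency, and q is the electronic charge.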
Abstract:
Human use of the oceans is increasingly in conflict with conservation of endangered species. Methods for managing the spatial and temporal placement of industries such as military, fishing, transportation and offshore energy have historically been post hoc; i.e., the time and place of human activity are often already determined before environmental impacts are assessed. In this dissertation, I build robust species distribution models in two case study areas, the U.S. Atlantic (Best et al. 2012) and British Columbia (Best et al. 2015), predicting presence and abundance, respectively, from scientific surveys. These models are then applied to novel decision frameworks for preemptively suggesting optimal placement of human activities in space and time to minimize ecological impacts: siting for offshore wind energy development, and routing ships to minimize the risk of striking whales. Both decision frameworks present the tradeoff between conservation risk and industry profit with synchronized variable and map views, implemented as online spatial decision support systems.
For siting offshore wind energy development (OWED) in the U.S. Atlantic (chapter 4), bird density maps are combined across species using weights for OWED sensitivity to collision and displacement, and 10 km² sites are compared against OWED profitability based on average annual wind speed at 90 m hub height and distance to the transmission grid. A spatial decision support system enables toggling between the map and tradeoff plot views by site. A selected site can be inspected for sensitivity to cetaceans throughout the year, so as to identify the months that minimize episodic impacts of pre-operational activities such as seismic airgun surveying and pile driving.
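A minimal sketch of the cross-species weighting step is shown below; the array shapes, weight values, and species count are invented placeholders, not the dissertation's data.

import numpy as np

# Illustrative 3-species density rasters (birds per cell) and OWED
# sensitivity weights for collision/displacement (values invented).
rng = np.random.default_rng(3)
density = rng.random((3, 20, 20))          # species x grid
weights = np.array([0.9, 0.4, 0.7])        # per-species sensitivity

# Site sensitivity map = weighted sum of species densities per cell;
# each cell can then be plotted against site profitability as the tradeoff.
sensitivity = np.tensordot(weights, density, axes=1)
print(sensitivity.shape)                   # (20, 20)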
Routing ships to avoid whale strikes (chapter 5) can similarly be viewed as a tradeoff, but is a different problem spatially. A cumulative cost surface is generated from density surface maps and the conservation status of cetaceans, and is then applied as a resistance surface to calculate least-cost routes between start and end locations, i.e., ports and entrance locations to study areas. Varying a multiplier on the cost surface enables calculation of multiple routes with different costs to the conservation of cetaceans versus costs to the transportation industry, measured as distance. As in the siting chapter, a spatial decision support system enables toggling between the map and tradeoff plot views of proposed routes. The user can also input arbitrary start and end locations to calculate the tradeoff on the fly.
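A minimal sketch of the least-cost routing step, assuming a toy cost grid and 4-connected moves (the dissertation's actual resistance surface and routing tool are not specified here):

import heapq

def least_cost_route(cost, start, end):
    """Dijkstra's algorithm over a 2-D cost grid: returns the route from
    start to end that minimises the summed cell costs (4-connected moves)."""
    rows, cols = len(cost), len(cost[0])
    best = {start: 0.0}
    prev = {}
    frontier = [(0.0, start)]
    while frontier:
        d, cell = heapq.heappop(frontier)
        if cell == end:
            break
        if d > best[cell]:
            continue
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(frontier, (nd, (nr, nc)))
    # Walk predecessors back from end to start to recover the path.
    path, cell = [end], end
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]

# Toy cost surface: higher values where cetacean density is assumed higher.
grid = [[1, 1, 5, 5],
        [1, 2, 5, 1],
        [1, 1, 1, 1]]
print(least_cost_route(grid, (0, 0), (2, 3)))

Scaling the conservation layer by the varying multiplier before re-running the route traces out the conservation-versus-distance tradeoff curve described above.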
Essential inputs to these decision frameworks are the distributions of the species. The two preceding chapters comprise species distribution models from the two case study areas, U.S. Atlantic (chapter 2) and British Columbia (chapter 3), predicting presence and density, respectively. Although density is preferred for estimating potential biological removal, per Marine Mammal Protection Act requirements in the U.S., the necessary parameters, especially distance and angle of observation, are less readily available across publicly mined datasets.
In the case of predicting cetacean presence in the U.S. Atlantic (chapter 2), I extracted datasets from the online OBIS-SEAMAP geo-database and integrated scientific surveys conducted by ship (n=36) and aircraft (n=16), weighting a Generalized Additive Model by minutes surveyed within space-time grid cells to harmonize effort between the two survey platforms. For each of 16 cetacean species guilds, I predicted the probability of occurrence from static environmental variables (water depth, distance to shore, distance to continental shelf break) and time-varying conditions (monthly sea-surface temperature). To generate maps of presence vs. absence, Receiver Operating Characteristic (ROC) curves were used to define the optimal threshold that minimizes false positive and false negative error rates. I integrated model outputs, including tables (species in guilds, input surveys) and plots (fit of environmental variables, ROC curve), into an online spatial decision support system, allowing for easy navigation of models by taxon, region, season, and data provider.
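A minimal sketch of the threshold-selection step, using synthetic scores and the common Youden's J criterion (equivalent to minimizing the sum of false positive and false negative rates); the data and variable names are illustrative, not from the chapter.

import numpy as np
from sklearn.metrics import roc_curve

# Illustrative scores: predicted occurrence probabilities vs. true presence.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 500)
y_score = np.clip(y_true * 0.3 + rng.normal(0.4, 0.2, 500), 0, 1)

# ROC curve; pick the threshold maximising Youden's J = TPR - FPR,
# i.e. minimising the sum of false positive and false negative rates.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
j = tpr - fpr
best = thresholds[np.argmax(j)]
print(f"presence/absence cutoff: {best:.2f}")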
For predicting cetacean density within the inner waters of British Columbia (chapter 3), I calculated density from systematic line-transect marine mammal surveys conducted by Raincoast Conservation Foundation over multiple years and seasons (summer 2004, 2005, 2008, and spring/autumn 2007). Abundance estimates were calculated using two different methods: Conventional Distance Sampling (CDS) and Density Surface Modelling (DSM). CDS generates a single density estimate for each stratum, whereas DSM explicitly models spatial variation and offers potential for greater precision by incorporating environmental predictors. Although DSM yields a more relevant product for the purposes of marine spatial planning, CDS has proven useful in cases where fewer observations are available for seasonal and inter-annual comparison, particularly for the scarcely observed elephant seal. Abundance estimates are provided on a stratum-specific basis, with Steller sea lions and harbour seals further differentiated by 'hauled out' and 'in water'. This analysis updates previous estimates (Williams & Thomas 2007) by including additional years of effort, providing greater spatial precision with the DSM method than with CDS, reporting for spring and autumn (rather than summer alone), and providing new abundance estimates for Steller sea lion and northern elephant seal. In addition to providing a baseline of marine mammal abundance and distribution against which future changes can be compared, this information offers the opportunity to assess the risks posed to marine mammals by existing and emerging threats, such as fisheries bycatch, ship strikes, and the increased risk of oil spills and ocean noise associated with growth in container ship and oil tanker traffic in British Columbia's continental shelf waters.
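For reference, the CDS estimator for a stratum takes the standard line-transect form (textbook notation, not copied from the dissertation):

% Standard line-transect estimators; notation assumed for illustration.
\[
  \hat{D}_s \;=\; \frac{n\, \hat{E}[c]}{2 w L\, \hat{p}},
  \qquad
  \hat{N}_s \;=\; \hat{D}_s\, A_s
\]

where n is the number of sightings in stratum s, \hat{E}[c] the expected cluster size, L the total transect length, w the truncation half-width, \hat{p} the estimated detection probability within w, and A_s the stratum area.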
Starting with marine animal observations at specific coordinates and times, I combine these data with environmental data, often satellite-derived, to produce seascape predictions generalizable in space and time. These habitat-based models enable prediction of encounter rates and, in the case of density surface models, abundance, which can then be applied to management scenarios. Specific human activities, OWED and shipping, are then compared within a tradeoff decision support framework, enabling interchangeable map and tradeoff plot views. These products make complex processes transparent, allowing conservation interests, industry and stakeholders to game scenarios towards optimal marine spatial management, fundamental to the tenets of marine spatial planning, ecosystem-based management and dynamic ocean management.
Abstract:
The dissertation consists of three chapters related to the low-price guarantee marketing strategy and energy efficiency analysis. The low-price guarantee is a marketing strategy in which firms promise to charge consumers the lowest price among their competitors. Chapter 1 addresses the research question "Does a Low-Price Guarantee Induce Lower Prices?" by looking into the retail gasoline industry in Quebec, where a major branded firm started a low-price guarantee back in 1996. Chapter 2 presents a consumer welfare analysis of low-price guarantees to derive policy implications and offers a new explanation of firms' incentives to adopt a low-price guarantee. Chapter 3 develops energy performance indicators (EPIs) to measure the energy efficiency of manufacturing plants in the pulp, paper and paperboard industry.
Chapter 1 revisits the traditional view that a low-price guarantee results in higher prices by facilitating collusion. Using accurate market definitions and station-level data from the retail gasoline industry in Quebec, I conducted a descriptive analysis based on stations and price zones to compare price and sales movements before and after the guarantee was adopted. I find that, contrary to the traditional view, the stores that offered the guarantee significantly decreased their prices and increased their sales. I also build a difference-in-differences model, which quantifies the decrease in the posted price of the stores that offered the guarantee at 0.7 cents per liter. While this change is significant, I do not find a significant response in competitors' prices. The sales of the stores that offered the guarantee increased significantly while the competitors' sales decreased significantly; however, the significance vanishes if I use station-clustered standard errors. Comparing my observations with the predictions of different theories of modeling low-price guarantees, I conclude that the empirical evidence supports the view that the low-price guarantee is a simple commitment device that induces lower prices.
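The difference-in-differences model mentioned above conventionally takes a two-group, two-period form; a generic specification, with variable names assumed for illustration rather than taken from the chapter, is:

% Generic DiD specification; delta is the estimated guarantee effect.
\[
  p_{it} \;=\; \alpha \;+\; \beta\, G_i \;+\; \gamma\, \mathrm{Post}_t \;+\; \delta \left( G_i \times \mathrm{Post}_t \right) \;+\; \varepsilon_{it}
\]

where p_it is the posted price at station i in period t, G_i indicates stores that offered the guarantee, Post_t indicates the post-adoption period, and δ corresponds to the estimated 0.7 cents-per-liter decrease.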
Chapter 2 conducts a consumer welfare analysis of low-price guarantees to address antitrust concerns and potential government regulation, and explains firms' potential incentives to adopt a low-price guarantee. Using station-level data from the retail gasoline industry in Quebec, I estimate consumers' demand for gasoline with a structural model of spatial competition that incorporates the low-price guarantee as a commitment device, which allows firms to pre-commit to charging the lowest price among their competitors. The counterfactual analysis under a Bertrand competition setting shows that the stores that offered the guarantee attracted substantially more consumers and decreased their posted price by 0.6 cents per liter. Although the matching stores suffered a decrease in profits from gasoline sales, they are incentivized to adopt the low-price guarantee to attract more consumers to the store, likely increasing profits at attached convenience stores. Firms have strong incentives to adopt a low-price guarantee on the product about which their consumers are most price-sensitive, while earning a profit from the products not covered by the guarantee. I estimate that consumers earn about 0.3% more surplus when the low-price guarantee is in place, which suggests that the authorities need not be concerned about, or regulate, low-price guarantees. In Appendix B, I also propose an empirical model to examine how low-price guarantees would change consumer search behavior and whether consumer search plays an important role in estimating consumer surplus accurately.
Chapter 3, joint with Gale Boyd, describes work with the pulp, paper, and paperboard (PP&PB) industry to provide a plant-level indicator of energy efficiency for facilities that produce various types of paper products in the United States. Organizations that implement strategic energy management programs undertake a set of activities that, if carried out properly, have the potential to deliver sustained energy savings. Energy performance benchmarking is a key activity of strategic energy management and one way to enable companies to set energy efficiency targets for manufacturing facilities. The opportunity to assess plant energy performance through a comparison with similar plants in its industry is a highly desirable and strategic method of benchmarking for industrial energy managers. However, access to energy performance data for conducting industry benchmarking is usually unavailable to most industrial energy managers. The U.S. Environmental Protection Agency (EPA), through its ENERGY STAR program, seeks to overcome this barrier through the development of manufacturing sector-based plant energy performance indicators (EPIs) that encourage U.S. industries to use energy more efficiently. In the development of the energy performance indicator tools, consideration is given to the role that performance-based indicators play in motivating change, and to the steps necessary for indicator development, from interacting with an industry and securing adequate data to the actual application and use of an indicator once complete. How indicators are employed in EPA's efforts to encourage industries to voluntarily improve their use of energy is discussed as well. The chapter describes the data and statistical methods used to construct the EPI for plants within selected segments of the pulp, paper, and paperboard industry: specifically, pulp mills and integrated paper & paperboard mills. The individual equations are presented, as are the instructions for using them as implemented in an associated Microsoft Excel-based spreadsheet tool.
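A heavily simplified sketch of the benchmarking idea behind an EPI — regress plant energy use on production and score each plant by the percentile of its residual — is shown below. The data, functional form, and scoring rule are illustrative assumptions; the published EPIs use more elaborate statistical models.

import numpy as np

# Illustrative plant data: energy use vs. production (values invented).
rng = np.random.default_rng(2)
tonnes = rng.uniform(50, 500, 200)                # production (tonnes)
energy = 3.0 * tonnes + rng.normal(0, 40, 200)    # energy use (MWh)

# Benchmark: regress energy on production, then rank each plant by how far
# below (better) or above (worse) the industry line it falls.
coef = np.polyfit(tonnes, energy, 1)
resid = energy - np.polyval(coef, tonnes)

# Score = share of plants with a larger (worse) residual, on a 0-100 scale.
score = 100.0 * (resid[None, :] > resid[:, None]).mean(axis=1)
print(f"plant 0 energy performance score: {score[0]:.0f}")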
Abstract:
The rate of accumulation of a ferromanganese coating on a fragment of pillow basalt was estimated using a variety of techniques. The decrease of unsupported ²³⁰Th activity in the oxide layer, K/Ar dating of the basalt, fission-track dating of the glassy layer around the basalt, the thickness of the palagonitization rind, and the integrated ²³⁰Th activity give ages from approximately 3 × 10⁶ years to 5 × 10³ years. The data suggest that the ferromanganese material formed rapidly (33 mm/10⁶ years) by hydrothermal or volcanic processes.
Abstract:
The Olivia framework is a set of concepts and measures that, when mature, will allow users to describe, in a consistent and integrated manner, everything about individuals and institutions that is of potential interest to social policy. The present paper summarizes the current stage of development in achieving this highly ambitious goal. The current version of the framework supports analysis of social trends and policy responses from many perspectives:
• The point-in-time, resource-flow perspectives that underlie most traditional, economics-based policy analysis.
• Life-course perspectives, including both transitions/trajectories analysis and asset-based analysis.
• Spatial perspectives that anchor people in space and history and that provide a link to macro-analysis.
• The perspective of the purposes/goals of individuals and institutions, including the objectives of different types of government programming.
The concepts of the framework, which are all potentially measurable, provide a language that can support integrated analysis in all these areas at a much finer level of description than is customary. It is a language especially well suited to analysis of the incremental policy changes that are typical of a mature welfare state. It supports both qualitative and quantitative analysis, enabling some integration between the two. It supports a citizen-centric as well as a government-centric view of social policy. In the current version, the concepts are most highly developed as they relate to social policies concerning labour markets, equality and social integration, care-giving, immigration, income security, sustainability, and social and economic well-being more generally. However, the paper points to likely extensions in the areas of health, justice and safety.
Abstract:
Since the 1950s the global consumption of natural resources has skyrocketed, both in magnitude and in the range of resources used. Closely coupled with emissions of greenhouse gases, land consumption, pollution of environmental media, and degradation of ecosystems, as well as with economic development, increasing resource use is a key issue to be addressed in order to keep planet Earth in a safe and just operating space. This requires thinking about absolute reductions in resource use and associated environmental impacts and, in the context of the current re-focusing on economic growth at the European level, absolute decoupling, i.e., maintaining economic development while absolutely reducing resource use and associated environmental impacts. Changing the behavioural, institutional and organisational structures that lock in unsustainable resource use is thus a formidable challenge, as existing world views, social practices, infrastructures and power structures make initiating change difficult. Hence, policy mixes are needed that target different drivers in a systematic way. When designing policy mixes for decoupling, the effect of individual instruments on other drivers and on other instruments in a mix should be considered, and potential negative effects mitigated. This requires smart and time-dynamic policy packaging. This Special Issue investigates the following research questions: What is decoupling and how does it relate to resource efficiency and environmental policy? How can we develop and realize policy mixes for decoupling economic development from resource use and associated environmental impacts? And how can we do this in a systemic way, so that all relevant dimensions and linkages—including across economic and social issues, such as production, consumption, transport, growth and wellbeing—are taken into account? In addressing these questions, the overarching goals of this Special Issue are to: address the challenges related to more sustainable resource use; contribute to the development of successful policy tools and practices for sustainable development and resource efficiency (particularly through the exploration of socio-economic, scientific, and integrated aspects of sustainable development); and inform policy debates and policy-making. The Special Issue draws on findings from the EU and other countries to offer lessons of international relevance for policy mixes for more sustainable resource use, with findings of interest to policy makers in central and local government and NGOs, decision makers in business, academics, researchers, and scientists.
Abstract:
Lactase persistence, the ability to digest the milk sugar lactose in adulthood, is highly associated with a T allele situated 13,910 bp upstream from the actual lactase gene in Europeans. The frequency of this allele rose rapidly in Europe after transition from hunter–gatherer to agriculturalist lifestyles and the introduction of milkable domestic species from Anatolia some 8000 years ago. Here we first introduce the archaeological and historic background of early farming life in Europe, then summarize what is known of the physiological and genetic mechanisms of lactase persistence. Finally, we compile the evidence for a co-evolutionary process between dairying culture and lactase persistence. We describe the different hypotheses on how this allele spread over Europe and the main evolutionary forces shaping this process. We also summarize three different computer simulation approaches, which offer a means of developing a coherent and integrated understanding of the process of spread of lactase persistence and dairying.
Abstract:
This paper presents the state of the art and some of the main issues discussed in relation to transnational migration and reproductive work in southern Europe. We begin with a genealogy of the complex theoretical development that led to the consolidation of this research program, linking the consideration of gender with transnational migration and with the transformation of work and modes of survival, thereby treating the productive and the reproductive spheres together in a context of globalization. We then analyze the multiscale reconfiguration of social reproduction and care, with particular attention to its present global dimension, pointing to the turning point of this line of research at the beginning of this century, with the rise of notions such as "global care chains" (Hochschild, 2001) and "care drain" (Ehrenreich and Hochschild, 2013). The role of a new agency is also recognized, composed in many cases of women who migrate to other countries or continents precisely to undertake these reproductive activities. Finally, reference is made to some of the new conceptual and theoretical developments in this area.
Abstract:
Situational awareness is achieved naturally by the human senses of sight and hearing in combination. Automatic scene understanding aims at replicating this human ability using microphones and cameras in cooperation. In this paper, audio and video signals are fused and integrated at different levels of semantic abstraction. We detect and track a speaker who is relatively unconstrained, i.e., free to move indoors within an area larger than in comparable reported work, which is usually limited to round-table meetings. The system is relatively simple, consisting of just 4 microphone pairs and a single camera. Results show that the overall multimodal tracker is more reliable than single-modality systems, tolerating large occlusions and cross-talk. System evaluation is performed on both single- and multi-modality tracking. The performance improvement given by the audio–video integration and fusion is quantified in terms of tracking precision and accuracy as well as speaker diarisation error rate and precision–recall (recognition). Improvements over the closest works are: a 56% improvement in sound source localisation computational cost relative to an audio-only system, an 8% improvement in speaker diarisation error rate over an audio-only speaker recognition unit, and a 36% improvement on the precision–recall metric over an audio–video dominant-speaker recognition method.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Background and Problem: The changing business world and growing demands from stakeholders have resulted in the establishment of new types of reports, among them sustainability reports and Integrated Reporting. Traditional financial reports, by contrast, do not capture the significance of intangible assets in modern entities. Social and relationship capital has further proven important for firms, especially healthcare companies and pharmaceuticals, but is less developed than the other capitals within the <IR> framework and therefore not always included in annual reports; too few disclosures in this area could, however, lead to high liabilities. The IIRC launched the <IR> framework in 2013 as a solution, as it gives a more comprehensive view of the reporting entity. Within this framework there are six capitals: manufactured, human, financial, natural, intellectual, and social and relationship. Purpose: The purpose of this thesis is to find out how the International <IR> Framework has influenced the reporting of social and relationship disclosures within the healthcare industry, to compare the reporting of the six medical firms chosen, and to examine how social concerns have developed over time. Delimitations: This study covers a period of three years, from 2012 to 2014. It examines only healthcare companies that use the International <IR> framework and focuses solely on the social and relationship capital; all other capitals within the <IR> framework are excluded from the study. Method: This study follows a qualitative research strategy and is based on information collected from published documents in the form of annual reports. The annual reports from 2012, 2013 and 2014 are used to find social and relationship disclosures, and a disclosure scoreboard is used to find similarities, differences and patterns. Empirical Results and Conclusion: The aggregated social and relationship disclosures were found to have decreased over time. The year following the release of the <IR> framework showed the fewest disclosures, leading to the conclusion that the <IR> framework had a negative influence on social and relationship disclosures. There were also differences among the companies studied, both in extent and in content: the former could be linked to factors such as size and nationality, the latter to reputation preservation and legitimacy interests.
Abstract:
Formula SAE (Society of Automotive Engineers) is a student competition consisting of the design and manufacture of a single-seat racing car. Numerous events are organized each year in which universities compete against one another in dynamic and static trials, including design evaluation, manufacturing cost evaluation, car acceleration, and more. With more than 500 participating universities and annual events on every continent, it is the largest student engineering competition in the world. The ULaval Racing team participated for more than 20 years in the annual competitions reserved for combustion-engine cars. To adapt to the electrification of transport and the new competitions for electric cars, the team designed and built a high-performance electric powertrain for its 2015 car. The traditional approach to designing an electric drivetrain is to impose the desired performance: the maximum gradient the car must be able to climb, the desired range, and a speed-versus-time profile, or simply a drive cycle. Unfortunately, this approach is not appropriate for designing the electric drivetrain of a Formula SAE car: since the vehicle is not intended for urban or highway driving, existing drive cycles are not representative of the operating conditions of the car to be designed. Carrying out this project therefore required identifying the drive cycle over which the vehicle must operate. This cycle serves as the starting point for the design of the powertrain, composed of the motors, the battery and the voltage inverters. A system-sizing method based on a genetic optimization algorithm, followed by a local optimization coupled with a finite-element analysis, yielded an optimal solution for Formula SAE-type circuits. The powertrain was built and integrated into a ULaval Racing prototype car during the 2015 season in order to compete in various electric car competitions.
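As a minimal sketch of the genetic sizing idea described above (the design variables, bounds, and fitness function are invented placeholders, not the team's actual vehicle model):

import random

# Hypothetical design variables: (motor torque [Nm], battery capacity [kWh]).
BOUNDS = [(10.0, 60.0), (3.0, 10.0)]

def fitness(ind):
    """Placeholder objective: reward lap-relevant torque, penalise the mass
    that extra battery capacity would add (entirely illustrative numbers)."""
    torque, kwh = ind
    return torque - 4.0 * kwh

def ga(pop_size=30, generations=50, mut_rate=0.2):
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                     # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]    # blend crossover
            for i, (lo, hi) in enumerate(BOUNDS):          # bounded mutation
                if random.random() < mut_rate:
                    child[i] = min(hi, max(lo, child[i] + random.gauss(0, 2)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(ga())

In the actual workflow described above, the fitness evaluation would be replaced by the drive-cycle simulation of the powertrain, and the GA's best candidate would then be refined by local optimization coupled with finite-element analysis.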