959 results for default externalities


Relevance:

10.00%

Publisher:

Abstract:

Information and Communication Technology (ICT) is becoming increasingly central to many people’s lives, making it possible to be connected in any place at any time, be unceasingly and instantly informed, and benefit from greater economic and educational opportunities. With all the benefits afforded by these new-found capabilities, however, come potential drawbacks. A plethora of new PCs, laptops, tablets, smartphones, Bluetooth, the internet, Wi-Fi (the list goes on) expects us to know, or be able to guess, what, where and when to connect, click, double-click, tap, flick or scroll in order to realise these benefits, and to have the physical and cognitive capability to do all these things. One of the groups most affected by this increase in high-demand technology is older people. They do not understand and use technology in the same way that younger generations do, because they grew up in the simpler electro-mechanical era and embedded that particular model of the world in their minds. Any consequential difficulty in familiarising themselves with modern ICT and effectively applying it to their needs can also be exacerbated by age-related changes in vision, motor control and cognitive functioning. Such challenges lead to digital exclusion. Much has been written about this topic over the years, usually by academics from the area of inclusive product design. The issue is complex, and it is fair to say that no one researcher has the whole picture. It is difficult to understand and adequately address the issue of digital exclusion among the older generation without looking across disciplines and at industry’s and government’s understanding, motivation and efforts toward resolving this important problem. To do otherwise is to risk misunderstanding the true impact that ICT has and could have on people’s lives across all generations. In this European Year of Active Ageing and Solidarity between Generations, and as the British government moves forward with its Digital by Default initiative as part of a wider objective to make ICT accessible to as many people as possible by 2015, the Engineering Design Centre (EDC) at the University of Cambridge collaborated with BT to produce a book of thought pieces to address, and where appropriate redress, these important and long-standing issues. “Ageing, Adaption and Accessibility: Time for the Inclusive Revolution!” brings together opinions and insights from twenty-one prominent thought leaders from government, industry and academia regarding the problems, opportunities and strategies for combating digital exclusion among senior citizens. The contributing experts were selected as individuals, rather than as representatives of organisations, to provide the broadest possible range of perspectives. They are renowned in their respective fields, and their opinions are formed not only from their own work but also from the contributions of others in their area. Their views were elicited through conversations conducted by the editors of this book, who then drafted the thought pieces to be edited and approved by the experts. We hope that this unique collection of thought pieces will give you a broader perspective on ageing and people’s adaptation to the ever-changing world of technology, and insights into better ways of designing digital devices and services for the older population.

Relevance:

10.00%

Publisher:

Abstract:

Ideally, one would like to perform image search using an intuitive and friendly approach. Many existing image search engines, however, present users with sets of images arranged in some default order on the screen, typically only by relevance to a query. While this certainly has its advantages, a more flexible and intuitive way would arguably be to sort images into arbitrary structures such as grids, hierarchies, or spheres so that images that are visually or semantically alike are placed together. This paper focuses on designing such a navigation system for image browsers. This is a challenging task because an arbitrary layout structure makes it difficult - if not impossible - to compute cross-similarities between images and structure coordinates, the main ingredient of traditional layout approaches. For this reason, we resort to a recently developed machine learning technique: kernelized sorting. It is a general technique for matching pairs of objects from different domains without requiring cross-domain similarity measures and hence elegantly allows sorting images into arbitrary structures. Moreover, we extend it so that some images can be preselected, for instance to form the tip of the hierarchy, allowing users to subsequently navigate through the search results in the lower levels in an intuitive way. Copyright 2010 ACM.
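The following is a minimal Python sketch of the kernelized sorting optimization underlying the approach described above, assuming precomputed kernel (similarity) matrices K over images and L over layout positions. It uses the standard successive-linear-assignment relaxation and illustrative names; it does not reproduce the paper's exact procedure or its extension for preselected images.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def kernelized_sorting(K, L, n_iter=50):
    """Assign n objects (kernel K) to n layout positions (kernel L) by
    maximising a dependence score tr(P^T Kc P Lc) over permutation
    matrices P, relaxed to a sequence of linear assignment problems."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    Kc, Lc = H @ K @ H, H @ L @ H              # centered kernels (HSIC-style)
    P = np.eye(n)                              # start from the identity assignment
    for _ in range(n_iter):
        profit = Kc @ P @ Lc                   # linearized objective around current P
        rows, cols = linear_sum_assignment(-profit)  # maximise total profit
        P_new = np.zeros((n, n))
        P_new[rows, cols] = 1.0
        if np.allclose(P_new, P):              # assignment stabilised
            break
        P = P_new
    return P                                   # P[i, j] = 1 means image i -> position j
```

In this reading, K could be built from visual or semantic image features and L from grid, hierarchy, or sphere coordinates; preselecting images, as the paper does for the tip of the hierarchy, would correspond to constraining the matching rows of P before solving.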

Relevance:

10.00%

Publisher:

Abstract:

Biofuels are increasingly promoted worldwide as a means of reducing greenhouse gas (GHG) emissions from transport. However, current regulatory frameworks and most academic life cycle analyses adopt a deterministic approach in determining the GHG intensities of biofuels and thus ignore the inherent risk associated with biofuel production. This study aims to develop a transparent stochastic method for evaluating UK biofuels that determines both the magnitude and uncertainty of GHG intensity on the basis of current industry practices. Using wheat ethanol as a case study, we show that the GHG intensity could span a range of 40-110 gCO2e MJ⁻¹ when land use change (LUC) emissions and various sources of uncertainty are taken into account, as compared with a regulatory default value of 44 gCO2e MJ⁻¹. This suggests that the current deterministic regulatory framework underestimates wheat ethanol GHG intensity and thus may not be effective in evaluating transport fuels. Uncertainties in determining the GHG intensity of UK wheat ethanol include limitations of available data at a localized scale and significant scientific uncertainty in parameters such as soil N2O and LUC emissions. Biofuel policies should be robust enough to incorporate the currently irreducible uncertainties and flexible enough to be readily revised when better science is available. © 2013 IOP Publishing Ltd.
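To illustrate the kind of stochastic evaluation described above, the sketch below propagates uncertain emission terms through a simple per-MJ GHG balance by Monte Carlo sampling. All distributions and numbers are hypothetical placeholders, not the study's data or the regulatory methodology.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # Monte Carlo draws

# Illustrative (hypothetical) per-MJ emission terms for an ethanol pathway,
# each sampled from a plausible distribution instead of a single point value.
cultivation = rng.normal(20.0, 3.0, N)             # fertiliser, diesel, ...
soil_n2o    = rng.lognormal(np.log(8.0), 0.5, N)   # highly uncertain term
processing  = rng.normal(15.0, 2.0, N)
luc         = rng.triangular(0.0, 10.0, 40.0, N)   # land-use change emissions
credit      = rng.normal(-5.0, 1.0, N)             # co-product credit

intensity = cultivation + soil_n2o + processing + luc + credit  # gCO2e/MJ

print(f"median intensity      = {np.median(intensity):.0f} gCO2e/MJ")
print(f"5th-95th percentile   = {np.percentile(intensity, 5):.0f}"
      f"-{np.percentile(intensity, 95):.0f} gCO2e/MJ")
```

The output is a distribution rather than a single number, which is the point of contrast with a deterministic default value.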

Relevance:

10.00%

Publisher:

Abstract:

Each stage in the life cycle of coal (extraction, transport, processing, and combustion) generates a waste stream and carries multiple hazards for health and the environment. These costs are external to the coal industry and are thus often considered "externalities." We estimate that the life cycle effects of coal and the waste stream generated are costing the U.S. public a third to over one-half of a trillion dollars annually. Many of these so-called externalities are, moreover, cumulative. Accounting for the damages conservatively doubles to triples the price of electricity from coal per kWh generated, making wind, solar, and other forms of nonfossil fuel power generation, along with investments in efficiency and electricity conservation methods, economically competitive. We focus on Appalachia, though coal is mined in other regions of the United States and is burned throughout the world.

Relevance:

10.00%

Publisher:

Abstract:

In this work, we performed an evaluation of the decay heat power of advanced, fast-spectrum, lead- and molten salt-cooled reactors with flexible conversion ratio. The decay heat power was calculated using the BGCore computer code, which explicitly tracks over 1700 isotopes in the fuel throughout its burnup and subsequent decay. In the first stage, the capability of the BGCore code to accurately predict the decay heat power was verified by performing a benchmark calculation for a typical UO2 fuel in a Pressurized Water Reactor environment against the ANSI/ANS-5.1-2005 standard ("Decay Heat Power in Light Water Reactors," American National Standard). Very good agreement (within 5%) between the two methods was obtained. Once BGCore's calculation capabilities were verified, we calculated decay power for fast reactors with different coolants and conversion ratios, for which no standard procedure is currently available. Notable differences were observed between the decay power of the advanced reactors and that of the conventional UO2 LWR. The importance of the observed differences was demonstrated by performing a simulation of a Station Blackout transient with the RELAP5 computer code for a lead-cooled fast reactor. The simulation was performed twice: using the code-default ANS-79 decay heat curve, and using the curve calculated specifically for the studied core by the BGCore code. The differences in the decay heat power resulted in failure to meet the maximum cladding temperature limit criterion by ∼100 °C in the latter case, while in the transient simulation with the ANS-79 decay heat curve all safety limits were satisfied. The results of this study show that the design of new reactor safety systems must be based on decay power curves specific to each individual case in order to assure the desired performance of these systems. © 2009 Elsevier B.V. All rights reserved.
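For context on why the shape of the decay heat curve matters in such a transient, the sketch below evaluates one common textbook correlation (a Way-Wigner-type approximation) for decay power as a fraction of operating power after shutdown. It is only a generic illustration of the quantity involved, not the ANS-79 curve, the ANSI/ANS-5.1-2005 procedure, or the BGCore-computed curve discussed above; the operating time and constant are illustrative.

```python
def decay_heat_fraction(t_s, T_op_s):
    """Way-Wigner-type approximation for decay power as a fraction of the
    operating power, t_s seconds after shutdown following T_op_s seconds of
    operation. Generic textbook form only, not a standard decay heat curve."""
    return 0.066 * (t_s ** -0.2 - (t_s + T_op_s) ** -0.2)

# Example: decay power 1 hour and 1 day after shutdown of a core operated
# for 2 years (all values illustrative only).
T_op = 2 * 365 * 24 * 3600.0
for t in (3600.0, 24 * 3600.0):
    print(f"t = {t:>8.0f} s : P/P0 = {decay_heat_fraction(t, T_op):.4f}")
```

Even small shifts in such a curve translate into different integrated heat loads during a Station Blackout, which is why a core-specific curve can change whether cladding temperature limits are met.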

Relevance:

10.00%

Publisher:

Abstract:

A transmission Volume Phase Holographic Grating (VPHG) is adopted as the spectral element in a real-time Optical Channel Performance Monitor (OCPM), which is urgently needed in Dense Wavelength Division Multiplexing (DWDM) systems. The tolerance of the incident angle, which can be fully determined by two angles, θ and φ, is derived in this paper. Commonly, when the incident angle is mentioned, the default setting is that the incident plane is perpendicular to the fringes; here the situation away from the perpendicular is discussed. By combining the theoretical analysis of the VPHG with its use in the OCPM and changing θ and φ precisely in computation and experiment, the two physical quantities that fully specify the performance of the VPHG, the diffraction efficiency and the resolution, are analyzed. The results show that the diffraction efficiency varies greatly with changes in θ or φ, but over the whole C-band only the peak diffraction efficiency drifts to another wavelength. As for the resolution, it deteriorates more rapidly than the diffraction efficiency with changes in φ, and more slowly with changes in θ. Only if |φ| ≤ 1° and α_B − 0.5° ≤ θ ≤ α_B + 0.5° is the performance of the VPHG good enough for use in an OCPM system.

Relevance:

10.00%

Publisher:

Abstract:

The evaluation of forest ecosystem services and the study of how human harvesting activities affect them are key topics and a major current direction in ecosystem-service research. Taking the Baihe Forestry Bureau as the study area and three forest farms as typical representatives, this thesis used existing models, made full use of forest management inventory (second-class survey) data combined with the FORESTAR decision support system, and applied GIS techniques and statistical methods to study the spatial and temporal changes of forest ecosystem services. Through simulated harvesting decisions, it predicted the impact of different harvesting schemes on these services, aiming to reveal the relationship between harvesting and forest ecosystem services. At the same time, economic compensation was taken as a means of resolving the externality problem of forest ecological benefits: a conceptual model of economic compensation was established and the question of the compensation standard was explored in some depth. The main conclusions are as follows. (1) Among the benefits of water conservation, soil and nutrient retention, carbon dioxide absorption, air purification and windbreak/sand fixation, the unit value of water conservation contributes the most and occupies an extremely important position among forest ecosystem services. (2) The changes in the ecosystem services of the Huangsongpu forest farm between 1987 and 2000 were mainly due to changes in canopy density and stand age; changes in the spatial distribution of forest resources caused the economic value of ecosystem services to decrease by nearly 40% over the 20-year period. (3) When different harvesting schemes were simulated with the FORESTAR decision support system, the unit benefit values of the harvested sub-compartments all decreased markedly; after simulated harvesting, the total economic value of the three forest farms decreased by 353.83 yuan/hm², 448.62 yuan/hm² and 457.13 yuan/hm², respectively. (4) Expressing the compensation standard as a compensation coefficient multiplied by the value of forest services fully reflects both people's willingness to pay for forests as a public good under given social production conditions and living standards, and the nature of the forest itself. The calculated compensation coefficient is 0.412, and the average annual compensation fees of the three forest farms are 1067 yuan/hm², 1161 yuan/hm² and 1314 yuan/hm², respectively, which is nearly three times the opportunity cost per unit area. (5) For the implementation of the compensation policy, it is suggested that a multi-level economic compensation system be established and improved, with compensation funds obtained through national fiscal appropriations, cross-regional compensation and the adjustment of local taxation; introducing cross-regional compensation and an ecological tax item into the tax system can guarantee the source of compensation funds.
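As a small numerical illustration of the compensation-standard relation in conclusion (4), where the compensation fee equals the compensation coefficient times the annual value of forest ecosystem services, the sketch below applies the reported coefficient of 0.412 to a hypothetical per-hectare service value; that service value is a placeholder, not a figure taken from the thesis.

```python
# Illustrative only: the coefficient 0.412 is reported in the abstract,
# but the per-hectare service value below is a made-up placeholder.
compensation_coefficient = 0.412      # dimensionless, from the abstract
annual_service_value = 2600.0         # yuan/hm^2 per year, hypothetical

compensation_fee = compensation_coefficient * annual_service_value
print(f"compensation fee ≈ {compensation_fee:.0f} yuan/hm^2 per year")

# The abstract notes the fee is roughly 3x the per-hectare opportunity cost,
# so the implied opportunity cost under this placeholder value would be:
print(f"implied opportunity cost ≈ {compensation_fee / 3:.0f} yuan/hm^2 per year")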

Relevance:

10.00%

Publisher:

Abstract:

The Simulating WAves Nearshore (SWAN) wave model has been widely used in coastal areas, lakes and estuaries. However, we found poor agreement between model results and measurements when we used the default parameters of the SWAN source function formulas to simulate waves in the Bohai Sea for four typical cases. It was also found that, for the same wind process, the simulated results of the two wind generation expressions (Komen and Janssen) differed greatly. Further study showed that the proportionality coefficient alpha in the linear growth term of the wave growth source function plays an unperceived role in the process of wave development. Based on experiments and analysis, we concluded that the coefficient alpha should vary rather than be a constant. Therefore, a coefficient alpha that changes with the friction velocity U* was introduced into the linear growth term of the wave growth source function. Four weather processes were used to validate the improvement in the linear growth term. The results obtained with the improved coefficient alpha agree much better with the measurements than those obtained with the default constant coefficient alpha. Furthermore, the large differences between the results of the Komen and Janssen wind generation expressions were eliminated. We also used the four weather processes to test the new white-capping mechanisms based on the cumulative steepness method. It was found that the parameters of the new white-capping mechanisms are not suitable for the Bohai Sea, but Alkyon's white-capping mechanism can be applied to the Bohai Sea after amendment, demonstrating that this improvement of the parameter alpha can improve simulated results for the Bohai Sea.
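The sketch below illustrates, in schematic form, the modification described above: replacing a constant proportionality coefficient alpha in a linear wave-growth source term with one that scales with the friction velocity U*. The functional forms and constants here are placeholders for illustration only; they are not SWAN's actual source code nor the calibrated relationship used in the paper.

```python
def linear_growth_term(u_star, alpha):
    """Schematic linear growth source term proportional to alpha and to a
    power of the friction velocity. Placeholder form: SWAN's full expression
    (directional factor, filter term, etc.) is deliberately not reproduced."""
    return alpha * u_star ** 4

def alpha_default():
    """Constant default coefficient (placeholder value)."""
    return 1.5e-3

def alpha_improved(u_star, a=1.0e-3, b=5.0e-4, u_ref=0.3):
    """Hypothetical coefficient that grows with friction velocity instead of
    staying constant, mimicking the idea proposed in the study."""
    return a + b * (u_star / u_ref)

for u_star in (0.1, 0.3, 0.6):  # friction velocities in m/s
    s_const = linear_growth_term(u_star, alpha_default())
    s_var = linear_growth_term(u_star, alpha_improved(u_star))
    print(f"U* = {u_star:.1f} m/s  S_lin(const alpha) = {s_const:.2e}"
          f"  S_lin(variable alpha) = {s_var:.2e}")
```

The point of the comparison is only that letting alpha track U* changes how quickly young waves are seeded under strong winds, which is where the constant-alpha runs disagreed with the Bohai Sea measurements.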

Relevance:

10.00%

Publisher:

Abstract:

Seismic exploration is the main tool of petroleum exploration. As society's need for petroleum grows and the level of exploration rises, exploration in areas of complex geological structure has become the main task of the oil industry. Seismic prestack depth migration emerged for this reason, since it has good imaging ability for complex structures; because its result depends strongly on the velocity model, velocity model building for prestack depth migration has become a major research area. In this thesis, the differences in seismic prestack depth migration between our country and other countries are analyzed systematically, and three developments are presented: a tomographic velocity analysis method that requires no layered velocity model, a residual curvature velocity analysis method based on the velocity model, and a method for removing pre-processing errors. The tomographic approach to velocity analysis is examined first. It is characterized by theoretical completeness but is difficult to apply. The method uses picked first arrivals, compares the picked first-arrival times with the arrival times computed in a theoretical velocity model, and back-projects the differences along the ray paths to obtain a new velocity model. It relies only on the high-frequency assumption, with no other hypotheses, so it is effective and efficient. It still has shortcomings, however: picking first arrivals in prestack data is difficult because the signal-to-noise ratio is very low and many events cross each other, and these problems are especially severe in areas of complex geological structure. On this basis, a new tomographic velocity analysis method without a layered velocity model is developed, aimed at solving the picking problem. Event times need not be picked continuously; picks can be made at arbitrary positions according to their reliability. The method requires not only the picked times, as in routine tomography, but also the slopes of events, and a high-resolution slope analysis method is used to improve the picking precision. In addition, research on residual curvature velocity analysis shows that its application is limited and its efficiency low, because its assumptions are rigid and it is a local optimization method, so it cannot solve the velocity problem in areas with strong lateral velocity variation; a new method is therefore developed to improve the precision of velocity model building. So far, the practice of seismic prestack depth migration here is the same as abroad: before velocity model building, the original seismic data must be corrected to a datum plane, and then prestack depth migration is performed. The well-known successful example is the Gulf of Mexico, which is characterized by a simple near-surface structure, so the pre-processing is simple and its precision high. In our country, however, most seismic work is on land, where the near-surface layer is complex and in some areas the pre-processing error is large, which affects velocity model building. On this basis, a new method is developed to remove the pre-processing error and improve the precision of velocity model building. The main contributions are: (1) an effective tomographic velocity-building method without a layered velocity model; (2) a new high-resolution slope analysis method; (3) a globally optimized residual-curvature velocity-building method based on the velocity model; (4) an effective method for removing pre-processing errors.
All of the methods listed above have been verified by theoretical calculations and by application to actual seismic data.
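To make the back-projection idea concrete, the sketch below shows one classical way of doing it: a SIRT-style traveltime tomography update that distributes each first-arrival residual along its ray path in proportion to the ray length in each cell. It is a generic illustration of the principle only, not the thesis's method (which also uses event slopes and does not require a layered starting model); all names and numbers are illustrative.

```python
import numpy as np

def sirt_update(slowness, ray_paths, observed_times, relax=0.1):
    """One SIRT-style iteration of traveltime tomography.

    slowness       : 1-D array of cell slownesses (s/m), the current model
    ray_paths      : list of 1-D arrays, path length (m) of each ray in every cell
    observed_times : picked first-arrival times (s), one per ray
    """
    updates = np.zeros_like(slowness)
    counts = np.zeros_like(slowness)
    for lengths, t_obs in zip(ray_paths, observed_times):
        t_calc = lengths @ slowness              # predicted traveltime for this ray
        residual = t_obs - t_calc                # traveltime misfit
        norm = lengths @ lengths
        if norm > 0:
            updates += residual * lengths / norm  # back-project along the ray
            counts += (lengths > 0)
    counts[counts == 0] = 1
    return slowness + relax * updates / counts    # averaged, relaxed update

# Toy example: 4 cells crossed by 3 rays of known geometry (values illustrative).
slowness = np.full(4, 1.0 / 2000.0)               # start at 2000 m/s everywhere
rays = [np.array([100.0, 100.0, 0.0, 0.0]),
        np.array([0.0, 100.0, 100.0, 0.0]),
        np.array([50.0, 50.0, 50.0, 50.0])]
t_obs = np.array([0.11, 0.10, 0.105])
for _ in range(50):
    slowness = sirt_update(slowness, rays, t_obs)
print(1.0 / slowness)                             # recovered cell velocities
```

The picking difficulty discussed in the abstract enters here through `observed_times`: if first arrivals cannot be picked reliably, the residuals that drive the update are unreliable as well.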

Relevance:

10.00%

Publisher:

Abstract:

Eight experiments tested how object-array structure and learning location influence the establishment and utilization of self-to-object and object-to-object spatial representations in locomotion and reorientation. In Experiments 1 to 4, participants learned object locations either at the periphery of or amidst a regular or irregular object array, and then pointed to objects while blindfolded in three conditions: before turning (baseline), after rotating 240 degrees (updating), and after disorientation (disorientation). In Experiments 5 to 8, participants received instructions to keep track of self-to-object or object-to-object spatial representations before rotation. In each condition, the configuration error, defined as the standard deviation across target objects of the mean signed pointing errors, was calculated as an index of the fidelity of the representation used in that condition. Results indicate that participants form both self-to-object and object-to-object spatial representations after learning an object array. Object-array structure influences the selection of representation during updating: by default, the object-to-object spatial representation is updated when people learn a regular object-array structure, and the self-to-object spatial representation is updated when people learn an irregular object array. But people can also update the other representation when required to do so. The fidelity of the representations constrains this kind of “switch”: people can only “switch” from a low-fidelity representation to a high-fidelity representation, or between two representations of similar fidelity; they cannot “switch” from a high-fidelity representation to a low-fidelity representation. Learning location might influence the fidelity of representations. When people learned at the periphery of the object array, they acquired both self-to-object and object-to-object spatial representations of high fidelity; but when people learned amidst the object array, they acquired only a self-to-object spatial representation of high fidelity, and the fidelity of the object-to-object spatial representation was low.
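The configuration error defined above can be computed directly from the signed pointing errors. The short sketch below does so, with made-up toy data and illustrative names.

```python
import numpy as np

def configuration_error(signed_errors, target_ids):
    """Configuration error as defined above: the standard deviation, across
    target objects, of the mean signed pointing error for each target."""
    signed_errors = np.asarray(signed_errors, dtype=float)
    target_ids = np.asarray(target_ids)
    per_target_means = [signed_errors[target_ids == t].mean()
                        for t in np.unique(target_ids)]
    return np.std(per_target_means, ddof=1)  # sample SD across per-target means

# Toy data: signed pointing errors (degrees) for 3 targets, 3 trials each.
errors  = [5, 7, 6, -4, -2, -3, 10, 12, 11]
targets = ['A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'C']
print(f"configuration error = {configuration_error(errors, targets):.1f} deg")
```

A lower configuration error corresponds to a more internally consistent (higher-fidelity) representation, which is how the "switch" constraint above is assessed.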

Relevance:

10.00%

Publisher:

Abstract:

This paper presents experimental results that aimed to investigate the effects of soil liquefaction on the modal parameters (i.e. frequency and damping ratio) of pile-supported structures. The tests were carried out using the shaking table facility of the Bristol Laboratory for Advanced Dynamics Engineering (BLADE) at the University of Bristol (UK), whereby four pile-supported structures (two single piles and two pile groups), with and without superstructure mass, were tested. The experimental investigation aimed to monitor the variation in natural frequency and damping of the four physical models at different degrees of excess pore water pressure generation and in the full-liquefaction condition. The experimental results showed that the natural frequency of pile-supported structures may decrease considerably owing to the loss of lateral support offered by the soil to the pile. On the other hand, the damping ratio of the structure may increase to values in excess of 20%. These findings have important design consequences: (a) for low-period structures, substantial reduction of spectral acceleration is expected; (b) during and after liquefaction, the response of the system may be dictated by the interactions of multiple loadings, that is, horizontal, axial and overturning moment, which were negligible prior to liquefaction; and (c) with the onset of liquefaction, owing to the increased flexibility of the pile-supported structure, larger spectral displacement may be expected, which in turn may enhance P-delta effects and consequently amplify the overturning moment. Practical implications for pile design are discussed.
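The drop in natural frequency reported above can be visualized with a single-degree-of-freedom idealization in which the pile-soil system contributes a lateral stiffness that degrades as excess pore pressure builds up. The sketch below is only a schematic illustration of that trend; the stiffness values, mass and degradation law are hypothetical and not taken from the tests.

```python
import math

def natural_frequency(k_struct, k_soil, mass):
    """Natural frequency (Hz) of an SDOF idealization in which structural and
    soil-pile lateral stiffnesses act together on a lumped mass."""
    k_total = k_struct + k_soil
    return math.sqrt(k_total / mass) / (2.0 * math.pi)

mass = 500.0             # kg, lumped superstructure mass (hypothetical)
k_struct = 2.0e5         # N/m, structural contribution (hypothetical)
k_soil_initial = 8.0e5   # N/m, lateral soil support before liquefaction (hypothetical)

# Degrade soil support with increasing excess pore pressure ratio r_u.
for r_u in (0.0, 0.5, 0.9, 1.0):
    k_soil = k_soil_initial * (1.0 - r_u)   # schematic degradation law
    f = natural_frequency(k_struct, k_soil, mass)
    print(f"r_u = {r_u:.1f}  ->  f = {f:.2f} Hz")
```

As the soil spring vanishes, the frequency falls toward the structure-only value, lengthening the period and shifting the system along the response spectrum, which is the mechanism behind design consequences (a) and (c) above.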

Relevance:

10.00%

Publisher:

Abstract:

For the past fifty years, the interest in issues beyond pure philology has been a watchword in comparative literary studies. Comparative studies, which by default employ a variety of methods, run the major risk – as the experience of American comparative literature shows – of descending into dangerous ‘everythingism’ or losing their identity. However, the field performs well when literature remains one of the segments of comparison. In such instances, it proves efficacious in exploring the ‘correspondences of arts’ and the problems of identity and multiculturalism, and it contributes to research into the transfer of ideas. Hence, it delves into phenomena which exist on the borderlines of literature, fine arts and other fields of the humanities, employing strategies of interpretation which are typical of each of those fields. This means that in the process there emerges a “borderline methodology”, whose distinctive feature is heterogeneity in conducting research. This, in turn, requires the scholar to be both ingenious and creative in selecting topics, and to possess competence in literary studies and the related field.

Relevance:

10.00%

Publisher:

Abstract:

The Internet has brought unparalleled opportunities for expanding availability of research by bringing down economic and physical barriers to sharing. The digitally networked environment promises to democratize access, carry knowledge beyond traditional research niches, accelerate discovery, encourage new and interdisciplinary approaches to ever more complex research challenges, and enable new computational research strategies. However, despite these opportunities for increasing access to knowledge, the prices of scholarly journals have risen sharply over the past two decades, often forcing libraries to cancel subscriptions. Today even the wealthiest institutions cannot afford to sustain all of the journals needed by their faculties and students. To take advantage of the opportunities created by the Internet and to further their mission of creating, preserving, and disseminating knowledge, many academic institutions are taking steps to capture the benefits of more open research sharing. Colleges and universities have built digital repositories to preserve and distribute faculty scholarly articles and other research outputs. Many individual authors have taken steps to retain the rights they need, under copyright law, to allow their work to be made freely available on the Internet and in their institution's repository. And, faculties at some institutions have adopted resolutions endorsing more open access to scholarly articles. Most recently, on February 12, 2008, the Faculty of Arts and Sciences (FAS) at Harvard University took a landmark step. The faculty voted to adopt a policy requiring that faculty authors send an electronic copy of their scholarly articles to the university's digital repository and that faculty authors automatically grant copyright permission to the university to archive and to distribute these articles unless a faculty member has waived the policy for a particular article. Essentially, the faculty voted to make open access to the results of their published journal articles the default policy for the Faculty of Arts and Sciences of Harvard University. As of March 2008, a proposal is also under consideration in the University of California system by which faculty authors would commit routinely to grant copyright permission to the university to make copies of the faculty's scholarly work openly accessible over the Internet. Inspired by the example set by the Harvard faculty, this White Paper is addressed to the faculty and administrators of academic institutions who support equitable access to scholarly research and knowledge, and who believe that the institution can play an important role as steward of the scholarly literature produced by its faculty. This paper discusses both the motivation and the process for establishing a binding institutional policy that automatically grants a copyright license from each faculty member to permit deposit of his or her peer-reviewed scholarly articles in institutional repositories, from which the works become available for others to read and cite.

Relevance:

10.00%

Publisher:

Abstract:

Recent measurement based studies reveal that most of the Internet connections are short in terms of the amount of traffic they carry (mice), while a small fraction of the connections are carrying a large portion of the traffic (elephants). A careful study of the TCP protocol shows that without help from an Active Queue Management (AQM) policy, short connections tend to lose to long connections in their competition for bandwidth. This is because short connections do not gain detailed knowledge of the network state, and therefore they are doomed to be less competitive due to the conservative nature of the TCP congestion control algorithm. Inspired by the Differentiated Services (Diffserv) architecture, we propose to give preferential treatment to short connections inside the bottleneck queue, so that short connections experience less packet drop rate than long connections. This is done by employing the RIO (RED with In and Out) queue management policy which uses different drop functions for different classes of traffic. Our simulation results show that: (1) in a highly loaded network, preferential treatment is necessary to provide short TCP connections with better response time and fairness without hurting the performance of long TCP connections; (2) the proposed scheme still delivers packets in FIFO manner at each link, thus it maintains statistical multiplexing gain and does not misorder packets; (3) choosing a smaller default initial timeout value for TCP can help enhance the performance of short TCP flows, though not as effectively as our scheme and at the risk of congestion collapse; (4) in the worst case, our proposal works as well as a regular RED scheme, in terms of response time and goodput.
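To make the preferential treatment concrete, here is a minimal sketch of RIO-style drop probabilities: two RED-like profiles, a more lenient one applied to packets of short ("in") connections and a more aggressive one for long ("out") connections. It is a simplified illustration only; the thresholds and maximum drop probabilities are not the values used in the paper's simulations, and the full RIO scheme tracks separate average queue sizes per class.

```python
def red_drop_probability(avg_queue, min_th, max_th, max_p):
    """Classic RED-style drop probability as a function of the average queue
    size: zero below min_th, linear up to max_p at max_th, then 1."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue < max_th:
        return max_p * (avg_queue - min_th) / (max_th - min_th)
    return 1.0

def rio_drop_probability(avg_queue, is_short_flow):
    """RIO-style sketch: short ('in') connections see a gentler drop profile
    than long ('out') connections (illustrative thresholds)."""
    if is_short_flow:
        return red_drop_probability(avg_queue, min_th=40, max_th=70, max_p=0.02)
    return red_drop_probability(avg_queue, min_th=20, max_th=50, max_p=0.10)

for q in (10, 30, 45, 60, 80):  # average queue length in packets
    print(f"avg queue = {q:2d}  P(drop | short) = {rio_drop_probability(q, True):.3f}"
          f"  P(drop | long) = {rio_drop_probability(q, False):.3f}")
```

Because packets of both classes still leave the queue in FIFO order, the scheme biases only the drop decision, which is how it protects short flows without reordering traffic.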

Relevance:

10.00%

Publisher:

Abstract:

Overlay networks have been used for adding and enhancing functionality to the end-users without requiring modifications in the Internet core mechanisms. Overlay networks have been used for a variety of popular applications including routing, file sharing, content distribution, and server deployment. Previous work has focused on devising practical neighbor selection heuristics under the assumption that users conform to a specific wiring protocol. This is not a valid assumption in highly decentralized systems like overlay networks. Overlay users may act selfishly and deviate from the default wiring protocols by utilizing knowledge they have about the network when selecting neighbors to improve the performance they receive from the overlay. This thesis goes against the conventional thinking that overlay users conform to a specific protocol. The contributions of this thesis are threefold. It provides a systematic evaluation of the design space of selfish neighbor selection strategies in real overlays, evaluates the performance of overlay networks that consist of users that select their neighbors selfishly, and examines the implications of selfish neighbor and server selection to overlay protocol design and service provisioning respectively. This thesis develops a game-theoretic framework that provides a unified approach to modeling Selfish Neighbor Selection (SNS) wiring procedures on behalf of selfish users. The model is general, and takes into consideration costs reflecting network latency and user preference profiles, the inherent directionality in overlay maintenance protocols, and connectivity constraints imposed on the system designer. Within this framework the notion of a user’s "best response" wiring strategy is formalized as a k-median problem on asymmetric distance and is used to obtain overlay structures in which no node can re-wire to improve the performance it receives from the overlay. Evaluation results presented in this thesis indicate that selfish users can reap substantial performance benefits when connecting to overlay networks composed of non-selfish users. In addition, in overlays that are dominated by selfish users, the resulting stable wirings are optimized to such great extent that even non-selfish newcomers can extract near-optimal performance through naïve wiring strategies. To capitalize on the performance advantages of optimal neighbor selection strategies and the emergent global wirings that result, this thesis presents EGOIST: an SNS-inspired overlay network creation and maintenance routing system. Through an extensive measurement study on the deployed prototype, results presented in this thesis show that EGOIST’s neighbor selection primitives outperform existing heuristics on a variety of performance metrics, including delay, available bandwidth, and node utilization. Moreover, these results demonstrate that EGOIST is competitive with an optimal but unscalable full-mesh approach, remains highly effective under significant churn, is robust to cheating, and incurs minimal overheads. This thesis also studies selfish neighbor selection strategies for swarming applications. The main focus is on n-way broadcast applications where each of the n overlay users wants to push its own distinct file to all other destinations as well as download their respective data files. Results presented in this thesis demonstrate that the performance of our swarming protocol for n-way broadcast on top of overlays of selfish users is far superior to the performance on top of existing overlays.
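The "best response" wiring described above, choosing k overlay neighbors so as to minimize the total cost of reaching all destinations over asymmetric distances, can be illustrated with a small exhaustive computation. The sketch below only demonstrates the objective on a toy network; the thesis formalizes the general problem as a k-median instance and additionally models preference profiles and the directionality of overlay maintenance. Names and numbers are illustrative.

```python
from itertools import combinations

def wiring_cost(me, neighbors, dist):
    """Cost of a wiring: for every other node, take the cheapest route through
    one of my chosen neighbors (one overlay hop, then the neighbor's distance
    onward). dist[u][v] is an asymmetric pairwise distance."""
    others = [v for v in dist if v != me]
    return sum(min(dist[me][n] + dist[n][v] for n in neighbors) for v in others)

def best_response(me, k, dist):
    """Exhaustive best response: the k-subset of candidate neighbors that
    minimizes my total cost. Fine for toy networks; the general case is
    treated in the thesis as a k-median problem on asymmetric distances."""
    candidates = [v for v in dist if v != me]
    return min(combinations(candidates, k),
               key=lambda neighbors: wiring_cost(me, neighbors, dist))

# Toy asymmetric distance matrix between four overlay nodes (illustrative).
dist = {
    'a': {'a': 0, 'b': 1, 'c': 4, 'd': 7},
    'b': {'a': 2, 'b': 0, 'c': 1, 'd': 5},
    'c': {'a': 6, 'b': 2, 'c': 0, 'd': 1},
    'd': {'a': 3, 'b': 6, 'c': 2, 'd': 0},
}
print(best_response('a', k=2, dist=dist))  # ('b', 'c') under this toy data
```

A stable overlay in this setting is one in which no node can lower its own `wiring_cost` by re-wiring, which is the equilibrium notion the evaluation above refers to.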
In the context of service provisioning, this thesis examines the use of distributed approaches that enable a provider to determine the number and location of servers for optimal delivery of content or services to its selfish end-users. To leverage recent advances in virtualization technologies, this thesis develops and evaluates a distributed protocol to migrate servers based on end-user demand and only on local topological knowledge. Results under a range of network topologies and workloads suggest that the performance of the distributed deployment is comparable to that of the optimal but unscalable centralized deployment.