918 results for Intensive and extensive margin
Abstract:
World and UK energy resources and use are reviewed and the role of energy conservation in energy policy identified. In considering various energy conservation measures, a distinction is made between energy intensive and non-intensive industries and also between direct and indirect uses of energy. Particular attention is given to the non-intensive user of energy. Energy use on one such industrial site has been studied to determine the most effective energy saving measures in the short term. Here it is estimated that over 65% of energy is consumed for indirect purposes, mainly for heating and lighting buildings. Emphasis is placed on energy auditing techniques and those energy saving measures requiring greater technical, economic and organisational resources to secure their implementation. Energy auditing techniques include the use of aerial thermography and snow formation surveys to detect heat losses. Qualitative and quantitative interpretations are carried out, but restricted mainly to evaluating building roof heat losses. From the energy auditing exercise, it is confirmed that the intermittent heating of buildings is the largest and most cost-effective fuel saving measure. This was implemented on the site and a heat monitoring programme established to verify results. Industrial combined heat and power generation is investigated. A proposal for the site demonstrates that there are several obstacles to its successful implementation. By adopting an alternative financial rationale, a way of overcoming these obstacles is suggested. A useful by-product of the study is the classification of industrial sites according to the nature of industrial energy demand patterns. Finally, energy saving measures implemented on the site are quantified using comparative verification methods. Overall fuel savings of 13% are indicated. Cumulative savings in heating fuel amount to 26% over four years, although the heated area increased by approximately 25%.
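The last sentence hinges on an area correction: heating fuel fell even though the heated estate grew. A minimal sketch of that kind of comparative verification, using invented figures and a hypothetical normalisation by heated floor area (the abstract does not state the exact method), might look like this:

```python
# Hypothetical illustration: comparative verification of heating fuel savings,
# normalising annual fuel use by heated floor area so that savings are not
# masked by an expansion of the heated estate. All figures are invented for
# the example, not taken from the study site.

baseline_fuel = 1000.0        # heating fuel in the baseline year (arbitrary units)
baseline_area = 10000.0       # heated floor area in the baseline year (m^2)

yearly = [
    # (year, heating fuel used, heated area in m^2) -- illustrative values only
    (1, 950.0, 10500.0),
    (2, 920.0, 11200.0),
    (3, 900.0, 12000.0),
    (4, 930.0, 12500.0),
]

baseline_intensity = baseline_fuel / baseline_area   # fuel per unit heated area

for year, fuel, area in yearly:
    expected = baseline_intensity * area             # fuel had intensity stayed at baseline
    saving = 100.0 * (expected - fuel) / expected    # percentage saving, area-corrected
    print(f"Year {year}: area-corrected heating fuel saving = {saving:.1f}%")
```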
Abstract:
This thesis analyses the impact of deregulation on the theory and practice of investment decision making in the electricity sector and appraises the likely effects on its long-term efficiency. Part I describes the market and its shortcomings in promoting an optimal generation margin and plant mix and in reducing prices through competition. A full-size operational model is developed to simulate hour-by-hour operation of the market and analyse its features. A relationship is established between the SMP and plant mix and between the LOLP and plant margin, and it is shown how a theoretical optimum can be derived when the combined LOLP payments and the capital costs of additional generation reach a minimum. A comparison of prices against an idealised bulk supply tariff is used to show how energy prices have risen some 12% in excess of what might have occurred under the CEGB regime. This part concludes with proposals to improve the market.
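The optimum referred to above is the point where falling LOLP payments and rising capital costs balance. A toy illustration of that trade-off, with an assumed LOLP curve and invented cost figures rather than anything from the thesis model, is:

```python
# Minimal sketch of the trade-off: as the plant margin grows, expected LOLP
# payments fall while the capital cost of the additional capacity rises, so
# the total cost has a minimum. The exponential LOLP curve and all cost
# figures below are assumptions made purely for illustration.

import math

peak_demand_mw = 50_000.0
voll_per_mwh = 2_500.0             # assumed value of lost load, GBP/MWh
capital_cost_per_mw_yr = 40_000.0  # assumed annualised cost of marginal plant
hours_per_year = 8760.0

def lolp(margin_fraction):
    """Assumed loss-of-load probability as a function of plant margin."""
    return 0.05 * math.exp(-30.0 * margin_fraction)

def total_cost(margin_fraction):
    expected_unserved_mwh = lolp(margin_fraction) * peak_demand_mw * hours_per_year
    lolp_payments = voll_per_mwh * expected_unserved_mwh
    capital = capital_cost_per_mw_yr * margin_fraction * peak_demand_mw
    return lolp_payments + capital

margins = [i / 100.0 for i in range(0, 41)]   # candidate margins, 0% to 40%
best = min(margins, key=total_cost)
print(f"Cost-minimising plant margin under these assumptions: {best:.0%}")
```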
Abstract:
A novel simple all-optical nonlinear pulse processing technique using loop mirror intensity filtering and nonlinear broadening in normal dispersion fiber is described. The pulse processor offers reamplification and cleaning up of the optical signals and phase margin improvement. The efficiency of the technique is demonstrated by application to 40-Gb/s return-to-zero optical data streams.
Abstract:
The changing business environment has sharpened the focus on the need for robust approaches to supply chain management (SCM) and the improvement of supply chain capability and performance. This is particularly the case in Ireland, which has the natural disadvantage of a location peripheral to significant markets and sources of raw materials which results in relatively high transport and distribution costs. Therefore, in order to gain insights into current levels of diffusion of SCM, a survey was conducted among 776 firms in the Republic of Ireland. The empirical results suggest that there is a need for more widespread adoption of SCM among Irish firms. This is particularly the case in relation to the four main elements of SCM excellence reported in this paper. The design of supply chain solutions is a highly skilled, knowledge-intensive and complex activity, reflected in a shift from 'box moving' to the design and implementation of customised supply chain solutions. Education and training needs to be addressed by stimulating the development of industry-relevant logistics and SCM resources and skills.
Abstract:
Ontology construction for any domain is a labour-intensive and complex process. Any methodology that can reduce the cost and increase efficiency has the potential to make a major impact in the life sciences. This paper describes an experiment in ontology construction from text for the animal behaviour domain. Our objective was to see how much could be done in a simple and relatively rapid manner using a corpus of journal papers. We used a sequence of pre-existing text processing steps, and here describe the different choices made to clean the input, to derive a set of terms and to structure those terms in a number of hierarchies. We describe some of the challenges, especially that of focusing the ontology appropriately given a starting point of a heterogeneous corpus. Results - Using mainly automated techniques, we were able to construct an 18,055-term ontology-like structure with 73% recall of animal behaviour terms, but a precision of only 26%. We were able to clean unwanted terms from the nascent ontology using lexico-syntactic patterns that tested the validity of term inclusion within the ontology. We used the same technique to test for subsumption relationships between the remaining terms to add structure to the initially broad and shallow structure we generated. All outputs are available at http://thirlmere.aston.ac.uk/~kiffer/animalbehaviour/. Conclusion - We present a systematic method for the initial steps of ontology or structured vocabulary construction for scientific domains that requires limited human effort and can make a contribution both to ontology learning and maintenance. The method is useful both for the exploration of a scientific domain and as a stepping stone towards formally rigorous ontologies. The filtering of recognised terms from a heterogeneous corpus to focus upon those that are the topic of the ontology is identified to be one of the main challenges for research in ontology learning.
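The lexico-syntactic filtering and subsumption testing mentioned above can be illustrated with a rough Hearst-pattern sketch; the pattern set and toy corpus below are assumptions for illustration, not the patterns actually used in the paper.

```python
# Rough sketch: a candidate term is linked to a parent term only if the corpus
# contains Hearst-style evidence such as "<parent> such as <term>".

import re

# Toy corpus sentence; the paper uses a full corpus of journal papers.
corpus = (
    "Agonistic behaviours such as biting and chasing were recorded, "
    "and displays including head bobbing were noted in males."
)

# Assumed lexico-syntactic patterns giving evidence that `term` is a kind of `parent`.
PATTERNS = [
    r"{parent}s?\s+such as\s+(?:[\w\s,]+?\s+and\s+)?{term}",
    r"{parent}s?\s+including\s+(?:[\w\s,]+?\s+and\s+)?{term}",
]

def has_subsumption_evidence(parent: str, term: str, text: str) -> bool:
    """Return True if any pattern suggests `term` is subsumed by `parent`."""
    for pattern in PATTERNS:
        regex = pattern.format(parent=re.escape(parent), term=re.escape(term))
        if re.search(regex, text, flags=re.IGNORECASE):
            return True
    return False

print(has_subsumption_evidence("agonistic behaviour", "biting", corpus))  # True
print(has_subsumption_evidence("display", "head bobbing", corpus))        # True
print(has_subsumption_evidence("display", "chasing", corpus))             # False
```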
Abstract:
Ontology construction for any domain is a labour intensive and complex process. Any methodology that can reduce the cost and increase efficiency has the potential to make a major impact in the life sciences. This paper describes an experiment in ontology construction from text for the Animal Behaviour domain. Our objective was to see how much could be done in a simple and rapid manner using a corpus of journal papers. We used a sequence of text processing steps, and describe the different choices made to clean the input, to derive a set of terms and to structure those terms in a hierarchy. We were able in a very short space of time to construct a 17000 term ontology with a high percentage of suitable terms. We describe some of the challenges, especially that of focusing the ontology appropriately given a starting point of a heterogeneous corpus.
Abstract:
This tenth edition of the established Textbook on Contract Law by Jill Poole provides a wide-ranging and straightforward exposition of contract law. The text opens with an overview of the main issues surrounding contract law, which places the subject in its wider context, then goes on to give a clear explanation of all the major areas of contract law encountered on undergraduate courses. Features of the book include chapter summaries to draw key themes and issues together; examples and questions to encourage a deeper understanding of the often complex points of law; and extensive further reading lists of both texts and articles to guide students towards the most relevant and up-to-date resources available. An accompanying Online Resource Centre offers lecturer resources (a test bank of multiple-choice questions) and student resources (guidance on answering problem-style questions in contract law, self-test questions and answers, student questions, updates, and an 'Ask the Author' section).
Abstract:
One of the most pressing demands on electrophysiology applied to the diagnosis of epilepsy is the non-invasive localization of the neuronal generators responsible for brain electrical and magnetic fields (the so-called inverse problem). These neuronal generators produce primary currents in the brain, which together with passive currents give rise to the EEG signal. Unfortunately, the signal we measure on the scalp surface doesn't directly indicate the location of the active neuronal assemblies. This is the expression of the ambiguity of the underlying static electromagnetic inverse problem, partly due to the relatively limited number of independent measures available. A given electric potential distribution recorded at the scalp can be explained by the activity of infinite different configurations of intracranial sources. In contrast, the forward problem, which consists of computing the potential field at the scalp from known source locations and strengths with known geometry and conductivity properties of the brain and its layers (CSF/meninges, skin and skull), i.e. the head model, has a unique solution. The head models vary from the computationally simpler spherical models (three or four concentric spheres) to the realistic models based on the segmentation of anatomical images obtained using magnetic resonance imaging (MRI). Realistic models – computationally intensive and difficult to implement – can separate different tissues of the head and account for the convoluted geometry of the brain and the significant inter-individual variability. In real-life applications, if the assumptions of the statistical, anatomical or functional properties of the signal and the volume in which it is generated are meaningful, a true three-dimensional tomographic representation of sources of brain electrical activity is possible in spite of the ‘ill-posed’ nature of the inverse problem (Michel et al., 2004). The techniques used to achieve this are now referred to as electrical source imaging (ESI) or magnetic source imaging (MSI). The first issue to influence reconstruction accuracy is spatial sampling, i.e. the number of EEG electrodes. It has been shown that this relationship is not linear, reaching a plateau at about 128 electrodes, provided spatial distribution is uniform. The second factor is related to the different properties of the source localization strategies used with respect to the hypothesized source configuration.
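As a concrete illustration of why the inverse problem is ill-posed and how a regularised estimator nonetheless yields a usable source distribution, here is a minimal numerical sketch; the surrogate lead-field matrix, dimensions, noise level and regularisation parameter are arbitrary assumptions and do not correspond to any real head model or ESI package.

```python
# Minimal sketch of a Tikhonov-regularised minimum-norm inverse solution:
# far more unknown sources than electrodes, so the measurement alone cannot
# pin down the generators, but a regularised estimate selects one solution.

import numpy as np

rng = np.random.default_rng(0)

n_electrodes, n_sources = 32, 500                      # many more unknowns than measurements
L = rng.standard_normal((n_electrodes, n_sources))     # surrogate lead-field matrix

true_sources = np.zeros(n_sources)
true_sources[[40, 270]] = [1.0, -0.5]                  # two active "dipoles"

scalp = L @ true_sources + 0.01 * rng.standard_normal(n_electrodes)

# Minimum-norm estimate: J = L^T (L L^T + lambda I)^-1 V
lam = 1e-2
J = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_electrodes), scalp)

print("strongest estimated sources:", np.argsort(np.abs(J))[-5:])
```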
Abstract:
Iridium nanoparticles deposited on a variety of surfaces exhibited thermal sintering characteristics that were very strongly correlated with the lability of lattice oxygen in the supporting oxide materials. Specifically, the higher the lability of oxygen ions in the support, the greater the resistance of the nanoparticles to sintering in an oxidative environment. Thus with γ-Al2O3 as the support, rapid and extensive sintering occurred. In striking contrast, when supported on gadolinia-ceria and alumina-ceria-zirconia composite, the Ir nanoparticles underwent negligible sintering. In keeping with this trend, the behavior found with yttria-stabilized zirconia was an intermediate between the two extremes. This resistance, or lack of resistance, to sintering is considered in terms of oxygen spillover from support to nanoparticles and discussed with respect to the alternative mechanisms of Ostwald ripening versus nanoparticle diffusion. Activity towards the decomposition of N2O, a reaction that displays pronounced sensitivity to catalyst particle size (large particles more active than small particles), was used to confirm that catalytic behavior was consistent with the independently measured sintering characteristics. It was found that the nanoparticle active phase was Ir oxide, which is metallic, possibly present as a capping layer. Moreover, observed turnover frequencies indicated that catalyst-support interactions were important in the cases of the sinter-resistant systems, an effect that may itself be linked to the phenomena that gave rise to materials with a strong resistance to nanoparticle sintering.
Abstract:
This paper reflects a research project on the influence of online news media (from print, radio, and televised outlets) on disaster response. Coverage of the October 2010 Indonesian tsunami and earthquake was gathered from 17 sources from October 26 through November 30. This data was analyzed quantitatively with respect to coverage intensity over time and among outlets. Qualitative analyses were also conducted using keywords and a value scale that assessed the degree of positivity or negativity associated with each keyword in the context of accountability. Results yielded insights into the influence of online media on actors' assumption of accountability and quality of response. The analysis also provided information as to the optimal time window in which advocates and disaster management specialists can best present recommendations to improve policy and raise awareness. Coverage of outlets was analyzed individually, in groups, and as a whole, in order to discern behavior patterns for a better understanding of media interdependency. This project produced analytical insights but is primarily intended as a prototype for more refined and extensive research.
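A toy sketch of the two quantitative steps described above (coverage intensity per day and outlet, plus a keyword value scale for accountability tone), using invented articles and weights rather than the project's actual data:

```python
# Illustrative only: count coverage intensity and score accountability language.
from collections import Counter

articles = [
    # (date, outlet, text) -- toy records standing in for the 17 monitored sources
    ("2010-10-27", "outlet_a", "Officials accept responsibility for slow aid delivery."),
    ("2010-10-27", "outlet_b", "Agencies deny blame as relief efforts stall."),
    ("2010-10-28", "outlet_a", "Government pledges transparent relief funding."),
]

# Assumed value scale: positive weights for language accepting accountability,
# negative weights for language deflecting it.
VALUE_SCALE = {"accept": 2, "responsibility": 1, "transparent": 2,
               "deny": -2, "blame": -1, "stall": -1}

intensity = Counter((date, outlet) for date, outlet, _ in articles)
print("coverage intensity:", dict(intensity))

for date, outlet, text in articles:
    score = sum(w for word, w in VALUE_SCALE.items() if word in text.lower())
    print(f"{date} {outlet}: accountability score = {score:+d}")
```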
Abstract:
Kernel-level malware is one of the most dangerous threats to the security of users on the Internet, so there is an urgent need for its detection. The most popular detection approach is misuse-based detection. However, it cannot catch up with today's advanced malware, which increasingly applies polymorphism and obfuscation. In this thesis, we present our integrity-based detection for kernel-level malware, which does not rely on the specific features of malware. We have developed an integrity analysis system that can derive and monitor integrity properties for commodity operating system kernels. In our system, we focus on two classes of integrity properties: data invariants and integrity of Kernel Queue (KQ) requests. We adopt static analysis for data invariant detection and overcome several technical challenges: field-sensitivity, array-sensitivity, and pointer analysis. We identify data invariants that are critical to system runtime integrity from Linux kernel 2.4.32 and the Windows Research Kernel (WRK) with very low false positive and false negative rates. We then develop an Invariant Monitor to guard these data invariants against real-world malware. In our experiment, we are able to use Invariant Monitor to detect ten real-world Linux rootkits, nine real-world Windows malware samples and one synthetic Windows malware sample. We leverage static and dynamic analysis of the kernel and device drivers to learn the legitimate KQ requests. Based on the learned KQ requests, we build KQguard to protect KQs. At runtime, KQguard rejects all unknown KQ requests that cannot be validated. We apply KQguard to the WRK and Linux kernel, and extensive experimental evaluation shows that KQguard is efficient (up to 5.6% overhead) and effective (capable of achieving zero false positives against representative benign workloads after appropriate training and very low false negatives against 125 real-world malware samples and nine synthetic attacks). In our system, Invariant Monitor and KQguard cooperate to protect data invariants and KQs in the target kernel. By monitoring these integrity properties, we can detect malware by its violation of these integrity properties during execution.
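The integrity-based idea can be sketched abstractly: each invariant is a predicate over a snapshot of kernel state, and any violation is reported. The snapshot layout and the two example invariants below are hypothetical simplifications for illustration, not the actual Invariant Monitor or KQguard implementation.

```python
# Abstract sketch of invariant monitoring over a snapshot of kernel state.

snapshot = {
    # Toy stand-in for kernel state captured by an introspection layer (hypothetical).
    "syscall_table": {"sys_read": 0xffff0100, "sys_write": 0xffff0200},
    "trusted_syscall_table": {"sys_read": 0xffff0100, "sys_write": 0xffff0200},
    "hidden_process_count": 0,
}

# Each invariant is a named predicate over the snapshot.
INVARIANTS = {
    "syscall table unmodified":
        lambda s: s["syscall_table"] == s["trusted_syscall_table"],
    "no hidden processes":
        lambda s: s["hidden_process_count"] == 0,
}

def check_invariants(state):
    """Return the names of all violated invariants for this snapshot."""
    return [name for name, pred in INVARIANTS.items() if not pred(state)]

violations = check_invariants(snapshot)
print("ALERT:", violations if violations else "all invariants hold")
```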
Abstract:
Hole 997A was drilled during Leg 164 of the Ocean Drilling Program at a depth of 2770 m on the topographic crest of the Blake Ridge in the western Atlantic Ocean. We report here an analysis of the faunal assemblages of planktonic foraminifers in a total of 91 samples (0.39-91.89 mbsf interval) spanning the last 2.15 m.y., latest Pliocene to Holocene. The abundant species, Globigerinoides ruber, Globigerinoides sacculifer, Neogloboquadrina dutertrei, Globorotalia inflata, and Globigerinita glutinata, together exceed ~70% of the total fauna. Each species exhibits fluctuations with amplitudes of 10%-20% or more. Despite their generally low abundance, the distinct presence/absence behavior of the Globorotalia menardii group is almost synchronous with glacial-interglacial climate cycles during the upper part of the Brunhes Chron. The quantitative study and factor analysis of planktonic foraminiferal assemblages show that the planktonic foraminiferal fauna in Hole 997A consists of four groups: warm water, subtropical gyre (mixed-layer species), gyre margin (thermocline/upwelling species), and subpolar assemblages. The subtropical gyre assemblage dominates throughout the studied section, whereas the abundance of gyre margin taxa strongly controls the overall variability in faunal abundance at Site 997. In sediments older than the Olduvai Subchron, the planktonic foraminiferal faunas are characterized by fluctuations in both the subtropical gyre and gyre margin assemblages, similar to those in the Brunhes Chron. The upwelling/gyre margin fauna increased in abundance just before the Jaramillo Subchron and was dominant between 0.7 and 1.07 Ma. The transition from this gyre margin-dominated assemblage to an increase in abundance of the subtropical gyre and gyre margin species occurred around 0.7 Ma, near the Brunhes/Matuyama boundary. The presence of low-oxygen-tolerant benthic foraminifers, pyrite tubes, and abundant diatoms below the Brunhes/Matuyama boundary suggests decreased oxygenation of intermediate waters and more upwelling over the Blake-Bahama Outer Ridge, perhaps because of weaker Upper North Atlantic Deep Water ventilation. The changes in the relative composition of foraminifer assemblages took place at least twice, around 700 and 1000 ka, close to the ~930-ka switch from obliquity-forced climate variation to the 100-k.y. eccentricity cycle. The climate shift at 700 ka suggests a transition from relatively warmer conditions in the early Pleistocene to warm-cool oscillations in the Brunhes Chron.
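As a rough illustration of how an assemblage table is reduced to a few dominant groups, the sketch below applies a principal-component decomposition to a tiny invented relative-abundance matrix; it is a stand-in for, not a reproduction of, the factor analysis used in the study.

```python
# Toy decomposition of a relative-abundance table (rows = samples,
# columns = species percentages). Values are invented for the example.

import numpy as np

species = ["G. ruber", "G. sacculifer", "N. dutertrei", "G. inflata", "G. glutinata"]
abundance = np.array([
    [35.0, 25.0, 15.0, 10.0, 15.0],
    [30.0, 28.0, 12.0, 14.0, 16.0],
    [10.0,  8.0, 35.0, 30.0, 17.0],
    [12.0, 10.0, 33.0, 28.0, 17.0],
])

centred = abundance - abundance.mean(axis=0)
_, singular_values, components = np.linalg.svd(centred, full_matrices=False)

explained = singular_values**2 / np.sum(singular_values**2)
print("variance explained by factor 1:", round(float(explained[0]), 2))
print("loadings of factor 1:", dict(zip(species, np.round(components[0], 2))))
```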
Abstract:
The Late Weichselian-Early Holocene variability of the North Atlantic Current has been studied with focus on the zonal component of this meridional transport during the transition from glacial to interglacial conditions. The investigated sediment core is from 409 m water depth in the SW Barents Sea. Eight Accelerator mass spectrometry (AMS) 14C dates show that the core covers the last 20,000 cal yr B.P. with a centennial scale resolution during the Late Weichselian-Early Holocene. Planktic foraminiferal assemblages were analyzed using the >100 µm size fraction and foraminiferal planktic and benthic δ13C and δ18O isotopes were measured. Furthermore, a range of physical and chemical analyses has been carried out on the bulk sediment samples. Four time periods have been identified which represent the varying oceanographic conditions in Ingøydjupet, a glacial trough located off the north coast of Norway in the SW Barents Sea. 1) The late glacial (before ca 15,000 cal yr B.P.), influenced by the nearby ice sheets with high amounts of sea ice- or iceberg-transported detritus. 2) The late Oldest Dryas stadial and the Bølling-Allerød interstadial (ca 15,000-12,700 cal yr B.P.), with cold surface water conditions influenced by the collapse of the nearby ice sheets, high amounts of sea ice- or iceberg-transported detritus and melt water, and weak subsurface inflow of Atlantic Water. 3) The Younger Dryas cold stadial (12,700-11,650 cal yr B.P.), with low primary productivity and extensive sea ice cover. 4) The Preboreal and Early Holocene (11,650-6800 cal yr B.P.), with strong influx of Atlantic Water into the area, near absence of ice-rafted debris and generally ameliorated conditions in both surface and bottom water masses, as seen from a high flux of foraminifera and increased marine primary production.
Abstract:
The substantial increase in the number of applications offered through computer networks, as well as in the volume of traffic forwarded through the network, has made it harder to assure an adequate service level to users. The provision of Quality of Service (QoS), honoring parameters specified in Service Level Agreements (SLAs) established between service providers and their clients, is a traditional and extensive research area in computer networks. Several schemes for the provision of QoS have been proposed over the last three decades, but their scope has always been limited by several factors, including the limited development of network hardware and software, generally belonging to a single manufacturer. The advent of Software Defined Networking (SDN), along with the maturation of its main materialization, the OpenFlow protocol, allowed the decoupling of network hardware and software through an architecture that separates a control plane and a data plane. This simplifies the networking scenario, allowing new abstractions to be applied to the hardware composing the data plane through the development of new software components executed in the control plane. This dissertation investigates the provision of QoS through the use and extension of the SDN architecture. Based on two new modules, SDNMon, which performs data plane monitoring, and MP-ROUTING, which determines the use of multiple paths when forwarding the data belonging to a flow, we demonstrate that some QoS metrics specified in SLAs, such as bandwidth, can be honored. Both modules were implemented and evaluated in a prototype. Evaluation results covering several aspects of both proposed modules are presented in this dissertation, showing the accuracy obtained by the SDNMon monitoring module and the QoS gains from the multiple paths defined by MP-ROUTING when forwarding data flows through the SDN.
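As a conceptual sketch only (not the SDNMon or MP-ROUTING code), the snippet below shows the two ideas the modules rely on: deriving per-flow throughput from successive byte counters, as an OpenFlow controller might poll them, and splitting a flow's demand over several paths in proportion to their spare capacity. All names and numbers are illustrative assumptions.

```python
# Illustrative throughput measurement and multipath split for a bandwidth SLA.

def throughput_mbps(bytes_t0, bytes_t1, interval_s):
    """Average throughput between two flow byte-counter readings, in Mbit/s."""
    return (bytes_t1 - bytes_t0) * 8 / interval_s / 1e6

# Two polls of a flow's byte counter, 10 s apart (invented values).
rate = throughput_mbps(1_250_000_000, 2_500_000_000, 10.0)
sla_mbps = 800.0
print(f"flow rate = {rate:.0f} Mbit/s, SLA {'met' if rate >= sla_mbps else 'violated'}")

# Splitting the flow's demand over multiple paths in proportion to spare capacity.
paths = {"path_a": 600.0, "path_b": 400.0, "path_c": 200.0}   # spare Mbit/s per path
total_spare = sum(paths.values())
shares = {p: rate * spare / total_spare for p, spare in paths.items()}
print("per-path allocation (Mbit/s):", {p: round(v) for p, v in shares.items()})
```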