855 results for GIVE


Relevance:

10.00%

Publisher:

Abstract:

The phenylperoxyl radical has long been accepted as a critical intermediate in the oxidation of benzene and an archetype for arylperoxyl radicals in combustion and atmospheric chemistry. Despite being central to many contemporary mechanisms underpinning these chemistries, reports of the direct detection or isolation of phenylperoxyl radicals are rare and there is little experimental evidence connecting this intermediate with expected product channels. We have prepared and isolated two charge-tagged phenyl radical models in the gas phase [i.e., 4-(N,N,N-trimethylammonium) phenyl radical cation and 4-carboxylatophenyl radical anion] and observed their reactions with dioxygen by ion-trap mass spectrometry. Measured reaction rates show good agreement with prior reports for the neutral system (k₂[(Me₃N⁺)C₆H₄• + O₂] = 2.8 × 10⁻¹¹ cm³ molecule⁻¹ s⁻¹, Φ = 4.9%; k₂[(⁻O₂C)C₆H₄• + O₂] = 5.4 × 10⁻¹¹ cm³ molecule⁻¹ s⁻¹, Φ = 9.2%) and the resulting mass spectra provide unequivocal evidence for the formation of phenylperoxyl radicals. Collisional activation of isolated phenylperoxyl radicals reveals unimolecular decomposition by three pathways: (i) loss of dioxygen to reform the initial phenyl radical; (ii) loss of atomic oxygen yielding a phenoxyl radical; and (iii) ejection of the formyl radical to give cyclopentadienone. Stable isotope labeling confirms these assignments. Quantum chemical calculations for both charge-tagged and neutral phenylperoxyl radicals confirm that loss of formyl radical is accessible both thermodynamically and entropically and competitive with direct loss of both hydrogen atom and carbon dioxide.

Relevance:

10.00%

Publisher:

Abstract:

The paper discusses the view of Franklin Miller and Robert Truog that withdrawing life-sustaining treatment causes death and so is a form of killing. I reject that view. I argue that even if we think there is no morally relevant difference between allowing a patient to die and killing her (itself a controversial view), it does not follow that allowing to die is a form of killing. I then argue that withdrawing life-sustaining treatment is properly classified as allowing the patient to die rather than as killing her. Once this is accepted, the law cannot be criticised for inconsistency by holding, as it does, that it is lawful to withdraw life-sustaining treatment but unlawful to give patients a lethal injection.

Relevance:

10.00%

Publisher:

Abstract:

In this chapter we continue the exposition of crypto topics that was begun in the previous chapter. This chapter covers secret sharing, threshold cryptography, signature schemes, and finally quantum key distribution and quantum cryptography. As in the previous chapter, we have focused only on the essentials of each topic. We have selected in the bibliography a list of representative items, which can be consulted for further details. First we give a synopsis of the topics that are discussed in this chapter. Secret sharing is concerned with the problem of how to distribute a secret among a group of participating individuals, or entities, so that only predesignated collections of individuals are able to recreate the secret by collectively combining the parts of the secret that were allocated to them. There are numerous applications of secret-sharing schemes in practice. One example of secret sharing occurs in banking. For instance, the combination to a vault may be distributed in such a way that only specified collections of employees can open the vault by pooling their portions of the combination. In this way the authority to initiate an action, e.g., the opening of a bank vault, is divided for the purposes of providing security and for added functionality, such as auditing, if required. Threshold cryptography is a relatively recently studied area of cryptography. It deals with situations where the authority to initiate or perform cryptographic operations is distributed among a group of individuals. Many of the standard operations of single-user cryptography have counterparts in threshold cryptography. Signature schemes deal with the problem of generating and verifying (electronic) signatures for documents. A subclass of signature schemes is concerned with the shared generation and the shared verification of signatures, where a collaborating group of individuals are required to perform these actions. A new paradigm of security has recently been introduced into cryptography with the emergence of the ideas of quantum key distribution and quantum cryptography. While classical cryptography employs various mathematical techniques to restrict eavesdroppers from learning the contents of encrypted messages, in quantum cryptography the information is protected by the laws of physics.
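
As a concrete illustration of the threshold idea behind secret sharing, here is a minimal sketch of Shamir's (k, n) scheme over a prime field; the scheme itself is standard, but the prime, parameters and Python rendering are illustrative choices rather than anything prescribed in the chapter.

```python
# Minimal sketch of Shamir's (k, n) threshold secret sharing over a prime field.
# The prime, secret and (k, n) values below are illustrative, not from the chapter.
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for small integer secrets

def make_shares(secret, k, n, prime=PRIME):
    """Split `secret` into n shares; any k of them suffice to reconstruct it."""
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(prime) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime
    return [(x, f(x)) for x in range(1, n + 1)]

def recover_secret(shares, prime=PRIME):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % prime
                den = (den * (xi - xj)) % prime
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret

shares = make_shares(secret=123456789, k=3, n=5)
print(recover_secret(shares[:3]))  # any 3 of the 5 shares reconstruct 123456789
```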

Relevance:

10.00%

Publisher:

Abstract:

Implementation of an electronic tendering (e-tendering) system requires careful attention to the needs of the system and its various participants. Fairness in e-tendering is of utmost importance. Current proposals and implementations do not provide fairness and are thus vulnerable to collusion and favouritism. Dishonest participants, whether the principal or a tenderer, may collude to alter or view competing tenders, which would give the favoured tenderer a greater chance of winning the contract. This paper proposes an e-tendering system that is secure and fair to all participants. We employ an anonymous token system together with a signed-commitment approach to achieve a publicly verifiable, fair e-tendering protocol. We also provide an analysis of the protocol that confirms the security of our proposal against the security goals for an e-tendering system.
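
The paper's anonymous-token and signed-commitment construction is not reproduced in this abstract, but the fairness idea rests on tenderers being bound to their bids before the deadline. The sketch below is only a generic hash commit-then-reveal building block of the kind such protocols use; the function names and the choice of SHA-256 are illustrative assumptions.

```python
# Generic commit-then-reveal sketch for sealed tenders (illustrative only).
import hashlib, os

def commit(tender_bytes: bytes):
    """Return (commitment, nonce). The commitment can be published before the deadline
    without revealing the tender."""
    nonce = os.urandom(32)
    digest = hashlib.sha256(nonce + tender_bytes).hexdigest()
    return digest, nonce

def verify(commitment: str, tender_bytes: bytes, nonce: bytes) -> bool:
    """After the deadline, anyone can check that the revealed tender matches the commitment."""
    return hashlib.sha256(nonce + tender_bytes).hexdigest() == commitment

commitment, nonce = commit(b"bid: 1,250,000 AUD")
assert verify(commitment, b"bid: 1,250,000 AUD", nonce)   # opens correctly
assert not verify(commitment, b"bid: 1,100,000 AUD", nonce)  # altered bid is detected
```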

Relevance:

10.00%

Publisher:

Abstract:

Emission spectroscopy was used to investigate ignition and combustion characteristics of supersonic combustion ramjet engines. Two-dimensional scramjet models with inlet injection, fuelled with hydrogen gas, were used in the study. The scramjet engines were configured to operate in radical farming mode, where combustion radicals are formed behind shock waves reflected at the walls. The chemiluminescence emission signals were recorded in a two-dimensional, time-integrated fashion to give information on the location and distribution of the radical farms in the combustors. High signal levels were detected in localised regions immediately downstream of shock reflections, an indication of localised hydroxyl formation supporting the concept of radical farming. Results are presented for a symmetric as well as an asymmetric scramjet geometry. These data represent the first successful visualisation of radical farms in the hot pockets of a supersonic combustor. Spectrally resolved measurements have been obtained in the ultraviolet wavelength range between 300 and 400 nm. These data show that the OH* chemiluminescence signal around 306 nm is not the dominant source of radiation observed in the radical farms.

Relevance:

10.00%

Publisher:

Abstract:

The present study focused on simulating a trajectory point towards the end of the first experimental heatshield of the FIRE II vehicle, at a total flight time of 1639.53 s. Scale replicas were sized according to binary scaling and instrumented with thermocouples for testing in the X1 expansion tube, located at The University of Queensland. Correlation of flight to experimental data was achieved through the separation and independent treatment of the heat transfer modes. Preliminary investigation indicates that the absolute value of radiant surface flux is conserved between two binary-scaled models, whereas convective heat transfer increases with the length scale. This difference in the scaling techniques results in the overall contribution of radiative heat transfer diminishing to less than 1% in expansion tubes, from a flight value of approximately 9-17%. From empirical correlations it has been shown that the St√Re number decreases, under special circumstances, in expansion tubes by the percentage of radiation present on the flight vehicle. Results obtained in this study give a strong indication that the relative radiative heat transfer contribution in the expansion tube tests is less than that in flight, supporting the analysis that the absolute value remains constant with binary scaling.

Relevance:

10.00%

Publisher:

Abstract:

What industrial organisations think people do and what they actually do are often two very different things. But exactly this tension can be a source of innovation: how can we give form to insights about what people do, to deliberately challenge industries' conceptions, and inspire new product and service development?

Relevance:

10.00%

Publisher:

Abstract:

Unsaturated water flow in soil is commonly modelled using Richards’ equation, which requires the hydraulic properties of the soil (e.g., porosity, hydraulic conductivity, etc.) to be characterised. Naturally occurring soils, however, are heterogeneous in nature, that is, they are composed of a number of interwoven homogeneous soils, each with its own set of hydraulic properties. When the length scale of these soil heterogeneities is small, numerical solution of Richards’ equation is computationally impractical due to the immense effort and refinement required to mesh the actual heterogeneous geometry. A classic way forward is to use a macroscopic model, where the heterogeneous medium is replaced with a fictitious homogeneous medium that attempts to give the average flow behaviour at the macroscopic scale (i.e., at a scale much larger than the scale of the heterogeneities). Using homogenisation theory, a macroscopic equation can be derived that takes the form of Richards’ equation with effective parameters. A disadvantage of the macroscopic approach, however, is that it fails in cases when the assumption of local equilibrium does not hold. This limitation has seen the introduction of two-scale models that include, at each point in the macroscopic domain, an additional flow equation at the scale of the heterogeneities (the microscopic scale). This report outlines a well-known two-scale model and contributes to the literature a number of important advances in its numerical implementation. These include the use of an unstructured control volume finite element method and image-based meshing techniques, which allow irregular micro-scale geometries to be treated, and the use of an exponential time integration scheme that permits both scales to be resolved simultaneously in a completely coupled manner. Numerical comparisons against a classical macroscopic model confirm that only the two-scale model correctly captures the important features of the flow for a range of parameter values.
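
For reference, the form of Richards’ equation usually intended here (written in standard notation rather than taken verbatim from the report) is, in mixed pressure-head form,

\[ \frac{\partial \theta(\psi)}{\partial t} = \nabla \cdot \big[ K(\psi)\, \nabla (\psi + z) \big], \]

where θ(ψ) is the volumetric water content, ψ the pressure head, K(ψ) the unsaturated hydraulic conductivity and z the vertical coordinate. In the heterogeneous problem θ and K also vary in space at the scale of the heterogeneities, which is precisely what the two-scale model resolves and the homogenised macroscopic model replaces with effective parameters.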

Relevance:

10.00%

Publisher:

Abstract:

We have applied X-ray and neutron small-angle scattering techniques (SAXS, SANS, and USANS) to study the interaction between fluids and porous media in the particular case of subcritical CO2 sorption in coal. These techniques are demonstrated to give unique, pore-size-specific insights into the kinetics of CO2 sorption in a wide range of coal pores (nano to meso) and to provide data that may be used to determine the density of the sorbed CO2. We observed densification of the adsorbed CO2 by a factor of up to five compared to the free fluid at the same (p, T) conditions. Our results indicate that details of CO2 sorption into coal pores differ greatly between different coals and depend on the amount of mineral matter dispersed in the coal matrix: a purely organic matrix absorbs more CO2 per unit volume than one containing mineral matter, but mineral matter markedly accelerates the sorption kinetics. Small pores are filled preferentially by the invading CO2 fluid, and the apparent diffusion coefficients have been estimated to vary from 5 × 10⁻⁷ cm²/min to more than 10⁻⁴ cm²/min, depending on the CO2 pressure and location on the sample.

Relevance:

10.00%

Publisher:

Abstract:

The incidence of major storm surges in the last decade has dramatically emphasized the immense destructive capabilities of extreme water level events, particularly when driven by severe tropical cyclones. Given this risk, it is vitally important that the exceedance probabilities of extreme water levels are accurately evaluated to inform risk-based flood and erosion management, engineering design and future land-use planning, and to ensure that the risks of catastrophic structural failure due to under-design, and of expensive waste due to over-design, are minimised. Australia has a long history of coastal flooding from tropical cyclones. Using a novel integration of two modeling techniques, this paper provides the first estimates of present-day extreme water level exceedance probabilities around the whole coastline of Australia, and the first estimates that combine the influence of astronomical tides, storm surges generated by both extra-tropical and tropical cyclones, and seasonal and inter-annual variations in mean sea level. Initially, an analysis of tide gauge records has been used to assess the characteristics of tropical cyclone-induced surges around Australia. However, given the dearth (temporal and spatial) of information around much of the coastline, and therefore the inability of these gauge records to adequately describe the regional climatology, an observationally based stochastic tropical cyclone model has been developed to synthetically extend the tropical cyclone record to 10,000 years. Wind and pressure fields derived for these synthetically generated events have then been used to drive a hydrodynamic model of the Australian continental shelf region, with annual maximum water levels extracted to estimate exceedance probabilities around the coastline. To validate this methodology, selected historic storm surge events have been simulated and the resultant storm surges compared with gauge records. Tropical cyclone-induced exceedance probabilities have been combined with estimates derived from a 61-year water level hindcast, described in a companion paper, to give a single estimate of present-day extreme water level probabilities around the whole coastline of Australia. Results of this work are freely available to coastal engineers, managers and researchers via a web-based tool (www.sealevelrise.info). The described methodology could be applied to other regions of the world, like the US east coast, that are subject to both extra-tropical and tropical cyclones.
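
As a rough illustration of the final step described above, converting a long synthetic record of annual maximum water levels into exceedance probabilities, the sketch below uses an empirical plotting position. The function name, the Weibull plotting position and the placeholder Gumbel-distributed data are illustrative assumptions; the paper's stochastic cyclone and hydrodynamic modelling is not reproduced here.

```python
# Empirical annual exceedance probabilities (AEP) from synthetic annual maxima (sketch).
import numpy as np

def exceedance_probabilities(annual_maxima):
    """Return water levels (largest first) with their empirical AEP."""
    levels = np.sort(np.asarray(annual_maxima))[::-1]   # rank annual maxima, largest first
    ranks = np.arange(1, len(levels) + 1)
    aep = ranks / (len(levels) + 1)                     # Weibull plotting position
    return levels, aep

# e.g. 10,000 years of synthetic annual maxima (placeholder random data, metres)
rng = np.random.default_rng(0)
maxima = rng.gumbel(loc=1.0, scale=0.3, size=10_000)
levels, aep = exceedance_probabilities(maxima)
print(levels[aep <= 0.01][-1])   # approximate level with 1% AEP (100-year return level)
```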

Relevance:

10.00%

Publisher:

Abstract:

In Chapters 1 through 9 of the book (with the exception of a brief discussion on observers and integral action in Section 5.5 of Chapter 5) we considered constrained optimal control problems for systems without uncertainty, that is, with no unmodelled dynamics or disturbances, and where the full state was available for measurement. More realistically, however, it is necessary to consider control problems for systems with uncertainty. This chapter addresses some of the issues that arise in this situation. As in Chapter 9, we adopt a stochastic description of uncertainty, which associates probability distributions to the uncertain elements, that is, disturbances and initial conditions. (See Section 12.6 for references to alternative approaches to model uncertainty.) When incomplete state information exists, a popular observer-based control strategy in the presence of stochastic disturbances is to use the certainty equivalence (CE) principle, introduced in Section 5.5 of Chapter 5 for deterministic systems. In the stochastic framework, CE consists of estimating the state and then using these estimates as if they were the true state in the control law that results if the problem were formulated as a deterministic problem (that is, without uncertainty). This strategy is motivated by the unconstrained problem with a quadratic objective function, for which CE is indeed the optimal solution (Åström 1970, Bertsekas 1976). One of the aims of this chapter is to explore the issues that arise from the use of CE in RHC in the presence of constraints. We then turn to the obvious question about the optimality of the CE principle. We show that CE is, indeed, not optimal in general. We also analyse the possibility of obtaining truly optimal solutions for single input linear systems with input constraints and uncertainty related to output feedback and stochastic disturbances. We first find the optimal solution for the case of horizon N = 1, and then we indicate the complications that arise in the case of horizon N = 2. Our conclusion is that, for the case of linear constrained systems, the extra effort involved in the optimal feedback policy is probably not justified in practice. Indeed, we show by example that CE can give near optimal performance. We thus advocate this approach in real applications.
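
As a minimal illustration of the certainty equivalence idea discussed above, the sketch below estimates the state of a scalar linear system with a fixed-gain observer and then applies a saturated linear control law to the estimate as if it were the true state. The model, gains and noise levels are illustrative assumptions, not values from the chapter.

```python
# Certainty-equivalence control of a scalar constrained system (illustrative sketch).
import random

a, b, c = 0.9, 1.0, 1.0        # plant: x+ = a*x + b*u + w, measurement: y = c*x + v
L, K = 0.5, 0.6                # observer gain and state-feedback gain (assumed stabilising)
u_max = 1.0                    # input constraint |u| <= u_max

x, x_hat = 2.0, 0.0            # true state and its estimate
for t in range(20):
    y = c * x + random.gauss(0, 0.05)                # noisy measurement
    u = max(-u_max, min(u_max, -K * x_hat))          # CE: use the estimate as the true state
    x = a * x + b * u + random.gauss(0, 0.05)        # plant update with process disturbance
    x_hat = a * x_hat + b * u + L * (y - c * x_hat)  # observer update
    print(f"t={t:2d}  x={x:+.3f}  x_hat={x_hat:+.3f}  u={u:+.3f}")
```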

Relevance:

10.00%

Publisher:

Abstract:

A ubiquitous problem in control system design is that the system must operate subject to various constraints. Although the topic of constrained control has a long history in practice, there have been recent significant advances in the supporting theory. In this chapter, we give an introduction to constrained control. In particular, we describe contemporary work which shows that the constrained optimal control problem for discrete-time systems has an interesting geometric structure and a simple local solution. We also discuss issues associated with the output feedback solution to this class of problems, and the implication of these results in the closely related problem of anti-windup. As an application, we address the problem of rudder roll stabilization for ships.
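
As a small illustration of the anti-windup connection mentioned above, here is a sketch of a PI update with back-calculation anti-windup; the gains, limits and time step are illustrative assumptions rather than values from the chapter.

```python
# PI control step with back-calculation anti-windup (illustrative sketch).
def pi_antiwindup_step(error, integral, kp=1.0, ki=0.5, kaw=0.5,
                       u_min=-1.0, u_max=1.0, dt=0.1):
    """One PI update: saturate the output and bleed the integrator in
    proportion to the amount of saturation so it does not wind up."""
    u_unsat = kp * error + ki * integral
    u = max(u_min, min(u_max, u_unsat))              # enforce the actuator constraint
    integral += dt * (error + kaw * (u - u_unsat))   # back-calculation term
    return u, integral

u, integ = 0.0, 0.0
for e in [2.0, 1.5, 1.0, 0.5, 0.0]:                  # a few sample tracking errors
    u, integ = pi_antiwindup_step(e, integ)
    print(f"u={u:+.2f}  integral={integ:+.3f}")
```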

Relevance:

10.00%

Publisher:

Abstract:

Today the future is travelling rapidly towards us, shaped by all that which we have historically thrown into it. Much of what we have designed for our world over the ages, and much of what we continue to embrace in the pursuit of mainstream economic, cultural and social imperatives, embodies unacknowledged ‘time debts’. Every decision we make today has the potential to ‘give time to’, or take ‘time away’ from that future. This idea that ‘everything’ inherently embodies ‘future time left’ is underlined by design futurist Tony Fry when he describes how we so often ‘waste’ or ‘take away’ ‘future time’. “In our endeavours to sustain ourselves in the short term we collectively act in destructive ways towards the very things we and all other beings fundamentally depend upon.”

Relevance:

10.00%

Publisher:

Abstract:

This paper addresses the problem of joint identification of infinite-frequency added mass and fluid-memory models of marine structures from finite-frequency data. This problem is relevant for cases where the code used to compute the hydrodynamic coefficients of the marine structure does not give the infinite-frequency added mass. This case is typical of codes based on 2D-potential theory, since most 3D-potential-theory codes solve the boundary-value problem associated with the infinite frequency. The method proposed in this paper presents a simpler alternative to other methods previously presented in the literature. The advantage of the proposed method is that the same identification procedure can be used to identify the fluid-memory models with or without access to the infinite-frequency added mass coefficient. Therefore, it provides an extension that puts the two identification problems into the same framework. The method also exploits constraints related to the relative degree and the low-frequency asymptotic values of the hydrodynamic coefficients, derived from the physics of the problem, which are used as prior information to refine the obtained models.
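
As context for the identification problem described above, the sketch below forms the fluid-memory (retardation) frequency response from tabulated added mass A(ω) and damping B(ω) via the standard relation K(jω) = B(ω) + jω[A(ω) − A∞]. When A∞ is not provided by the hydrodynamic code, it becomes an extra parameter to estimate jointly with a rational fluid-memory model. The curves below are placeholders, and the paper's actual identification procedure is not reproduced.

```python
# Fluid-memory frequency response from added mass and damping data (illustrative sketch).
import numpy as np

w = np.linspace(0.1, 3.0, 30)          # frequency grid [rad/s]
A = 1.0 + 0.5 / (1.0 + w**2)           # placeholder added-mass curve A(w)
B = 0.8 * w**2 / (1.0 + w**4)          # placeholder damping curve B(w)
A_inf = 1.0                            # infinite-frequency added mass (assumed known here)

K = B + 1j * w * (A - A_inf)           # fluid-memory frequency response K(jw)
print(np.abs(K[:5]))                   # the data one would fit a rational model to
```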

Relevance:

10.00%

Publisher:

Abstract:

Analogy plays a central role in legal reasoning, yet how to analogize is poorly taught and poorly practiced. We all recognize when legal analogies are being made: when a law professor suggests a difficult hypothetical in class and a student tentatively guesses at the answer based on the cases she read the night before, when an attorney advises a client to settle because a previous case goes against him, or when a judge adopts one precedent over another on the basis that it better fits the present case. However, when it comes to explaining why certain analogies are compelling, persuasive, or better than the alternative, lawyers usually draw a blank. The purpose of this article is to provide a simple model that can be used to teach and to learn how analogy actually works, and what makes one analogy superior to a competing analogy. The model is drawn from a number of theories of analogy-making in cognitive science. Cognitive science is the “long-term enterprise to understand the mind scientifically.” The field studies the mechanisms that are involved in cognitive processes like thinking, memory, learning, and recall; one of its main foci has been how people construct analogies. The lessons from cognitive science theories of analogy can be applied to legal analogies to give students and lawyers a better understanding of this fundamental process in legal reasoning.