Abstract:
In order to fabricate a biomimetic skin for an octopus-inspired robot, a new process was developed based on mechanical properties measured from real octopus skin. Various knitted nylon textiles were tested, and a 10-denier nylon was chosen as reinforcement. A combination of Ecoflex 0030 and 0010 silicone rubbers was used as the matrix of the composite to obtain the right stiffness for the skin-analogue system. The open-mould fabrication process developed allows air bubbles to escape easily, and the artificial skin produced was thin and waterproof. Material properties of the biomimetic skin were characterised using static tensile and instrumented scissors cutting tests. The Young's moduli of the artificial skin are 0.08 MPa and 0.13 MPa in the longitudinal and transverse directions, respectively, which are much lower than those of the octopus skin. The strength and fracture toughness of the artificial skin, on the other hand, are higher than those of real octopus skins. Conically shaped skin prototypes to be used to cover the robotic arm unit were manufactured and tested. The biomimetic skin prototype was stiff enough to maintain its conical shape when filled with water. The driving force for elongation was reduced significantly compared with previous prototypes.
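As a rough illustration of what the reported moduli imply, a Hookean (linear-elastic) estimate of the tensile stress needed for a given stretch can be sketched as below; the linearity assumption and the 50% strain value are illustrative simplifications, not from the study (elastomer-textile composites are generally nonlinear):

```python
def tensile_stress(youngs_modulus_mpa: float, strain: float) -> float:
    """Hookean estimate: stress (MPa) = E * strain."""
    return youngs_modulus_mpa * strain

# Young's moduli reported for the artificial skin (MPa)
E_LONGITUDINAL = 0.08
E_TRANSVERSE = 0.13

# Stress needed to stretch the skin by 50% in each direction,
# assuming linear elasticity over this range (a simplification).
stress_long = tensile_stress(E_LONGITUDINAL, 0.5)   # 0.04 MPa
stress_trans = tensile_stress(E_TRANSVERSE, 0.5)    # 0.065 MPa
```

The lower longitudinal modulus means roughly 40% less stress is needed to elongate the skin along the arm than across it, consistent with the reduced driving force for elongation noted above.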
Abstract:
The aim of this study was to investigate the effect of atmospheric frying followed by drainage under vacuum on the stability of oil, compared to similar frying with drainage at atmospheric pressure. Changes in the oil were assessed by the free fatty acid (FFA) content, p-anisidine value (AnV), colour, viscosity, fatty acid profile and concentration of tocols. The rate of FFA formation in the case of vacuum drainage was found to be about half that of atmospheric drainage. Oil deterioration by oxidation and polymerisation was also reduced by the use of vacuum drainage. The AnV of the oil after vacuum drainage was lower by about 12%, the total colour difference was improved by 14% and viscosity was slightly reduced after 5 days of frying, compared to the values for oil that had been drained at atmospheric pressure. There was a reduction in the loss of polyunsaturated fatty acids in the case of vacuum drainage after 5 days of frying but differences in retention of tocols were only evident in the first two days of frying.
Abstract:
Consent's capacity to legitimise actions and claims is limited by conditions such as coercion, which render consent ineffective. A better understanding of the limits to consent's capacity to legitimise can shed light on a variety of applied debates, in political philosophy, bioethics, economics and law. I show that traditional paternalist explanations for limits to consent's capacity to legitimise cannot explain the central intuition that consent is often rendered ineffective when brought about by a rights violation or threatened rights violation. I argue that this intuition is an expression of the same principles of corrective justice that underlie norms of compensation and rectification. I show how these principles can explain and clarify core intuitions about conditions which render consent ineffective, including those concerned with the consenting agent's option set, his mental competence, and available information.
Abstract:
Organizations require effective service management in order to meet business service levels and reduce costs in the operation of information systems. There is a growing body of knowledge that describes the rationale and the outcomes of such initiatives. These cases indicate that the capabilities and processes of the organization are important factors in achieving success. Our review of the literature considers both hard and soft factors, such as service processes and trust in service partners. These factors are explored through a longitudinal case study designed to provide insights into how the environment sets the parameters for service management. The selected case analyses the changes an organization made to its service management approach over a period of several years. Results are discussed from both practitioner and theoretical viewpoints, with proposals for further research.
Abstract:
If an export subsidy is efficient, that is, has a surplus-transfer role, then there exists an implicit function relating the optimal level of the subsidy to the income target in the agricultural sector. If an export subsidy is inefficient, no such function exists. We show that this dependence exists in large-export equilibrium but not in small-export equilibrium, and that these results remain robust to concerns about domestic tax distortions. The failure of previous work to produce this result stems from its neglect of the income constraint on producer surplus in the programming problem transferring surplus from consumers and taxpayers to farmers.
Abstract:
Our differences are three. The first arises from the belief that "... a nonzero value for the optimally chosen policy instrument implies that the instrument is efficient for redistribution" (Alston, Smith, and Vercammen, p. 543, paragraph 3). Consider the two equations: (1) τ* = f(β) and (2) π* = −f(β) + h(α, β), representing the solution to the problem of maximizing weighted Marshallian surplus using, simultaneously, a per-unit border intervention, τ, and a per-unit domestic intervention, π. In the solution, parameter α denotes the weight applied to producer surplus; parameter β denotes the weight applied to government revenues; consumer surplus is implicitly weighted one; and the country in question is small in the sense that it is unable to affect world price by any of its domestic adjustments (see the Appendix). Details of the forms of the functions f(β) and h(α, β) are easily derived, but what matters in the context of Alston, Smith, and Vercammen's Comment is: redistributive preferences that favor producers are consistent with higher values of "alpha," and whereas the optimal domestic intervention, π*, has both "alpha and beta effects," the optimal border intervention, τ*, has only a "beta effect": it does not have a redistributional role.

Garth Holloway is reader in agricultural economics and statistics, Department of Agricultural and Food Economics, School of Agriculture, Policy, and Development, University of Reading. The author is very grateful to Xavier Irz, Bhavani Shankar, Chittur Srinivasan, Colin Thirtle, and Richard Tiffin for their comments and their wisdom; and to Mario Mazzochi, Marinos Tsigas, and Cal Turvey for their scholarship, including help in tracking down a fairly complete collection of the papers that cite Alston and Hurd. They are not responsible for any errors or omissions.
Note, in equation (1), that the border intervention is positive whenever a distortion exists, because δ > 0 implies β = 1 + δ > 1 and, thus, f(β) > 0 (see Appendix). Using Alston, Smith, and Vercammen's definition, the instrument is now "efficient," and therefore has a redistributive role. But now suppose that the distortion is removed, so that β = 1 + δ = 1, δ = 0, and consequently the border intervention is zero. According to Alston, Smith, and Vercammen, the instrument is now "inefficient" and has no redistributive role. The reader will note that this thought experiment has said nothing about supporting farm incomes, and so has nothing whatsoever to do with efficient redistribution. Of course, the definition is false. It follows that a domestic distortion arising from the "excess-burden argument" (β = 1 + δ, δ > 0) does not make an export subsidy "efficient." The export subsidy, having only a "beta effect," does not have a redistributional role. The second disagreement emerges from the comment that Holloway "... uses an idiosyncratic definition of the relevant objective function of the government" (Alston, Smith, and Vercammen, p. 543, paragraph 2). The objective function that generates equations (1) and (2) (see the Appendix) is the same as the objective function used by Gardner (1995) when he first questioned Alston, Carter, and Smith's claim that a "domestic distortion can make a border intervention efficient in transferring surplus from consumers and taxpayers to farmers." The objective function used by Gardner (1995) is the same objective function used in the contributions that precede it and thus defines the literature on the debate about border-versus-domestic intervention (Streeten; Yeh; Paarlberg 1984, 1985; Orden; Gardner 1985). The objective function in the latter literature is the same as the one implied in another literature that originates from Wallace and includes most notably Gardner (1983), but also Alston and Hurd. Amer. J. Agr. Econ.
86(2) (May 2004): 549-552. Copyright 2004 American Agricultural Economics Association. The objective function in Holloway is this same objective function: it is, of course, Marshallian surplus. The third disagreement concerns scholarship. The Comment does not seem to be cognizant of several important papers, especially Bhagwati and Ramaswami, and Bhagwati, both of which precede Corden (1974, 1997); but also Lipsey and Lancaster, and Moschini and Sckokai; one important aspect of Alston and Hurd; and one extremely important result in Holloway. This oversight has some unfortunate repercussions. First, it misdirects to the wrong origins of intellectual property. Second, it misleads about the appropriateness of some welfare calculations. Third, it prevents Alston, Smith, and Vercammen from linking a finding in Holloway (pp. 242-43) with an old theorem (Lipsey and Lancaster) that settles the controversy (Alston, Carter, and Smith 1993, 1995; Gardner 1995; and, presently, Alston, Smith, and Vercammen) about the efficiency of border intervention in the presence of domestic distortions.
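The weighted-surplus programme referred to above can be sketched as follows. This is a hypothetical reconstruction from the notation in the text, not the author's exact specification; CS, PS, and GR denote consumer surplus, producer surplus, and government revenue:

```latex
\max_{\tau,\pi} \; W(\tau,\pi)
  = CS(\tau,\pi) + \alpha\, PS(\tau,\pi) + \beta\, GR(\tau,\pi),
\qquad
\frac{\partial W}{\partial \tau} = 0, \quad
\frac{\partial W}{\partial \pi} = 0
\;\Rightarrow\;
\tau^{*} = f(\beta), \quad
\pi^{*} = -f(\beta) + h(\alpha,\beta).
```

Under this reading, only the domestic instrument's solution carries the producer-surplus weight α (the "alpha effect"), which is the substance of the first disagreement.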
Abstract:
The development of effective environmental management plans and policies requires a sound understanding of the driving forces involved in shaping and altering the structure and function of ecosystems. However, driving forces, especially anthropogenic ones, are defined and operate at multiple administrative levels, which do not always match ecological scales. This paper presents an innovative methodology for analysing drivers of change by developing a typology of scale sensitivity of drivers that classifies and describes the way they operate across multiple administrative levels. Scale sensitivity varies considerably among drivers, which can be classified into five broad categories depending on the response of 'evenness' and 'intensity change' when moving across administrative levels. Indirect drivers tend to show low scale sensitivity, whereas direct drivers show high scale sensitivity, as they operate in a non-linear way across the administrative scale. Thus policies addressing direct drivers of change, in particular, need to take scale into consideration during their formulation. Moreover, such policies must have a strong spatial focus, which can be achieved either by encouraging local-regional policy making or by introducing high flexibility in (inter)national policies to accommodate increased differentiation at lower administrative levels. High-quality data are available for several drivers; however, the lack of consistent data at all levels for non-anthropogenic drivers is a major constraint to mapping and assessing their scale sensitivity. This lack of data may hinder effective policy making for environmental management, since it restricts the ability to fully account for the scale sensitivity of natural drivers in policy design.
Abstract:
To understand whether genotypic variation in root-associated phosphatase activities in wheat impacts on its ability to acquire phosphorus (P), various phosphatase activities of roots were measured in relation to the utilization of organic P substrates in agar, and the P-nutrition of plants was investigated in a range of soils. Root-associated phosphatase activities of plants grown in hydroponics were measured against different organic P substrates. Representative genotypes were then grown in both agar culture and in soils with differing organic P contents and plant biomass and P uptake were determined. Differences in the activities of both root-associated and exuded phosphodiesterase and phosphomonoesterase were observed, and were related to the P content of plants supplied with either ribonucleic acid or glucose 6-phosphate, respectively, as the sole form of P. When the cereal lines were grown in different soils, however, there was little relationship between any root-associated phosphatase activity and plant P uptake. This indicates that despite differences in phosphatase activities of cereal roots, such variability appears to play no significant role in the P-nutrition of the plant grown in soil, and that any benefit derived from the hydrolysis of soil organic P is common to all genotypes.
Abstract:
The evolution of stratospheric ozone from 1960 to 2100 is examined in simulations from 14 chemistry‐climate models, driven by prescribed levels of halogens and greenhouse gases. There is general agreement among the models that total column ozone reached a minimum around year 2000 at all latitudes, projected to be followed by an increase over the first half of the 21st century. In the second half of the 21st century, ozone is projected to continue increasing, level off, or even decrease depending on the latitude. Separation into partial columns above and below 20 hPa reveals that these latitudinal differences are almost completely caused by differences in the model projections of ozone in the lower stratosphere. At all latitudes, upper stratospheric ozone increases throughout the 21st century and is projected to return to 1960 levels well before the end of the century, although there is a spread among models in the dates that ozone returns to specific historical values. We find decreasing halogens and declining upper atmospheric temperatures, driven by increasing greenhouse gases, contribute almost equally to increases in upper stratospheric ozone. In the tropical lower stratosphere, an increase in upwelling causes a steady decrease in ozone through the 21st century, and total column ozone does not return to 1960 levels in most of the models. In contrast, lower stratospheric and total column ozone in middle and high latitudes increases during the 21st century, returning to 1960 levels well before the end of the century in most models.
Abstract:
Planning is one of the key problems for autonomous vehicles operating in road scenarios. Present planning algorithms operate with the assumption that traffic is organised in predefined speed lanes, which makes it impossible to allow autonomous vehicles in countries with unorganised traffic. Unorganised traffic is, though, capable of higher traffic bandwidth when the constituent vehicles vary in their speed capabilities and sizes. Diverse vehicles in unorganised traffic exhibit unique driving behaviours, which are analysed in this paper by a simulation study. The aim of the work reported here is to create a planning algorithm for mixed traffic consisting of both autonomous and non-autonomous vehicles without any inter-vehicle communication. The awareness (e.g. vision) of every vehicle is restricted to nearby vehicles only, and a straight infinite road is assumed for decision making regarding navigation in the presence of multiple vehicles. Exhibited behaviours include obstacle avoidance, overtaking, giving way for vehicles to overtake from behind, vehicle following, adjusting the lateral lane position, and so on. A conflict of plans is a major issue which will almost certainly arise in the absence of inter-vehicle communication. Hence each vehicle needs to continuously track other vehicles and rectify its plans whenever a collision seems likely. Further, it is observed here that driver aggression plays a vital role in overall traffic dynamics, hence this has also been factored in accordingly. This work is hence a step forward towards achieving autonomous vehicles in unorganised traffic, while similar effort would be required for planning problems such as intersections, mergers, diversions and other modules like localisation.
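The track-and-rectify loop described above can be sketched as follows. This is a hypothetical minimal illustration, not the paper's algorithm: the `Vehicle` type, the 5 m longitudinal margin, the constant-velocity prediction, and the fixed lateral offsets are all invented for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    x: float      # longitudinal position (m)
    y: float      # lateral position within the road width (m)
    vx: float     # forward speed (m/s)
    width: float  # vehicle width (m)

def first_conflict_time(ego, other, horizon=5.0, dt=0.1):
    """Scan the planning horizon assuming constant velocities; return
    the earliest time at which the predicted positions overlap both
    longitudinally (within 5 m) and laterally, else None."""
    t = 0.0
    while t <= horizon:
        ex, ox = ego.x + ego.vx * t, other.x + other.vx * t
        if abs(ex - ox) < 5.0 and abs(ego.y - other.y) < (ego.width + other.width) / 2:
            return t
        t += dt
    return None

def replan(ego, others, road_width=7.0):
    """If any nearby vehicle conflicts with the current plan, shift
    laterally toward the freest gap; otherwise keep the current lateral
    position. No fixed lanes: any offset within the road is allowed."""
    conflicts = [o for o in others if first_conflict_time(ego, o) is not None]
    if not conflicts:
        return ego.y
    # Candidate lateral offsets, clamped to the road edges.
    candidates = [min(max(ego.y + d, ego.width / 2), road_width - ego.width / 2)
                  for d in (-1.5, 1.5)]
    # Pick the offset maximising clearance to the nearest conflicting vehicle.
    return max(candidates, key=lambda y: min(abs(y - o.y) for o in conflicts))
```

In a full planner this check would run continuously for every tracked vehicle, with the lateral shift magnitude modulated by the driver-aggression parameter the paper discusses.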
Abstract:
Context: Emotion regulation is critically disrupted in depression, and paradigms tapping these processes may uncover essential changes in neurobiology during treatment. In addition, as neuroimaging outcome studies of depression commonly utilize solely baseline and endpoint data, which are more prone to week-to-week noise in symptomatology, we sought to use all data points over the course of a six-month trial. Objective: To examine changes in neurobiology resulting from successful treatment. Design: Double-blind trial examining changes in the neural circuits involved in emotion regulation resulting from one of two antidepressant treatments over a six-month trial. Participants were scanned pretreatment, at 2 months, and at 6 months post-treatment. Setting: University functional magnetic resonance imaging facility. Participants: 21 patients with Major Depressive Disorder and without other Axis I or Axis II diagnoses, and 14 healthy controls. Interventions: Venlafaxine XR (doses up to 300 mg) or fluoxetine (doses up to 80 mg). Main Outcome Measure: Neural activity, as measured using functional magnetic resonance imaging during performance of an emotion regulation paradigm, as well as regular assessments of symptom severity using the Hamilton Rating Scale for Depression. To utilize all data points, slope trajectories were calculated for the rate of change in depression severity as well as the rate of change of neural engagement. Results: The depressed individuals showing the steepest decrease in depression severity over the six months were those showing the most rapid increases in BA10 and right DLPFC activity when regulating negative affect over the same time frame. This relationship was more robust than when using solely the baseline and endpoint data. Conclusions: Changes in PFC engagement when regulating negative affect correlate with changes in depression severity over six months. These results are buttressed by the slope statistics, which are more reliable and robust to week-to-week variation than difference scores.
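The slope-trajectory idea can be sketched minimally as below, assuming an ordinary least-squares fit per participant (the study's exact estimator is not specified here); the contrast with an endpoint-minus-baseline difference score is the point of the sketch:

```python
import numpy as np

def slope_trajectory(timepoints, scores):
    """Least-squares slope of symptom severity (or neural engagement)
    across ALL assessments, rather than only the first and last."""
    t = np.asarray(timepoints, dtype=float)
    y = np.asarray(scores, dtype=float)
    slope, _intercept = np.polyfit(t, y, 1)
    return slope

def difference_score(scores):
    """Endpoint minus baseline: uses only two observations, so a noisy
    week at either end dominates the estimate."""
    return scores[-1] - scores[0]
```

Because every assessment contributes to the fitted slope, a single anomalous week moves the slope far less than it moves the difference score, which is why the slope-based brain-behaviour relationship reported above was the more robust one.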
Abstract:
Many studies have reported long-range synchronization of neuronal activity between brain areas, in particular in the beta and gamma bands with frequencies in the range of 14-30 and 40-80 Hz, respectively. Several studies have reported synchrony with zero phase lag, which is remarkable considering the synaptic and conduction delays inherent in the connections between distant brain areas. This result has led to many speculations about the possible functional role of zero-lag synchrony, such as for neuronal communication, attention, memory, and feature binding. However, recent studies using recordings of single-unit activity and local field potentials report that neuronal synchronization may occur with non-zero phase lags. This raises the questions of whether zero-lag synchrony can occur in the brain and, if so, under which conditions. We used analytical methods and computer simulations to investigate which connectivity between neuronal populations allows or prohibits zero-lag synchrony. We did so for a model in which two oscillators interact via a relay oscillator. Analytical results and computer simulations were obtained for both type I Mirollo-Strogatz neurons and type II Hodgkin-Huxley neurons. We investigated the dynamics of the model for various types of synaptic coupling and, importantly, considered the potential impact of Spike-Timing Dependent Plasticity (STDP) and its learning window. We confirm previous results that zero-lag synchrony can be achieved in this configuration. This is much easier to achieve with Hodgkin-Huxley neurons, which have a biphasic phase response curve, than with type I neurons. STDP facilitates zero-lag synchrony as it adjusts the synaptic strengths such that zero-lag synchrony is feasible for a much larger range of parameters than without STDP.
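The relay configuration can be illustrated with a toy simulation. The sketch below uses simple Kuramoto-style phase oscillators rather than the Mirollo-Strogatz or Hodgkin-Huxley neurons studied in the paper, and a frequency-detuned relay stands in for the effect of delays; it only demonstrates the core geometric point, that symmetric coupling through a relay lets the two outer units lock at (near) zero lag with each other even while each keeps a nonzero lag to the relay.

```python
import math

def simulate_relay(theta1=0.0, theta_r=1.0, theta2=2.5,
                   omega=1.0, omega_r=1.2, k=0.8,
                   dt=0.01, steps=20000):
    """Two outer phase oscillators coupled only via a central relay
    (no direct outer-to-outer connection). Returns the wrapped phase
    difference between the outers and the lag of outer 1 to the relay."""
    for _ in range(steps):
        d1 = omega + k * math.sin(theta_r - theta1)
        dr = omega_r + k * (math.sin(theta1 - theta_r)
                            + math.sin(theta2 - theta_r))
        d2 = omega + k * math.sin(theta_r - theta2)
        theta1, theta_r, theta2 = (theta1 + d1 * dt,
                                   theta_r + dr * dt,
                                   theta2 + d2 * dt)

    def wrap(x):  # wrap a phase difference into (-pi, pi]
        return (x + math.pi) % (2 * math.pi) - math.pi

    return wrap(theta1 - theta2), wrap(theta1 - theta_r)
```

By symmetry, the locked state satisfies θ1 = θ2 with sin(θ1 − θr) = (ω − ω_r)/(3k), so the outers end up at zero lag while each leads or lags the relay, mirroring the zero-lag-through-a-relay mechanism discussed above.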
Abstract:
This study focuses on the analysis of winter (October-November-December-January-February-March; ONDJFM) storm events and their changes due to increased anthropogenic greenhouse gas concentrations over Europe. In order to assess uncertainties that are due to model formulation, 4 regional climate models (RCMs) with 5 high-resolution experiments, and 4 global general circulation models (GCMs) are considered. Firstly, cyclone systems, as synoptic-scale processes in winter, are investigated, as they are a principal cause of the occurrence of extreme, damage-causing wind speeds. This is achieved by use of an objective cyclone identification and tracking algorithm applied to the GCMs. Secondly, changes in extreme near-surface wind speeds are analysed. Based on percentile thresholds, the studied extreme wind speed indices allow a consistent analysis over Europe that takes systematic deviations of the models into account. Relative changes in both intensity and frequency of extreme winds and their related uncertainties are assessed and related to changing patterns of extreme cyclones. A common feature of all investigated GCMs is a reduced track density over central Europe under climate change conditions, if all systems are considered. If only extreme (i.e. the strongest 5%) cyclones are taken into account, an increasing cyclone activity for western parts of central Europe is apparent; however, the climate change signal reveals a reduced spatial coherency when compared to all systems, which exposes partially contrary results. With respect to extreme wind speeds, significant positive changes in intensity and frequency are obtained over at least 3 and 20% of the European domain under study (35-72°N and 15°W-43°E), respectively.
Location and extension of the affected areas (up to 60 and 50% of the domain for intensity and frequency, respectively), as well as levels of changes (up to +15 and +200% for intensity and frequency, respectively) are shown to be highly dependent on the driving GCM, whereas differences between RCMs when driven by the same GCM are relatively small.
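A percentile-threshold index of the kind described can be sketched as follows; the 98th-percentile default and the exact definitions of the frequency and intensity changes are illustrative assumptions, not the paper's precise indices:

```python
import numpy as np

def extreme_wind_indices(wind_ctrl, wind_scen, pct=98.0):
    """Percentile-threshold extreme wind indices. The threshold is
    defined per model from its own control climate, so systematic
    model wind-speed biases largely cancel when scenario exceedances
    are compared against control exceedances."""
    thr = np.percentile(wind_ctrl, pct)        # model-specific threshold
    exceed_c = wind_ctrl[wind_ctrl > thr]
    exceed_s = wind_scen[wind_scen > thr]
    # relative change (%) in how often the threshold is exceeded
    freq_change = 100.0 * (exceed_s.size - exceed_c.size) / exceed_c.size
    # relative change (%) in the mean wind speed of exceedances
    inten_change = 100.0 * (exceed_s.mean() - exceed_c.mean()) / exceed_c.mean()
    return freq_change, inten_change
```

Computing the threshold from each model's own climatology, rather than using one absolute wind speed for all models, is what allows the consistent multi-model comparison over Europe described above.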
Abstract:
A version of the Canadian Middle Atmosphere Model (CMAM) that is nudged toward reanalysis data up to 1 hPa is used to examine the impacts of parameterized orographic and non-orographic gravity wave drag (OGWD and NGWD) on the zonal-mean circulation of the mesosphere during the extended northern winters of 2006 and 2009 when there were two large stratospheric sudden warmings. The simulations are compared to Aura Microwave Limb Sounder (MLS) observations of mesospheric temperature, carbon monoxide (CO) and derived zonal winds. The control simulation, which uses both OGWD and NGWD, is shown to be in good agreement with MLS. The impacts of OGWD and NGWD are assessed using simulations in which those sources of wave drag are removed. In the absence of OGWD the mesospheric zonal winds in the months preceding the warmings are too strong, causing increased mesospheric NGWD, which drives excessive downwelling, resulting in overly large lower mesospheric values of CO prior to the warming. NGWD is found to be most important following the warmings when the underlying westerlies are too weak to allow much vertical propagation of the orographic gravity waves to the mesosphere. NGWD is primarily responsible for driving the circulation that results in the descent of CO from the thermosphere following the warmings. Zonal mean mesospheric winds and temperatures in all simulations are shown to be strongly constrained by (i.e. slaved to) the stratosphere. Finally, it is demonstrated that the responses to OGWD and NGWD are non-additive due to their dependence and influence on the background winds and temperatures.
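The nudging (Newtonian relaxation) used to constrain the model toward reanalysis can be sketched schematically as below; the relaxation timescale `tau` and the simple pressure-level masking are generic illustrations of the technique, not CMAM's actual implementation:

```python
import numpy as np

def nudge_step(state, reanalysis, dt, tau, p, p_top=1.0):
    """One time step of Newtonian relaxation toward a reanalysis field,
    applied only at pressures greater than p_top (i.e. below ~1 hPa);
    above that level the model evolves freely, which is how the
    mesosphere remains free to respond to the parameterized wave drag."""
    state = np.asarray(state, dtype=float).copy()
    mask = np.asarray(p) > p_top               # levels below the nudging top
    state[mask] += (np.asarray(reanalysis)[mask] - state[mask]) * dt / tau
    return state
```

Because the relaxation term vanishes above `p_top`, the stratosphere is pinned to the observed evolution (including the sudden warmings) while the simulated mesosphere, where the OGWD and NGWD impacts are assessed, is left unconstrained.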