82 results for mean action time
Abstract:
Time correlation functions yield profound information about the dynamics of a physical system and hence are frequently calculated in computer simulations. For systems whose dynamics span a wide range of time scales, currently used methods require significant computer time and memory. In this paper, we discuss the multiple-tau correlator method for the efficient calculation of accurate time correlation functions on the fly during computer simulations. The multiple-tau correlator is efficacious in terms of computational requirements and can be tuned to the desired level of accuracy. Further, we derive estimates for the error arising from the use of the multiple-tau correlator and extend it for use in the calculation of mean-square particle displacements and dynamic structure factors. The method described here is routinely used, in hardware implementations, in light-scattering experiments, but has not yet found widespread use in computer simulations.
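To make the scheme concrete, here is a minimal Python sketch of a hierarchical multiple-tau autocorrelator of the kind the abstract describes. The class name and the parameter choices (buffer length p = 16, averaging factor m = 2, 20 levels) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

class MultiTauCorrelator:
    """Hierarchical ("multiple-tau") on-the-fly autocorrelator sketch.

    Level 0 correlates the raw signal at lags 0..p-1; each higher level
    receives block averages of m consecutive values from the level below
    and correlates them at lags p//m..p-1 (in units of its own, coarser,
    sampling time).  Accuracy is tuned through p; m = 2 is typical.
    """

    def __init__(self, p=16, m=2, levels=20):
        self.p, self.m = p, m
        self.shift = np.zeros((levels, p))      # ring buffers, newest first
        self.nstored = np.zeros(levels, int)    # samples seen per level
        self.corr = np.zeros((levels, p))       # accumulated products
        self.ncorr = np.zeros((levels, p))      # number of products per lag
        self.accum = np.zeros(levels)           # block-average accumulator
        self.naccum = np.zeros(levels, int)

    def add(self, x, level=0):
        if level >= self.shift.shape[0]:
            return
        # push the new sample to the front of this level's history
        self.shift[level] = np.roll(self.shift[level], 1)
        self.shift[level, 0] = x
        self.nstored[level] += 1
        buf = self.shift[level]
        kmin = 0 if level == 0 else self.p // self.m
        kmax = min(self.p, self.nstored[level])
        for k in range(kmin, kmax):
            self.corr[level, k] += buf[0] * buf[k]
            self.ncorr[level, k] += 1
        # accumulate a block average for the next (coarser) level
        self.accum[level] += x
        self.naccum[level] += 1
        if self.naccum[level] == self.m:
            self.add(self.accum[level] / self.m, level + 1)
            self.accum[level] = 0.0
            self.naccum[level] = 0

    def result(self):
        """Return (lag times in raw sampling units, averaged correlations)."""
        taus, acf = [], []
        for lev in range(self.shift.shape[0]):
            kmin = 0 if lev == 0 else self.p // self.m
            for k in range(kmin, self.p):
                if self.ncorr[lev, k] > 0:
                    taus.append(k * self.m**lev)
                    acf.append(self.corr[lev, k] / self.ncorr[lev, k])
        order = np.argsort(taus)
        return np.asarray(taus)[order], np.asarray(acf)[order]

# usage: feed a signal sample by sample, then read off (tau, ACF) pairs
rng = np.random.default_rng(0)
c = MultiTauCorrelator()
for x in rng.standard_normal(100_000):
    c.add(x)
taus, acf = c.result()
```

The quasi-logarithmic lag spacing is what keeps both memory and computer time small when the dynamics span many decades of time.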
Abstract:
This paper considers PID control in terms of its implementation by means of an ARMA plant model. Two controller actions are considered, namely pole placement and deadbeat, both being applied via a PID structure for the adaptive real-time control of an industrial level system. As well as looking at the two controller types separately, a comparison is made between the forms, and it is shown how, under certain circumstances, the two forms can be seen to be identical. It is also shown that the pole-placement PID form does not, in fact, realise an action equivalent to the deadbeat controller when all closed-loop poles are chosen to be at the origin of the z-plane.
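The abstract does not reproduce the control law, but a digital PID of this kind is typically implemented in incremental (velocity) form. The sketch below is a generic example of that form, not the paper's design: the gain names are illustrative, and a pole-placement or deadbeat design would choose the coefficients q0, q1, q2 from the ARMA plant model rather than from continuous-time gains.

```python
def make_discrete_pid(kp, ki, kd, ts):
    """Incremental (velocity-form) discrete PID:
        u(k) = u(k-1) + q0*e(k) + q1*e(k-1) + q2*e(k-2)
    A pole-placement or deadbeat design amounts to choosing q0, q1, q2
    so that the closed loop with the plant model has the desired poles
    (all at z = 0 for deadbeat)."""
    q0 = kp + ki * ts + kd / ts
    q1 = -kp - 2.0 * kd / ts
    q2 = kd / ts
    e1 = e2 = u1 = 0.0  # stored past error and control values

    def step(e):
        nonlocal e1, e2, u1
        u = u1 + q0 * e + q1 * e1 + q2 * e2
        e2, e1, u1 = e1, e, u
        return u

    return step
```

Since deadbeat control places every closed-loop pole at z = 0, the abstract's point is that choosing those pole locations inside the pole-placement PID structure still does not reproduce the deadbeat action.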
Abstract:
We show that an analysis of the mean and variance of discrete wavelet coefficients of coaveraged time-domain interferograms can be used as a criterion for determining when to stop coaveraging. We also show that, if a prediction model built in the wavelet domain is used to determine the composition of unknown samples, a stopping criterion for the coaveraging process can be developed with respect to the uncertainty tolerated in the prediction.
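As a rough illustration of the idea (not the authors' actual statistic), one can track a summary of the detail-coefficient distribution of the running co-average and stop once it stabilises. The wavelet, decomposition level, and tolerance below are arbitrary assumptions, and the PyWavelets package is assumed to be available.

```python
import numpy as np
import pywt  # PyWavelets

def coaverage_until_stable(scans, wavelet="db4", level=4, tol=1e-3):
    """Co-average interferogram scans one at a time; after each scan,
    compute the variance of the detail coefficients of the running
    average and stop once its relative change falls below `tol`."""
    running = np.zeros_like(scans[0], dtype=float)
    prev_var = None
    for n, scan in enumerate(scans, start=1):
        running += (scan - running) / n          # incremental mean
        coeffs = pywt.wavedec(running, wavelet, level=level)
        details = np.concatenate(coeffs[1:])     # all detail bands
        var = details.var()
        if prev_var and abs(var - prev_var) / prev_var < tol:
            return running, n                    # stop: coefficients stable
        prev_var = var
    return running, len(scans)
```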
Abstract:
In two separate studies, the cholesterol-lowering efficacy of a diet high in monounsaturated fatty acids (MUFA) was evaluated by means of a randomized crossover trial. In both studies subjects were randomized to receive either a high-MUFA diet or the control diet first, which they followed for a period of 8 weeks; following a washout period of 4–6 weeks they were transferred onto the opposing diet for a further period of 8 weeks. In one study subjects were healthy middle-aged men (n 30), and in the other they were young men (n 23) with a family history of CHD recruited from two centres (Guildford and Dublin). The two studies were conducted over the same time period using identical foods and study designs. Subjects consumed 38% energy as fat, with 18% energy as MUFA and 10% as saturated fatty acids (MUFA diet), or 13% energy as MUFA and 16% as saturated fatty acids (control diet). The polyunsaturated fatty acid content of each diet was 7%. The diets were achieved by providing subjects with manufactured foods such as spreads, ‘ready meals’, biscuits, puddings and breads, which, apart from their fatty acid compositions, were identical for both diets. Subjects were blind to which of the diets they were following on both arms of the study. Weight changes on the diets were less than 1 kg. In the groups combined (n 53) mean total and LDL-cholesterol levels were significantly lower at the end of the MUFA diet than the control diet by 0.29 (SD 0.61) mmol/l (P < 0.001) and 0.38 (SD 0.64) mmol/l (P < 0.0001) respectively. In middle-aged men these differences were due to a mean reduction in LDL-cholesterol of −11 (SD 12) % on the MUFA diet with no change on the control diet (−1.1 (SD 10) %). In young men the differences were due to an increase in LDL-cholesterol concentration on the control diet of +6.2 (SD 13) % and a decrease on the MUFA diet of −7.8 (SD 20) %. Differences in the responses of middle-aged and young men to the two diets did not appear to be due to differences in their habitual baseline diets, which were generally similar, but appeared to reflect the lower baseline cholesterol concentrations in the younger men. There was a moderately strong and statistically significant inverse correlation between the change in LDL-cholesterol concentration on each diet and the baseline fasting LDL-cholesterol concentration (r −0.49; P < 0.0005). In conclusion, diets in which saturated fat is partially replaced by MUFA can achieve significant reductions in total and LDL-cholesterol concentrations, even when total fat and energy intakes are maintained. The dietary approach used to alter fatty acid intakes would be appropriate for achieving reductions in saturated fat intakes in whole populations.
Abstract:
A novel Neuropredictive Teleoperation (NPT) scheme is presented. The design results from two key ideas: the exploitation of the measured or estimated neural input to the human arm, or its electromyograph (EMG), as the system input, and the employment of a predictor of the arm movement, based on this neural signal and an arm model, to compensate for time delays in the system. Although a multitude of such models, as well as measuring devices for the neural signals and the EMG, have been proposed, current telemanipulator research has so far considered only highly simplified arm models. In the present design, the bilateral constraint that the master and slave are simultaneously compliant to each other's state (equal positions and forces) is abandoned, thus obtaining a simple-to-analyse succession of only locally controlled modules, and a robustness to time delays of up to 500 ms. The proposed design was inspired by well-established physiological evidence that the brain, rather than controlling the movement on-line, programs the arm with an action plan of a complete movement, which is then executed largely in open loop, regulated only by local reflex loops. As a model of the human arm the well-established Stark model is employed, whose mathematical representation is modified to make it suitable for an engineering application. The proposed scheme is, however, valid for any arm model. BIBO-stability and passivity results for a variety of local control laws are reported. Simulation results and comparisons with traditional designs also highlight the advantages of the proposed design.
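The core mechanism, rolling an arm model forward over the already-measured neural/EMG input to bridge the communication delay, can be sketched generically. The snippet below is schematic only: `f` stands in for whatever arm model is used (the paper's modified Stark model is not reproduced here), and forward-Euler integration is an arbitrary choice.

```python
import numpy as np

def predict_arm_state(x0, u_history, f, dt, delay_steps):
    """Predict the arm state `delay_steps` samples ahead by integrating
    a (placeholder) arm model f(x, u) over the known input history,
    compensating the round-trip communication delay.

    u_history must hold at least `delay_steps` neural/EMG-derived input
    samples, which are available in advance because the neural signal
    precedes the actual movement."""
    x = np.asarray(x0, dtype=float)
    for k in range(delay_steps):
        x = x + dt * f(x, u_history[k])   # forward-Euler step of the model
    return x
```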
Abstract:
In an adaptive equaliser, the time lag is an important parameter that significantly influences performance. Only with the optimum time lag, corresponding to the best minimum mean-square error (MMSE) performance, can the available resources be put to best use. Many designs, however, choose the time lag either on the basis of assumptions about the channel or simply from average experience. The relation between the MMSE performance and the time lag is investigated using a new interpretation of the MMSE equaliser, and a novel adaptive time-lag algorithm is then proposed based on gradient search. The proposed algorithm converges to the optimum time lag in the mean, as verified by the numerical simulations provided.
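The paper's gradient-search algorithm is not given in the abstract; as background, the dependence of the MMSE on the decision delay can be shown by brute force for a known FIR channel. The sketch below computes the Wiener (MMSE) equaliser and its residual error at every candidate lag; the parameter names and the unit-power-symbol assumption are illustrative.

```python
import numpy as np

def mmse_vs_lag(h, snr_db, n_taps=11):
    """MMSE of a linear equaliser as a function of the decision delay
    (time lag) for a known FIR channel h: a brute-force illustration of
    why the lag matters. The paper instead adapts the lag by gradient
    search rather than scanning all candidates."""
    sigma2 = 10 ** (-snr_db / 10)          # noise variance (unit symbols)
    L = len(h)
    # Channel convolution matrix: received window r = H s + noise
    H = np.zeros((n_taps, n_taps + L - 1))
    for i in range(n_taps):
        H[i, i:i + L] = h
    R = H @ H.T + sigma2 * np.eye(n_taps)  # received autocorrelation
    mmse = []
    for d in range(n_taps + L - 1):        # candidate decision delays
        p = H[:, d]                        # cross-correlation with s(n-d)
        w = np.linalg.solve(R, p)          # Wiener (MMSE) equaliser taps
        mmse.append(1.0 - p @ w)           # residual MMSE at this lag
    return np.array(mmse)                  # argmin gives the optimum lag
```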
Abstract:
A time-dependent climate-change experiment with a coupled ocean–atmosphere general circulation model has been used to study changes in the occurrence of drought in summer in southern Europe and central North America. In both regions, precipitation and soil moisture are reduced in a climate of greater atmospheric carbon dioxide. A detailed investigation of the hydrology of the model shows that the drying of the soil comes about through an increase in evaporation in winter and spring, caused by higher temperatures and reduced snow cover, and a decrease in the net input of water in summer. Evaporation is reduced in summer because of the drier soil, but the reduction in precipitation is larger. Three extreme statistics are used to define drought, namely the frequency of low summer precipitation, the occurrence of long dry spells, and the probability of dry soil. The last of these is arguably of the greatest practical importance but, since it is based on soil moisture, of which there are very few observations, it is the statistic in whose simulation the authors have least confidence. Furthermore, long time series of daily observed precipitation are not readily available from a sufficient number of stations to enable a thorough evaluation of the model simulation, especially for the frequency of long dry spells, and this increases the systematic uncertainty of the model predictions. All three drought statistics show marked increases owing to the sensitivity of extreme statistics to changes in their distributions. However, the greater likelihood of long dry spells is caused by a tendency in the character of daily rainfall toward fewer events, rather than by the reduction in mean precipitation. The results should not be taken as firm predictions, because extreme statistics for small regions cannot be calculated reliably from the output of the current generation of GCMs, but they point to the possibility of large increases in the severity of drought conditions as a consequence of climate change caused by increased CO2.
Abstract:
Synoptic-scale air flow variability over the United Kingdom is measured on a daily time scale by following previous work to define 3 indices: geostrophic flow strength, vorticity and direction. Comparing the observed distribution of air flow index values with those determined from a simulation with the Hadley Centre’s global climate model (HadCM2) identifies some minor systematic biases in the model’s synoptic circulation but demonstrates that the major features are well simulated. The relationship between temperature and precipitation from parts of the United Kingdom and these air flow indices (either singly or in pairs) is found to be very similar in both the observations and model output; indeed the simulated and observed precipitation relationships are found to be almost interchangeable in a quantitative sense. These encouraging results imply that some reliability can be assumed for single grid-box and regional output from this climate model; this applies only to those grid boxes evaluated here (which do not have high or complex orography), only to the portion of variability that is controlled by synoptic air flow variations, and only to those surface variables considered here (temperature and precipitation).
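Indices of this kind are conventionally derived from gridded mean-sea-level pressure. The sketch below computes simplified geostrophic analogues of the three indices, not the exact definitions of the previous work the abstract follows; the air density and Coriolis parameter values are assumptions.

```python
import numpy as np

def flow_indices(p, dx, dy, rho=1.25, f=1.2e-4):
    """Geostrophic flow strength, direction and vorticity from a 2-D grid
    of mean-sea-level pressure p (Pa), with grid spacings dx, dy (m);
    axis 0 is y (northward), axis 1 is x (eastward)."""
    dpdy, dpdx = np.gradient(p, dy, dx)
    u = -dpdy / (rho * f)                  # geostrophic wind, eastward
    v = dpdx / (rho * f)                   # geostrophic wind, northward
    strength = np.hypot(u, v)
    # meteorological convention: direction the wind blows FROM (degrees)
    direction = (270.0 - np.degrees(np.arctan2(v, u))) % 360
    # geostrophic relative vorticity ~ Laplacian of pressure
    d2pdy2 = np.gradient(dpdy, dy, axis=0)
    d2pdx2 = np.gradient(dpdx, dx, axis=1)
    vorticity = (d2pdx2 + d2pdy2) / (rho * f)
    return strength, direction, vorticity
```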
Abstract:
Decision theory is the study of models of judgement involved in, and leading to, deliberate and (usually) rational choice. In real estate investment there are normative models for the allocation of assets. These asset allocation models suggest an optimum allocation between the respective asset classes based on the investors' judgements of performance and risk. Real estate is selected, like other assets, on the basis of some criterion: commonly, its marginal contribution to the production of a mean-variance efficient multi-asset portfolio, subject to the investor's objectives and capital-rationing constraints. However, decisions are made relative to current expectations and current business constraints. Whilst a decision maker may believe in the optimum exposure levels dictated by an asset allocation model, the final decision may, and often will, be influenced by factors outside the parameters of the mathematical model. This paper discusses investors' perceptions and attitudes toward real estate and highlights the important difference between theoretical exposure levels and pragmatic business considerations. It develops a model to identify “soft” parameters in decision making which will influence the optimal allocation for that asset class. This “soft” information may relate to behavioural issues such as the tendency to mirror competitors, a desire to meet weight-of-money objectives, a desire to retain the status quo, and many other non-financial considerations. The paper aims to establish the place of property in multi-asset portfolios in the UK and to examine the asset allocation process in practice, with a view to understanding the decision-making process, looking at investors' perceptions based on an historic analysis of market expectations, a comparison with historic data, and an analysis of actual performance.
Abstract:
Presents a method for model-based bilateral control of a master-slave arm with time delay between the master and slave arms, where the system supports cooperative action between manual and automatic modes. The method realises efficient master-slave arm control, combining the efficiency of computer control with the flexibility of a skilled human operator.
Abstract:
PV only generates electricity during daylight hours, and primarily generates over summer. In the UK, the carbon intensity of grid electricity is higher during the daytime and over winter. This work investigates whether the grid electricity displaced by PV is high or low carbon compared with the annual mean carbon intensity, using carbon factors at higher temporal resolutions (half-hourly and daily). UK policy for carbon reporting requires savings to be calculated using the annual mean carbon intensity of grid electricity; this work offers an insight into whether that technique is appropriate. Using half-hourly data on the generating plant supplying the grid from November 2008 to May 2010, carbon factors for grid electricity at half-hourly and daily resolution have been derived using technology-specific generation emission factors. Applying these factors to generation data from PV systems installed on schools, it is possible to assess the variation in the carbon savings from displacing grid electricity with PV generation using carbon factors with different time resolutions. The data have been analyzed for a period of 363 to 370 days and so cannot account for inter-year variations in the relationship between PV generation and the carbon intensity of the electricity grid. This analysis suggests that PV displaces more carbon-intensive electricity when assessed with half-hourly carbon factors than with daily factors, but less than when assessed with annual ones. A similar methodology could provide useful insights on other variable renewable and demand-side technologies, and in other countries where PV performance and grid behavior are different.
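The accounting step can be made concrete with a small sketch: a generation-weighted half-hourly carbon intensity is derived from the plant mix and technology-specific emission factors, and PV output is then credited at either that intensity or the annual mean. The data-frame names, column layout, and units below are assumptions, not the study's actual data structures.

```python
import pandas as pd

def carbon_savings(grid_mix, factors, pv_gen):
    """Compare PV carbon savings under half-hourly vs annual-mean factors.

    grid_mix : DataFrame, half-hourly generation (MWh) per technology
    factors  : Series, emission factor (kgCO2/MWh) per technology,
               indexed by the same technology names as grid_mix columns
    pv_gen   : Series, half-hourly PV generation (MWh), same time index
    """
    # half-hourly grid carbon intensity: generation-weighted mean factor
    ci = grid_mix.mul(factors, axis=1).sum(axis=1) / grid_mix.sum(axis=1)
    annual_mean = ci.mean()
    savings_hh = (pv_gen * ci).sum()             # half-hourly accounting
    savings_annual = pv_gen.sum() * annual_mean  # annual-mean accounting
    return savings_hh, savings_annual
```

Comparing the two returned totals reproduces, in miniature, the question the paper asks of the UK reporting convention.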
Abstract:
Existing research on synchronous remote working in CSCW has highlighted the troubles that can arise because actions at one site are (partially) unavailable to remote colleagues. Such ‘local action’ is routinely characterised as a nuisance, a distraction, subordinate and the like. This paper explores interconnections between ‘local action’ and ‘distributed work’ in the case of a research team virtually collocated through ‘MiMeG’. MiMeG is an e-Social Science tool that facilitates ‘distributed data sessions’ in which social scientists are able to remotely collaborate on the real-time analysis of video data. The data are visible and controllable in a shared workspace and participants are additionally connected via audio conferencing. The findings reveal that whilst the (partial) unavailability of local action is at times problematic, it is also used as a resource for coordinating work. The paper considers how local action is interactionally managed in distributed data sessions and concludes by outlining implications of the analysis for the design and study of technologies to support group-to-group collaboration.
Abstract:
The family of theories dubbed ‘luck egalitarianism’ represents an attempt to infuse egalitarian thinking with a concern for personal responsibility, arguing that inequalities are just when they result from, or to the extent that they result from, choice, but are unjust when they result from, or to the extent that they result from, luck. In this essay I argue that luck egalitarians should sometimes seek to limit inequalities, even when they have a fully choice-based pedigree (i.e., result only from the choices of agents). I grant that the broad approach is correct, but argue that the temporal standpoint from which we judge whether the person can be held responsible, or the extent to which they can be held responsible, should be radically altered. Instead of asking, as Standard (or Static) Luck Egalitarianism seems to, whether or not, or to what extent, a person was responsible for the choice at the time of choosing, and asking the question of responsibility only once, we should ask whether, or to what extent, they are responsible for the choice at the point at which we are seeking to discover whether, or to what extent, the inequality is just; the question of responsibility is thus not settled once and for all but constantly under review. Such an approach will differ from Standard Luck Egalitarianism only if responsibility for a choice is not set in stone – if responsibility can weaken, then we should not see the boundary between luck and responsibility within a particular action as static. Drawing on Derek Parfit’s illuminating discussions of personal identity, and on the contemporary literature on moral responsibility, I suggest there are good reasons to think that responsibility can weaken – that we are not necessarily fully responsible for a choice for ever, even if we were fully responsible at the time of choosing. I call the variant of luck egalitarianism that recognises this shift in temporal standpoint, and that responsibility can weaken, Dynamic Luck Egalitarianism (DLE). In conclusion I offer a preliminary discussion of what kind of policies DLE would support.
Abstract:
The interactions between shear-free turbulence in two regions (denoted as + and −) on either side of a nearly flat horizontal interface are shown here to be controlled by several mechanisms, which depend on the magnitudes of the ratios of the densities, ρ+/ρ−, the kinematic viscosities of the fluids, ν+/ν−, and the root mean square (r.m.s.) velocities of the turbulence, u0+/u0−, above and below the interface. This study focuses on gas–liquid interfaces, so that ρ+/ρ− ≪ 1, and on cases where turbulence is generated either above or below the interface, so that u0+/u0− is either very large or very small. It is assumed that vertical buoyancy forces across the interface are much larger than internal forces, so that the interface is nearly flat, and coupling between turbulence on either side of the interface is determined by viscous stresses. A formal linearized rapid-distortion analysis with viscous effects is developed by extending the previous study by Hunt & Graham (J. Fluid Mech., vol. 84, 1978, pp. 209–235) of shear-free turbulence near rigid plane boundaries. The physical processes accounted for in our model include both the blocking effect of the interface on normal components of the turbulence and the viscous coupling of the horizontal field across thin interfacial viscous boundary layers. The horizontal divergence in the perturbation velocity field in the viscous layer drives weak inviscid irrotational velocity fluctuations outside the viscous boundary layers, in a mechanism analogous to Ekman pumping. The analysis shows the following. (i) The blocking effects are similar to those near rigid boundaries on each side of the interface, but, through the action of the thin viscous layers above and below the interface, the horizontal and vertical velocity components differ from those near a rigid surface and are correlated or anti-correlated respectively. (ii) Because of the growth of the viscous layers on either side of the interface, the ratio uI/u0, where uI is the r.m.s. of the interfacial velocity fluctuations and u0 the r.m.s. of the homogeneous turbulence far from the interface, does not vary with time. If the turbulence is driven in the lower layer with ρ+/ρ− ≪ 1 and u0+/u0− ≪ 1, then uI/u0− ~ 1 when Re (= u0−L−/ν−) ≫ 1 and R (= (ρ−/ρ+)(ν−/ν+)^(1/2)) ≫ 1. If the turbulence is driven in the upper layer with ρ+/ρ− ≪ 1 and u0+/u0− ≫ 1, then uI/u0+ ~ 1/(1 + R). (iii) Nonlinear effects become significant over periods greater than Lagrangian time scales. When turbulence is generated in the lower layer, and the Reynolds number is high enough, motions in the upper viscous layer are turbulent. The horizontal vorticity tends to decrease, and the vertical vorticity of the eddies dominates their asymptotic structure. When turbulence is generated in the upper layer, and the Reynolds number is less than about 10^6–10^7, the fluctuations in the viscous layer do not become turbulent. Nonlinear processes at the interface increase the ratio uI/u0+ for sheared or shear-free turbulence in the gas above its linear value of uI/u0+ ~ 1/(1 + R) to (ρ+/ρ−)^(1/2) ~ 1/30 for air–water interfaces. This estimate agrees with the direct numerical simulation results of Lombardi, De Angelis & Banerjee (Phys. Fluids, vol. 8, no. 6, 1996, pp. 1643–1665). Because the linear viscous–inertial coupling mechanism is still significant, the eddy motions on either side of the interface have a similar horizontal structure, although their vertical structure differs.
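For readability, the key ratios and scalings quoted above can be restated in display form (a plain restatement of the abstract's relations, not a derivation):

```latex
R = \frac{\rho_-}{\rho_+}\left(\frac{\nu_-}{\nu_+}\right)^{1/2},
\qquad
\mathrm{Re} = \frac{u_{0-}\,L_-}{\nu_-},
\qquad
\frac{u_I}{u_{0-}} \sim 1 \ \ (\mathrm{Re} \gg 1,\ R \gg 1),
\qquad
\frac{u_I}{u_{0+}} \sim \frac{1}{1+R}.
```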
Abstract:
The “case for property” in the mixed-asset portfolio is a topic of continuing interest to practitioners and academics. Such an analysis typically is performed over a fixed period of time, and the optimum allocation to property is inferred from the weight assigned to property through the use of mean-variance analysis. It is well known, however, that the parameters used in the portfolio analysis problem are unstable through time. Thus, the weight proposed for property in one period is unlikely to be that found in another. Consequently, in order to assess the case for property more thoroughly, the impact of property in the mixed-asset portfolio is evaluated on a rolling basis over a long period of time. In this way we test whether the inclusion of property significantly improves the performance of an existing equity/bond portfolio all of the time. The main findings are that the inclusion of direct property in an existing equity/bond portfolio leads to increases or decreases in return, depending on the relative performance of property compared with the other asset classes. However, including property in the mixed-asset portfolio always leads to reductions in portfolio risk. Consequently, adding property to an equity/bond portfolio can lead to significant increases in risk-adjusted performance. Thus, if the decision to include direct property in the mixed-asset portfolio is based upon its diversification benefits, the answer is yes: there is a “case for property” all the time!
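As an illustration of the rolling evaluation the abstract describes (not the authors' exact methodology), one can compute the risk of a minimum-variance portfolio on a moving window and compare the series obtained with and without a property column in the return matrix. The 60-month window and monthly annualisation below are assumptions.

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance weights: w = inv(C) 1 / (1' inv(C) 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

def rolling_risk(returns, window=60):
    """Annualised risk of the minimum-variance portfolio on a rolling
    window over a (T, N) matrix of monthly asset returns; run once on
    equity/bond columns only and once with the property column added to
    compare portfolio risk with and without property."""
    risks = []
    for t in range(window, len(returns)):
        cov = np.cov(returns[t - window:t].T)
        w = min_variance_weights(cov)
        risks.append(np.sqrt(w @ cov @ w) * np.sqrt(12))  # monthly data
    return np.array(risks)
```

If the with-property risk series sits below the without-property series throughout, the rolling test supports the abstract's conclusion that the diversification benefit holds all the time.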