906 results for Generalized mean
Abstract:
The occurrence of extreme water levels along low-lying, highly populated and/or developed coastlines can lead to considerable loss of life and billions of dollars of damage to coastal infrastructure. It is therefore vitally important that the exceedance probabilities of extreme water levels are accurately evaluated to inform risk-based flood management, engineering and future land-use planning. This ensures that the risks of catastrophic structural failure due to under-design, and of expensive waste due to over-design, are minimised. This paper estimates for the first time present-day extreme water level exceedance probabilities around the whole coastline of Australia. A high-resolution depth-averaged hydrodynamic model has been configured for the Australian continental shelf region and forced with tidal levels from a global tidal model and meteorological fields from a global reanalysis to generate a 61-year hindcast of water levels. Output from this model has been successfully validated against measurements from 30 tide gauge sites. At each coastal grid point of the model, extreme value distributions have been fitted to the derived time series of annual maxima and of the several largest water levels each year to estimate exceedance probabilities. This provides a reliable estimate of water level probabilities around southern Australia, a region mainly affected by extra-tropical cyclones. However, as the meteorological forcing used only weakly includes the effects of tropical cyclones, extreme water level probabilities are underestimated around the western, northern and north-eastern Australian coastlines. In a companion paper we build on the work presented here to include tropical cyclone-induced surges more accurately in the estimation of extreme water levels.
The multi-decadal hindcast generated here has been used primarily to estimate extreme water level exceedance probabilities but could be used more widely in the future for a variety of other research and practical applications.
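The extreme value step described in this abstract — fitting a distribution to annual maxima and reading off exceedance probabilities — can be sketched with SciPy's generalized extreme value (GEV) tools. The 61-value series below is synthetic, standing in for hindcast output, and the parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic 61-year record of annual maximum water levels (m) --
# a stand-in for hindcast output, not real data.
rng = np.random.default_rng(42)
annual_maxima = genextreme.rvs(c=0.1, loc=1.8, scale=0.25,
                               size=61, random_state=rng)

# Fit a GEV distribution to the annual maxima.
shape, loc, scale = genextreme.fit(annual_maxima)

# Water level with a 1% annual exceedance probability (the "100-year" level).
level_100yr = genextreme.isf(0.01, shape, loc=loc, scale=scale)
```

The same fitted distribution can be inverted the other way to report the return period of a given water level.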
Abstract:
This article addresses the problem of estimating the Quality of Service (QoS) of a composite service given the QoS of the services participating in the composition. Previous solutions to this problem impose restrictions on the topology of the orchestration models, for example limiting their applicability to well-structured orchestration models. This article lifts these restrictions by proposing a method for aggregate QoS computation that deals with more general types of unstructured orchestration models. The applicability and scalability of the proposed method are validated using a collection of models from industrial practice.
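As a minimal illustration of aggregate QoS computation — the standard rules for well-structured compositions, not the unstructured-model method of the article — response times add in sequence, the slowest branch dominates in parallel, and exclusive branches contribute a probability-weighted mean:

```python
def seq(times):
    """Sequential composition: response times add up."""
    return sum(times)

def par(times):
    """Parallel (AND-split/AND-join): wait for the slowest branch."""
    return max(times)

def choice(times, probs):
    """Exclusive choice (XOR): expected time over the branch probabilities."""
    return sum(t * p for t, p in zip(times, probs))

# Hypothetical orchestration: task A, then (B parallel to C),
# then a 70/30 exclusive choice between D and E. Times in seconds.
total = seq([0.2, par([0.5, 0.8]), choice([0.1, 0.4], [0.7, 0.3])])
```

Other QoS attributes follow analogous rules (e.g. cost adds in all cases, reliability multiplies in sequence).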
Abstract:
In the finite element modelling of steel frames, external loads usually act along the members rather than at the nodes only. Conventionally, when a member is subjected to these transverse loads, they are converted to nodal forces acting at the ends of the elements into which the member is discretised, by either the lumped or the consistent nodal load approach. For a contemporary geometrically non-linear analysis in which the axial force in the member is large, accurate solutions are achieved by discretising the member into many elements, which can have unfavourable consequences for the efficacy of the method when analysing large steel frames. Herein, a numerical technique is proposed to include the transverse loading in the non-linear stiffness formulation for a single element, which is able to predict the structural response of steel frames including the effects of first-order member loads as well as the second-order coupling between the transverse load and the axial force in the member. This allows for a minimal discretisation of a frame for second-order analysis. In those conventional analyses that do include transverse member loading, prescribed stiffness matrices must be used for the plethora of specific loading patterns encountered. This paper shows, however, that the principle of superposition can be applied to the equilibrium condition, so that the form of the stiffness matrix remains unchanged and only the magnitude of the loading needs to be changed in the stiffness formulation. This novelty allows a very useful generalised stiffness formulation to be derived for a single higher-order element with arbitrary transverse loading patterns. The results are verified against analytical stability function studies, as well as against numerical results reported by independent researchers on several simple structural frames.
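The superposition point can be illustrated with the conventional first-order ingredients that the paper generalises: a textbook 2-node Euler–Bernoulli beam element whose stiffness matrix stays fixed while only the transverse load magnitude changes. This is a linear sketch with illustrative numbers, not the paper's higher-order non-linear element:

```python
import numpy as np

def beam_stiffness(E, I, L):
    """Textbook 4x4 stiffness of a 2-node Euler-Bernoulli beam element."""
    k = E * I / L**3
    return k * np.array([[12,    6*L,   -12,    6*L],
                         [6*L,   4*L**2, -6*L,  2*L**2],
                         [-12,  -6*L,    12,   -6*L],
                         [6*L,   2*L**2, -6*L,  4*L**2]])

def udl_consistent_load(q, L):
    """Work-equivalent (consistent) nodal loads for a uniform load q."""
    return q * np.array([L/2, L**2/12, L/2, -L**2/12])

E, I, L = 200e9, 4e-6, 2.0   # illustrative steel section and span
K = beam_stiffness(E, I, L)

def tip_deflection(q):
    # Cantilever: clamp node 1 (first two DOFs), solve for the free end.
    u = np.linalg.solve(K[2:, 2:], udl_consistent_load(q, L)[2:])
    return u[0]

# The stiffness matrix never changes; only the load magnitude does,
# so responses to different load magnitudes superpose exactly.
v1, v2, v12 = tip_deflection(1e3), tip_deflection(2e3), tip_deflection(3e3)
```

For this linear case the one-element solution reproduces the exact cantilever tip deflection qL⁴/(8EI); the paper's contribution is retaining this single-element economy when the axial force makes the problem non-linear.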
Abstract:
This paper seeks to explain how the selective securitization of infectious disease arose, and to analyze the policy successes from this move. It is argued that despite some success, such as the revised International Health Regulations (IHR) in 2005, there remain serious deficiencies in the political outputs from the securitization of infectious disease.
Abstract:
In this article we study the azimuthal shear deformations in a compressible isotropic elastic material. This class of deformations involves an azimuthal displacement as a function of the radial and axial coordinates. The equilibrium equations are formulated in terms of the Cauchy-Green strain tensors, which form an overdetermined system of partial differential equations for which solutions do not exist in general. By means of a Legendre transformation, necessary and sufficient conditions for the material to support this deformation are obtained explicitly, in the sense that every solution to the azimuthal equilibrium equation will satisfy the remaining two equations. Additionally, we show how these conditions are sufficient to support all currently known deformations that locally reduce to simple shear. These conditions are then expressed in terms of the invariants of both the Cauchy-Green strain and stretch tensors. Several classes of strain energy functions for which this deformation can be supported are studied. For certain boundary conditions, exact solutions to the equilibrium equations are obtained. © 2005 Society for Industrial and Applied Mathematics.
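For concreteness, this class of deformations is conventionally written in cylindrical coordinates (the notation below is an assumption about the standard form, not quoted from the article) as

```latex
% Azimuthal shear: reference point (R, \Theta, Z) maps to (r, \theta, z)
r = R, \qquad \theta = \Theta + g(R, Z), \qquad z = Z,
```

where g(R, Z) is the azimuthal displacement depending on the radial coordinate R and the axial coordinate Z.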
Abstract:
We consider a discrete agent-based model on a one-dimensional lattice, where each agent occupies L sites and attempts movements over a distance of d lattice sites. Agents obey a strict simple exclusion rule. A discrete-time master equation is derived using a mean-field approximation and careful probability arguments. In the continuum limit, nonlinear diffusion equations that describe the average agent occupancy are obtained. Averaged discrete simulation data are generated and shown to compare very well with the solution to the derived nonlinear diffusion equations. This framework allows us to approach a lattice-free result using all the advantages of lattice methods. Since different cell types have different shapes and speeds of movement, this work offers insight into population-level behavior of collective cellular motion.
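A minimal sketch of such a simulation for the simplest case (agent size L = 1, jump distance d, hypothetical parameter values) shows the strict exclusion rule in action; the master-equation derivation and continuum limit of the paper are not reproduced here:

```python
import numpy as np

def simulate_exclusion(n_sites=200, n_agents=50, steps=5000, d=1, seed=0):
    """Random sequential updates of size-1 agents on a 1-D lattice.
    A move of distance d (left or right, equal probability) succeeds
    only if the target site is on the lattice and empty."""
    rng = np.random.default_rng(seed)
    lattice = np.zeros(n_sites, dtype=int)
    lattice[rng.choice(n_sites, size=n_agents, replace=False)] = 1
    for _ in range(steps):
        i = rng.choice(np.flatnonzero(lattice))        # pick a random agent
        j = i + (d if rng.random() < 0.5 else -d)      # attempted move
        if 0 <= j < n_sites and lattice[j] == 0:       # strict exclusion
            lattice[i], lattice[j] = 0, 1
    return lattice

occupancy = simulate_exclusion()
```

Averaging such occupancy profiles over many realisations is what gets compared against the solution of the derived nonlinear diffusion equation.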
Abstract:
In this paper I propose that identity is momentary, fluid, and multiple while simultaneously providing us with a sense of sameness and continuity. Building on Valsiner’s ideas about human sense-making I suggest that we can reasonably deal with the multiplicity/unity paradox if we conceive of this process as resulting in the construction of a fuzzy field of hyper-generalized personal sense, which ordinarily functions as an implicit and unspeakable background of our everyday functioning, while being constantly re-created through momentary instances of foregrounded and explicit identity-dialogues. I illustrate the ideas put forward in the paper by analysing a case of a young woman experiencing a change in her being. Finally, in an attempt to illustrate and further develop the case I introduce a metaphor of carpet-weaving as an apposite image for thinking about identity as a process of a multiple and fragmented, yet also a united and constant being.
Abstract:
Aim Large-scale patterns linking energy availability, biological productivity and diversity form a central focus of ecology. Despite evidence that the activity and abundance of animals may be limited by climatic variables associated with regional biological productivity (e.g. mean annual precipitation and annual actual evapotranspiration), it is unclear whether plant–granivore interactions are themselves influenced by these climatic factors across broad spatial extents. We evaluated whether climatic conditions that are known to alter the abundance and activity of granivorous animals also affect rates of seed removal. Location Eleven sites across temperate North America. Methods We used a common protocol to assess the removal of the same seed species (Avena sativa) over a 2-day period. Model selection via the Akaike information criterion was used to determine a set of candidate binomial generalized linear mixed models that evaluated the relationship between local climatic data and post-dispersal seed predation. Results Annual actual evapotranspiration was the single best predictor of the proportion of seeds removed. Annual actual evapotranspiration and mean annual precipitation were both positively related to mean seed removal and were included in four and three of the top five models, respectively. Annual temperature range was also positively related to seed removal and was an explanatory variable in three of the top four models. Main conclusions Our work provides the first evidence that energy and precipitation, which are known to affect consumer abundance and activity, also translate to strong, predictable patterns of seed predation across a continent. More generally, these findings suggest that future changes in temperature and precipitation could have widespread consequences for plant species composition in grasslands, through impacts on plant recruitment.
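The model-selection step can be sketched with ordinary binomial regressions fitted by maximum likelihood and compared by AIC — plain fixed-effects models on synthetic data, not the mixed models or field data of the study:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(1)
# Synthetic data: removal probability rises with a standardised
# evapotranspiration covariate (illustrative only).
aet = rng.normal(size=220)                    # e.g. 11 sites x 20 seed depots
removed = rng.binomial(50, expit(-0.5 + 1.2 * aet))   # removals out of 50 seeds
offered = np.full_like(removed, 50)

def neg_loglik(beta, X):
    """Binomial negative log-likelihood for a logistic model X @ beta."""
    p = np.clip(expit(X @ beta), 1e-12, 1 - 1e-12)
    return -(removed * np.log(p) + (offered - removed) * np.log(1 - p)).sum()

X0 = np.ones((aet.size, 1))                      # intercept-only model
X1 = np.column_stack([np.ones_like(aet), aet])   # intercept + AET model

def aic(X):
    fit = minimize(neg_loglik, np.zeros(X.shape[1]), args=(X,))
    return 2 * X.shape[1] + 2 * fit.fun           # AIC = 2k - 2 ln L

aic0, aic1 = aic(X0), aic(X1)   # lower AIC = better-supported model
```

In the study this comparison runs over a candidate set of mixed models with site as a random effect; the AIC ranking logic is the same.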
Abstract:
Objective. This study investigated cognitive functioning among older adults with physical debility not attributable to an acute injury or neurological condition who were receiving subacute inpatient physical rehabilitation. Design. A cohort investigation with assessments at admission and discharge. Setting. Three geriatric rehabilitation hospital wards. Participants. Consecutive rehabilitation admissions () following acute hospitalization (study criteria excluded orthopaedic, neurological, or amputation admissions). Intervention. Usual rehabilitation care. Measurements. The Functional Independence Measure (FIM) Cognitive and Motor items. Results. A total of 704 (86.5%) participants (mean age = 76.5 years) completed both assessments. Significant improvement in FIM Cognitive items (z-score range 3.93–8.74, all ) and FIM Cognitive total score (z-score = 9.12, ) occurred, in addition to improvement in FIM Motor performance. A moderate positive correlation existed between change in Motor and Cognitive scores (Spearman’s rho = 0.41). Generalized linear modelling indicated that better cognition at admission (coefficient = 0.398, ) and younger age (coefficient = −0.280, ) were predictive of improvement in Motor performance. Younger age (coefficient = −0.049, ) was predictive of improvement in FIM Cognitive score. Conclusions. Improvement in cognitive functioning was observed in addition to motor function improvement among this population. Causal links cannot be drawn without further research.
Abstract:
The estimation of the critical gap has been an issue since the 1970s, when gap acceptance was introduced to evaluate the capacity of unsignalized intersections. The critical gap is the shortest gap that a driver is assumed to accept. A driver’s critical gap cannot be measured directly, and a number of techniques have been developed to estimate the mean critical gap of a sample of drivers. This paper reviews the ability of the Maximum Likelihood technique and the Probability Equilibrium Method (PEM) to predict the mean and standard deviation of the critical gap, using a simulation of 100 drivers repeated 100 times for each flow condition. The Maximum Likelihood method gave consistent and unbiased estimates of the mean critical gap, whereas the PEM had a significant bias that was dependent on the flow in the priority stream. Both methods were reasonably consistent, although the Maximum Likelihood method was slightly better. If drivers are inconsistent, then again the Maximum Likelihood method is superior. A criticism levelled at the Maximum Likelihood method is that a distribution of the critical gap has to be assumed; it was shown that this does not significantly affect its ability to predict the mean and standard deviation of the critical gaps. Finally, the Maximum Likelihood method can produce reasonable estimates with observations from as few as 25 to 30 drivers. A spreadsheet procedure for using the Maximum Likelihood method is provided in this paper. The PEM can be improved if the maximum rejected gap is used.
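The Maximum Likelihood idea can be sketched as follows: under an assumed critical gap distribution (lognormal here), each driver contributes the probability that their critical gap lies between the largest gap they rejected and the gap they accepted. The data below are simulated and the formulation simplified; this is not the spreadsheet procedure provided in the paper:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

rng = np.random.default_rng(7)
# Simulate 100 drivers whose true critical gaps are lognormal (mean ~4 s).
t_crit = rng.lognormal(np.log(4.0), 0.3, size=100)
r = t_crit * rng.uniform(0.6, 1.0, size=100)   # largest rejected gap (below)
a = t_crit * rng.uniform(1.0, 1.5, size=100)   # accepted gap (above)

def neg_loglik(theta):
    """Each driver's likelihood is F(accepted) - F(largest rejected)."""
    m, s = theta
    if s <= 0:
        return 1e9                              # keep the scale positive
    F = lognorm.cdf
    lik = np.clip(F(a, s, scale=np.exp(m)) - F(r, s, scale=np.exp(m)),
                  1e-12, None)
    return -np.log(lik).sum()

res = minimize(neg_loglik, x0=[np.log(4.0), 0.5], method="Nelder-Mead")
m_hat, s_hat = res.x
mean_critical_gap = np.exp(m_hat + s_hat**2 / 2)   # mean of fitted lognormal
```

As the paper notes, the assumed distribution has little effect on the recovered mean and standard deviation.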
Abstract:
The dynamic nature of tissue temperature and of subcutaneous properties such as blood flow, fatness, and metabolic rate leads to variation in local skin temperature. Therefore, we investigated the effects of using multiple regions of interest when calculating weighted mean skin temperature from four local sites. Twenty-six healthy males completed a single trial in a thermoneutral laboratory (mean ± SD: 24.0 (1.2) °C; 56 (8)% relative humidity; < 0.1 m/s air speed). Mean skin temperature was calculated from four local sites (neck, scapula, hand and shin) in accordance with International Standards using digital infrared thermography. A 50 × 50 mm square, defined by strips of aluminium tape, created six unique regions of interest at each of the local sites: the top-left, top-right, bottom-left, bottom-right and centre quadrants, and the entire region of interest. The largest potential error in weighted mean skin temperature was calculated using a combination of (a) the coolest and (b) the warmest regions of interest at each of the local sites. Significant differences between the six regions of interest were observed at the neck (P < 0.01), scapula (P < 0.001) and shin (P < 0.05), but not at the hand (P = 0.482). The largest difference (± SEM) at each site was as follows: neck 0.2 (0.1) °C; scapula 0.2 (0.0) °C; shin 0.1 (0.0) °C; and hand 0.1 (0.1) °C. The largest potential error (mean ± SD) in weighted mean skin temperature was 0.4 (0.1) °C (P < 0.001), and the associated 95% limits of agreement for these differences were 0.2 to 0.5 °C. Although we observed differences in local and mean skin temperature depending on the region of interest employed, these differences were minimal and are not considered physiologically meaningful.
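The weighted mean skin temperature calculation, and the coolest-versus-warmest error bound, reduce to a small computation. The four-site weighting coefficients below are those commonly attributed to ISO 9886, and the temperatures are hypothetical; both are assumptions, not values taken from the study:

```python
# Four-site weighting coefficients attributed to ISO 9886 (assumed).
WEIGHTS = {"neck": 0.28, "scapula": 0.28, "hand": 0.16, "shin": 0.28}

def weighted_mean_skin_temp(temps):
    """temps: dict mapping site name to skin temperature in deg C."""
    return sum(WEIGHTS[site] * temps[site] for site in WEIGHTS)

# Coolest vs warmest region of interest at each site (hypothetical values).
coolest = {"neck": 33.1, "scapula": 33.0, "hand": 32.5, "shin": 31.9}
warmest = {"neck": 33.3, "scapula": 33.2, "hand": 32.6, "shin": 32.0}
largest_error = (weighted_mean_skin_temp(warmest)
                 - weighted_mean_skin_temp(coolest))
```

Because the weights sum to one, per-site differences of 0.1-0.2 °C propagate into a weighted-mean error of the same order, consistent with the small errors reported above.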
Abstract:
Purpose: Skin temperature assessment has historically been undertaken with conductive devices affixed to the skin. With the development of technology, infrared devices are increasingly utilised in the measurement of skin temperature. Therefore, our purpose was to evaluate the agreement between four skin temperature devices at rest, during exercise in the heat, and during recovery. Methods: Mean skin temperature (T̅sk) was assessed in thirty healthy males during 30 min of rest (24.0 ± 1.2 °C, 56 ± 8%), a 30 min cycle in the heat (38.0 ± 0.5 °C, 41 ± 2%), and 45 min of recovery (24.0 ± 1.3 °C, 56 ± 9%). T̅sk was assessed at four sites using two conductive devices (thermistors, iButtons) and two infrared devices (infrared thermometer, infrared camera). Results: Bland–Altman plots demonstrated mean bias ± limits of agreement between the thermistors and iButtons as follows (rest, exercise, recovery): -0.01 ± 0.04, 0.26 ± 0.85, -0.37 ± 0.98 °C; thermistors and infrared thermometer: 0.34 ± 0.44, -0.44 ± 1.23, -1.04 ± 1.75 °C; thermistors and infrared camera (rest, recovery): 0.83 ± 0.77, 1.88 ± 1.87 °C. Pairwise comparisons of T̅sk found significant differences (p < 0.05) between the thermistors and both infrared devices during resting conditions, and significant differences between the thermistors and all other devices tested during exercise in the heat and recovery. Conclusions: These results indicate poor agreement between conductive and infrared devices at rest, during exercise in the heat, and during subsequent recovery. Infrared devices may not be suitable for monitoring T̅sk in the presence of, or following, metabolically and environmentally induced heat stress.
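The agreement statistics reported here follow the standard Bland–Altman calculation (mean bias ± 1.96 SD of the paired differences), which a short function makes explicit; the paired temperatures below are hypothetical, not the study's data:

```python
import numpy as np

def bland_altman(x, y):
    """Mean bias and 95% limits of agreement between paired measurements."""
    d = np.asarray(x, float) - np.asarray(y, float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# Hypothetical paired skin temperatures (deg C) from two devices.
thermistor = [33.0, 33.4, 34.1, 32.8, 33.7]
infrared = [32.9, 33.6, 33.8, 32.9, 33.4]
bias, (lo, hi) = bland_altman(thermistor, infrared)
```

A wide limits-of-agreement band, as seen for the infrared devices above, is what signals poor agreement even when the mean bias is modest.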
Abstract:
The mean shift tracker has achieved great success in visual object tracking owing to its efficiency and nonparametric nature. However, it is still difficult for the tracker to handle scale changes of the object. In this paper, we combine a scale-adaptive approach with the mean shift tracker. Firstly, the target in the current frame is located by the mean shift tracker. Then, a feature point matching procedure is employed to obtain matched pairs of feature points between the target regions in the current frame and the previous frame. We employ the FAST-9 corner detector and the HOG descriptor for the feature matching. Finally, from the acquired matched pairs of feature points, the affine transformation between the target regions in the two frames is solved to obtain the current scale of the target. Experimental results show that the proposed tracker gives satisfying results when the scale of the target changes, while maintaining good efficiency.
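The scale-recovery step can be sketched with a simplification: rather than solving the full affine transformation described in the paper, the scale change between frames can be estimated as the median ratio of pairwise distances among matched points (a similarity-transform assumption, with hypothetical coordinates):

```python
import numpy as np

def scale_from_matches(pts_prev, pts_curr):
    """Estimate the target's scale change from matched feature points as
    the median ratio of pairwise distances -- a similarity-transform
    sketch, not the full affine solve of the paper."""
    prev = np.asarray(pts_prev, float)
    curr = np.asarray(pts_curr, float)
    ratios = []
    for i in range(len(prev)):
        for j in range(i + 1, len(prev)):
            d_prev = np.linalg.norm(prev[i] - prev[j])
            if d_prev > 1e-9:                    # skip coincident points
                ratios.append(np.linalg.norm(curr[i] - curr[j]) / d_prev)
    return float(np.median(ratios))

# Hypothetical matched corner locations; the target grew by a factor of 1.5.
prev_pts = [[0, 0], [40, 0], [0, 40], [20, 20]]
curr_pts = [[x * 1.5, y * 1.5] for x, y in prev_pts]
scale = scale_from_matches(prev_pts, curr_pts)
```

The median makes the estimate robust to a few mismatched pairs, which matters when the FAST/HOG matching step produces outliers.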