904 results for Bounds
Abstract:
Over 800 cities globally now offer bikeshare programs. One of their purported benefits is increased physical activity. Implicit in this claim is that bikeshare replaces sedentary modes of transport, particularly car use. This paper estimates the median changes in physical activity levels as a result of bikeshare in the cities of Melbourne, Brisbane, Washington, D.C., London, and Minneapolis/St. Paul. This study is the first known multi-city evaluation of the active travel impacts of bikeshare programs. To perform the analysis, data on mode substitution (i.e., the modes that bikeshare replaces) were used to determine the extent of the shift from sedentary to active transport modes (e.g., when a car trip is replaced by bikeshare). Potentially offsetting these gains, reductions in physical activity when walking trips are replaced by bikeshare were also estimated. Finally, a Markov chain Monte Carlo analysis was conducted to estimate confidence bounds on the estimated impacts on active travel, given uncertainties in the data sources. The results indicate that on average 60% of bikeshare trips replace sedentary modes of transport (ranging from 42% in Minneapolis/St. Paul to 67% in Brisbane). When bikeshare replaces a walking trip, there is a reduction in active travel time, because walking a given distance takes longer than cycling. Considering the active travel balance sheet for the cities included in this analysis, bikeshare activity in 2012 had an overall positive impact on active travel time, ranging from an additional 1.4 million minutes of active travel for the Minneapolis/St. Paul program to just over 74 million minutes for the London program. The analytical approach adopted to estimate bikeshare's impact on active travel may serve as the basis for future bikeshare evaluations or feasibility studies.
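The uncertainty-propagation step described here lends itself to a simulation sketch. The following is a plain Monte Carlo version (the abstract mentions Markov chain Monte Carlo; this simpler resampling scheme illustrates the same idea of turning uncertain substitution shares and travel speeds into bounds on net active travel). All inputs below are invented for illustration, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical inputs, not the paper's data: annual trip count,
# per-trip distances (km), priors on substitution shares, and speeds.
n_trips = 100_000
trip_km = rng.gamma(shape=2.0, scale=1.5, size=n_trips)

def simulate_net_active_minutes(n_draws=1_000):
    """Propagate substitution-share uncertainty to net active travel."""
    totals = np.empty(n_draws)
    for i in range(n_draws):
        p_sedentary = rng.beta(60, 40)        # ~60% replace car/transit
        p_walk = rng.beta(25, 75)             # ~25% replace walking
        p = np.array([p_sedentary, p_walk,
                      max(0.0, 1.0 - p_sedentary - p_walk)])
        p /= p.sum()                          # guard against p summing > 1
        cycle_kmh = rng.normal(15.0, 1.5)     # assumed cycling speed
        walk_kmh = rng.normal(4.8, 0.4)       # assumed walking speed
        mode = rng.choice(3, size=n_trips, p=p)
        gained = 60 * trip_km[mode == 0] / cycle_kmh          # new cycling
        lost = 60 * trip_km[mode == 1] * (1 / walk_kmh - 1 / cycle_kmh)
        totals[i] = gained.sum() - lost.sum()
    return totals

totals = simulate_net_active_minutes()
lo, hi = np.percentile(totals, [2.5, 97.5])
print(f"net gain: {totals.mean():,.0f} active minutes "
      f"(95% bounds {lo:,.0f} to {hi:,.0f})")
```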
Abstract:
The growing call for physical educators to move beyond the bounds of performance has been a powerful discourse. However, it is a discourse that has tended to be heavy on theory but light on practical application. This paper discusses recent work in the area of skill acquisition and what it might mean for pedagogical practices in physical education. The acquisition of motor skill has traditionally been a core objective for physical educators, and there has been a perception that child-centred pedagogies have failed to meet this traditional yardstick. However, drawing on the work of Rovegno and Kirk (1995) and Langley (1995; 1997), and making links with current work in the motor learning area, it is possible to show that skill acquisition is not necessarily compromised by child-centred pedagogy. Indeed, working beyond Mosston's discovery threshold and using models such as Games for Understanding can provide deeper skill-learning experiences while also being socially just.
Abstract:
Fossils provide the principal basis for temporal calibrations, which are critical to the accuracy of divergence dating analyses. Translating fossil data into minimum and maximum bounds for calibrations is the most important, and often least appreciated, step of divergence dating. Properly justified calibrations require the synthesis of phylogenetic, paleontological, and geological evidence and can be difficult for non-specialists to formulate. The dynamic nature of the fossil record (e.g., new discoveries, taxonomic revisions, updates of global or local stratigraphy) requires that calibration data be updated continually lest they become obsolete. Here, we announce the Fossil Calibration Database (http://fossilcalibrations.org), a new open-access resource providing vetted fossil calibrations to the scientific community. Calibrations accessioned into this database are based on individual fossil specimens and follow best practices for phylogenetic justification and geochronological constraint. The associated Fossil Calibration Series, a calibration-themed publication series at Palaeontologia Electronica, will serve as one key pipeline for peer-reviewed calibrations to enter the database.
Abstract:
With the introduction of relaxed-clock molecular dating methods, the role of fossil calibration has expanded from providing a timescale, to also informing the models for molecular rate variation across the phylogeny. Here I suggest fossil calibration bounds for four mammal clades, Monotremata (platypus and echidnas), Macropodoidea (kangaroos and potoroos), Caviomorpha-Phiomorpha (South American and African hystricognath rodents), and Chiroptera (bats). In each case I consider sources of uncertainty in the fossil record and provide a molecular dating analysis to examine how the suggested calibration priors are further informed by other mammal fossil calibrations and molecular data.
Abstract:
This study investigates friendships between gay sales associates and heterosexual female customers in luxury retail settings. By employing grounded theory methodology, the study integrates theories and findings from diverse literature streams into an original conceptual framework to illustrate the resources gay sales associates and straight female customers receive from and provide to each other during retail exchanges. The study explains why gay male–straight female friendships are uniquely suited for luxury consumption settings. Female customers characterize their friendships with gay sales associates as providing honesty, security, trust, and comfort, which stems from the absence of sexual interest and a lack of inter-female competition. Gay sales associates receive acceptance for who they are and for their displays of unconventional masculinity in retail settings. They also obtain a temporary right from their female customers, a so-called mandate of privacy, which permits both parties to ignore the bounds of modesty and accept a degree of intimacy. Such intimacy facilitates transactions that require both personalization and customer–employee closeness, such as the selling of high-end apparel, accessories, and jewelry.
Abstract:
Objective: The aim of this study was to develop a model capable of predicting variability in the mental workload experienced by frontline operators under routine and nonroutine conditions. Background: Excess workload is a risk that needs to be managed in safety-critical industries. Predictive models are needed to manage this risk effectively yet are difficult to develop. Much of the difficulty stems from the fact that workload prediction is a multilevel problem. Method: A multilevel workload model was developed in Study 1 with data collected from an en route air traffic management center. Dynamic density metrics were used to predict variability in workload within and between work units while controlling for variability among raters. The model was cross-validated in Studies 2 and 3 with the use of a high-fidelity simulator. Results: Reported workload generally remained within the bounds of the 90% prediction interval in Studies 2 and 3. Workload crossed the upper bound of the prediction interval only under nonroutine conditions. Qualitative analyses suggest that nonroutine events caused workload to cross the upper bound of the prediction interval because the controllers could not manage their workload strategically. Conclusion: The model performed well under both routine and nonroutine conditions and over different patterns of workload variation. Application: Workload prediction models can be used to support both strategic and tactical workload management. Strategic uses include the analysis of historical and projected workflows and the assessment of staffing needs. Tactical uses include the dynamic reallocation of resources to meet changes in demand.
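The core mechanic here, flagging reported workload that escapes a 90% prediction interval, can be sketched briefly. The paper fits a multilevel model with rater effects; the sketch below uses plain OLS on invented dynamic-density metrics just to show the interval arithmetic, so every number and variable name is illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical dynamic-density metrics (not the paper's data): total
# aircraft count and number of climbing/descending aircraft in a sector.
n = 200
X = pd.DataFrame({
    "aircraft": rng.integers(5, 30, n),
    "transitioning": rng.integers(0, 10, n),
})
workload = (1.0 + 0.15 * X["aircraft"] + 0.3 * X["transitioning"]
            + rng.normal(0, 0.8, n))

model = sm.OLS(workload, sm.add_constant(X)).fit()

# 90% prediction interval for a new traffic situation.
new = pd.DataFrame({"aircraft": [22], "transitioning": [6]})
pred = model.get_prediction(sm.add_constant(new, has_constant="add"))
frame = pred.summary_frame(alpha=0.10)
lower = frame["obs_ci_lower"].iloc[0]
upper = frame["obs_ci_upper"].iloc[0]

reported = 6.2  # a controller's reported workload for this situation
flag = "within bounds" if lower <= reported <= upper else "nonroutine?"
print(f"90% PI: [{lower:.2f}, {upper:.2f}] -> {flag}")
```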
Abstract:
We consider the problem of controlling a Markov decision process (MDP) with a large state space, so as to minimize average cost. Since it is intractable to compete with the optimal policy for large-scale problems, we pursue the more modest goal of competing with a low-dimensional family of policies. We use the dual linear programming formulation of the MDP average cost problem, in which the variable is a stationary distribution over state-action pairs, and we consider a neighborhood of a low-dimensional subset of the set of stationary distributions (defined in terms of state-action features) as the comparison class. We propose a technique based on stochastic convex optimization and give bounds that show that the performance of our algorithm approaches the best achievable by any policy in the comparison class. Most importantly, this result depends on the size of the comparison class, but not on the size of the state space. Preliminary experiments show the effectiveness of the proposed algorithm in a queuing application.
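The dual LP formulation referenced here is concrete enough to sketch. The code below solves the exact dual LP on a toy MDP (tractable only at this tiny scale), rather than the paper's stochastic-optimization approach over a feature-restricted class; all transition probabilities and costs are synthetic.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Tiny synthetic MDP: S states, A actions (numbers are illustrative).
S, A = 3, 2
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] = next-state dist
c = rng.uniform(0, 1, size=(S, A))           # per-step costs

# Dual LP over stationary state-action distributions mu(s, a):
#   minimize  sum_{s,a} mu(s,a) c(s,a)
#   s.t.      mu >= 0,  sum mu = 1,
#             sum_a mu(s',a) = sum_{s,a} mu(s,a) P(s'|s,a)  for all s'.
n = S * A
A_eq = np.zeros((S + 1, n))
b_eq = np.zeros(S + 1)
for sp in range(S):                          # flow-conservation rows
    for s in range(S):
        for a in range(A):
            A_eq[sp, s * A + a] = (sp == s) - P[s, a, sp]
A_eq[S, :] = 1.0                             # normalization row
b_eq[S] = 1.0

res = linprog(c.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
mu = res.x.reshape(S, A)
policy = mu / mu.sum(axis=1, keepdims=True)  # induced stationary policy
print("optimal average cost:", res.fun)
print("policy (rows = states):\n", policy)
```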
Abstract:
We study linear control problems with quadratic losses and adversarially chosen tracking targets. We present an efficient algorithm for this problem and show that, under standard conditions on the linear system, its regret with respect to an optimal linear policy grows as O(log² T), where T is the number of rounds of the game. We also study a problem with adversarially chosen transition dynamics; we present an exponentially weighted average algorithm for this problem, and we give regret bounds that grow as O(√T).
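The exponentially weighted average family mentioned above is easy to illustrate. The sketch below runs a generic multiplicative-weights forecaster over a finite grid of candidate actions against quadratic tracking losses; it is not the paper's algorithm, and the grid, horizon, and learning rate are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

K, T = 50, 2000
experts = np.linspace(-1.0, 1.0, K)     # candidate fixed actions
eta = np.sqrt(8 * np.log(K) / T)        # standard learning rate
log_w = np.zeros(K)
cum_loss = 0.0
cum_expert_loss = np.zeros(K)

for t in range(T):
    target = rng.uniform(-0.5, 0.5)     # adversarial tracking target
    w = np.exp(log_w - log_w.max())     # normalize in log-space
    w /= w.sum()
    action = w @ experts                # weighted-average prediction
    cum_loss += (action - target) ** 2
    losses = (experts - target) ** 2    # quadratic loss per expert
    cum_expert_loss += losses
    log_w -= eta * losses               # multiplicative-weights update

regret = cum_loss - cum_expert_loss.min()
print(f"regret vs. best fixed expert after T={T}: {regret:.2f} "
      "(expected to scale like sqrt(T log K))")
```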
Abstract:
South Africa is an emerging and industrializing economy which is experiencing remarkable progress. We contend that amidst these developments, the roles of energy, trade openness and financial development are critical. In this article, we revisit the pivotal role of these factors. We use the ARDL bounds-testing approach [72], the Bayer and Hanck [11] cointegration technique, and an extended Cobb–Douglas framework to examine the long-run association with output per worker over the sample period 1971–2011. The results support a long-run association between output per worker, capital per worker and the shift parameters. The short-run elasticity coefficients are as follows: energy (0.24), trade (0.07), financial development (−0.03). In the long run, the elasticity coefficients are: trade openness (0.05), energy (0.29), and financial development (−0.04). In both the short run and the long run, we note that the post-2000 period has a marginal positive effect on the economy. The Toda and Yamamoto [91] Granger causality results show a unidirectional causality from capital stock and energy consumption to output, and from capital stock to trade openness; a bidirectional causality between trade openness and output; and an absence (neutrality) of causality between financial development and output, indicating that these two variables evolve independently of each other.
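The ARDL bounds test reduces to an F-test on lagged level terms in an unrestricted error-correction model, which is simple to sketch. The code below builds that regression by hand on synthetic stand-ins for the paper's series (statsmodels also ships a dedicated UECM implementation, but the manual version keeps the mechanics visible); the Pesaran et al. (2001) critical bounds come from published tables and are deliberately not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Synthetic stand-ins (illustrative only): output per worker y, energy
# use e, and trade openness tr, with y cointegrated with e and tr.
T = 200
e = np.cumsum(rng.normal(0, 1, T))              # I(1) regressor
tr = np.cumsum(rng.normal(0, 1, T))             # I(1) regressor
y = 0.3 * e + 0.1 * tr + rng.normal(0, 0.5, T)  # cointegrated by design

df = pd.DataFrame({"y": y, "e": e, "tr": tr})
df = df.assign(dy=df.y.diff(), de=df.e.diff(), dtr=df.tr.diff(),
               y1=df.y.shift(), e1=df.e.shift(), tr1=df.tr.shift()).dropna()

# Unrestricted error-correction model behind the ARDL bounds test:
# regress Delta y_t on lagged levels plus short-run difference terms.
uecm = smf.ols("dy ~ y1 + e1 + tr1 + de + dtr", data=df).fit()

# Bounds F-test: joint significance of the lagged level terms; compare
# the statistic with the I(0)/I(1) critical bounds for k = 2 regressors.
ftest = uecm.f_test("y1 = 0, e1 = 0, tr1 = 0")
print(f"bounds F-statistic: {float(np.squeeze(ftest.fvalue)):.2f}")
```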
Abstract:
The Taylor coefficients c and d of the EM form factor of the pion are constrained using analyticity, knowledge of the phase of the form factor in the time-like region 4m_π² ≤ t ≤ t_in, and its value at one space-like point, using as input the (g − 2) of the muon. This is achieved using the technique of Lagrange multipliers, which gives a transparent expression for the corresponding bounds. We present a detailed study of the sensitivity of the bounds to the choice of time-like phase and to errors present in the space-like data, taken from recent experiments. We find that our results constrain c stringently. We compare our results with those in the literature and find agreement with the chiral perturbation-theory results for c. We obtain d ∼ O(10) GeV⁻⁶ when c is set to the chiral perturbation-theory values.
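Schematically, the Lagrange-multiplier setup behind such bounds can be written as follows. The notation here is generic and assumed for illustration (L is a linear functional extracting the Taylor coefficient, I is the quadratic analyticity integral, I_0 the (g − 2) input); it is not the paper's exact formulation.

```latex
% Schematic constrained extremization: extremize c = L[F] over form
% factors F obeying the quadratic analyticity constraint I[F] <= I_0.
\mathcal{L}[F;\lambda] \;=\; L[F] \;-\; \lambda\,\bigl(I[F]-I_0\bigr),
\qquad
\frac{\delta\mathcal{L}}{\delta F}=0,\quad I[F]=I_0
\;\;\Longrightarrow\;\;
c_{\min}\;\le\; c \;\le\; c_{\max}.
```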
Abstract:
Time-frequency analyses of various simulated and experimental signals due to elastic wave scattering from damage are performed using the wavelet transform (WT) and the Hilbert-Huang transform (HHT), and their performances are compared in the context of quantifying damage. The spectral finite element method is employed for numerical simulation of wave scattering. An analytical study is carried out to examine the effects of higher-order damage parameters on the wave reflected from a damage site. Based on this study, error bounds are computed for the signals in the spectral domain and also in the time-frequency domain. It is shown how such an error bound can provide an estimate of the error in modelling wave propagation in a structure with damage. Measures of damage based on the WT and HHT are derived to quantify the damage information hidden in the signal. The aim of this study is to obtain detailed insights into the problems of (1) identifying localised damage, (2) characterizing the dispersion of multi-frequency non-stationary signals after they interact with various types of damage, and (3) quantifying damage. Sensitivity analysis of the scattered-wave signal based on its time-frequency representation helps to correlate the variation of the damage index measures with damage parameters such as damage size and material degradation factors.
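A WT-based damage index of the kind described is easy to sketch on a synthetic signal. The example below fabricates an incident tone burst plus a weaker delayed reflection from a hypothetical damage site, takes a continuous wavelet transform with PyWavelets, and forms a simple energy-ratio index; only the wavelet half is shown (no HHT), and every waveform parameter is invented.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(4)

# Synthetic scattered-wave signal (illustrative, not simulation output).
fs = 1.0e6                            # sampling rate (Hz)
t = np.arange(0, 2e-3, 1 / fs)

def tone(t0):
    """Gaussian-modulated 100 kHz burst centred at time t0."""
    return (np.exp(-((t - t0) / 5e-5) ** 2)
            * np.sin(2 * np.pi * 100e3 * (t - t0)))

signal = tone(3e-4) + 0.25 * tone(9e-4) + 0.02 * rng.normal(size=t.size)

# Continuous wavelet transform; scales span the 100 kHz carrier region.
scales = np.arange(5, 60)
coefs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)

# Simple WT-based damage index: scalogram energy in the window where the
# reflection is expected, normalised by the incident-wave energy.
energy = np.abs(coefs) ** 2
reflected = (t > 7e-4) & (t < 11e-4)
incident = (t > 1e-4) & (t < 5e-4)
damage_index = energy[:, reflected].sum() / energy[:, incident].sum()
print(f"WT damage index: {damage_index:.3f}")
```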
Abstract:
To detect errors in decision tables one needs to decide whether a given set of constraints is feasible or not. This paper describes an algorithm to do so when the constraints are linear in variables that take only integer values. Decision tables with such constraints occur frequently in business data processing and in nonnumeric applications. The aim of the algorithm is to exploit the abundance of very simple constraints that occur in typical decision table contexts. Essentially, the algorithm is a backtrack procedure in which the solution space is pruned using the set of simple constraints. After some simplifications, the simple constraints are captured in an acyclic directed graph with weighted edges. Further, only those partial vectors are considered for extension which can be extended to assignments that satisfy at least the simple constraints; this is how pruning of the solution space is achieved. For every partial assignment considered, the graph representation of the simple constraints provides a lower bound for each variable that has not yet been assigned a value. These lower bounds play a vital role in the algorithm, and they are obtained efficiently by updating older lower bounds. The present algorithm also incorporates an idea by which it can be checked whether or not an (m − 2)-ary vector can be extended to a solution vector of m components, thereby reducing backtracking by one component.
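The split between graph-propagated simple constraints and backtracking over the remaining general constraints can be sketched compactly. The code below is a minimal illustration of that idea, not the paper's algorithm: the constraint sets, variable names, and bound-propagation rule are all invented, and the graph is assumed to be topologically ordered.

```python
# Simple constraints: x_j >= x_i + w (edges of an acyclic digraph),
# plus absolute lower bounds x_i >= lb[i]. All values are illustrative.
n = 3
lb = [0, 0, 0]
edges = [(0, 1, 2), (1, 2, 1)]      # x1 >= x0 + 2, x2 >= x1 + 1

def propagate(lb, edges):
    """Longest-path style bound tightening (graph assumed acyclic,
    edges listed in topological order: i < j)."""
    bounds = list(lb)
    for i, j, w in edges:
        bounds[j] = max(bounds[j], bounds[i] + w)
    return bounds

# General constraints: sum(coeffs * x) <= rhs, checked by backtracking.
general = [([1, 1, 1], 10), ([2, -1, 0], 3)]

def feasible(ub=8):
    bounds = propagate(lb, edges)

    def extend(partial):
        k = len(partial)
        if k == n:                  # full assignment: check general set
            return all(sum(c * x for c, x in zip(cs, partial)) <= r
                       for cs, r in general)
        low = bounds[k]
        for i, j, w in edges:       # tighten from already-assigned vars
            if j == k and i < k:
                low = max(low, partial[i] + w)
        for v in range(low, ub + 1):
            if extend(partial + [v]):
                return True
        return False

    return extend([])

print("feasible:", feasible())
```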
Abstract:
A k-dimensional box is the Cartesian product R_1 × R_2 × … × R_k, where each R_i is a closed interval on the real line. The boxicity of a graph G, denoted box(G), is the minimum integer k such that G is the intersection graph of a collection of k-dimensional boxes. A unit cube in k-dimensional space, or a k-cube, is defined as the Cartesian product R_1 × R_2 × … × R_k where each R_i is a closed interval on the real line of the form [a_i, a_i + 1]. The cubicity of G, denoted cub(G), is the minimum k such that G is the intersection graph of a collection of k-cubes. In this paper we show that cub(G) ≤ t + ⌈log(n − t)⌉ − 1 and box(G) ≤ ⌊t/2⌋ + 1, where t is the cardinality of a minimum vertex cover of G and n is the number of vertices of G. We also show the tightness of these upper bounds. F.S. Roberts, in his pioneering paper on boxicity and cubicity, showed that for a graph G, box(G) ≤ ⌊n/2⌋ and cub(G) ≤ ⌈2n/3⌉, where n is the number of vertices of G, and that these bounds are tight. We show that if G is a bipartite graph then box(G) ≤ ⌈n/4⌉, and this bound is tight. We also show that if G is a bipartite graph then cub(G) ≤ n/2 + ⌈log n⌉ − 1. We point out that there exist graphs of very high boxicity but very low chromatic number: for example, there exist bipartite (i.e., 2-colorable) graphs with boxicity equal to n/4. Interestingly, if the boxicity is very close to n/2, then the chromatic number must also be very high. In particular, we show that if box(G) = n/2 − s, s ≥ 0, then χ(G) ≥ n/(2s + 2), where χ(G) is the chromatic number of G.
Abstract:
In this paper, the validity of the single-fault assumption in deriving diagnostic test sets is examined with respect to crosspoint faults in programmable logic arrays (PLA's). The control input procedure developed here can be used to convert PLA's having undetectable crosspoint faults into crosspoint-irredundant PLA's for testing purposes. All crosspoints are testable in crosspoint-irredundant PLA's. The control inputs are used as extra variables during testing and are maintained at logic 1 during normal operation. A useful heuristic for obtaining a near-minimal number of control inputs is suggested. Expressions for calculating bounds on the number of control inputs have also been obtained.
Abstract:
This dissertation analyses the notions of progress and common good in Swedish political language during the Age of Liberty (1719–1772). The method used is conceptual analysis, but the study is also a contribution to the history of political ideas and political culture, aiming at a broader understanding of how the bounds of political community were conceptualised and represented in eighteenth-century Sweden. The research is based on the official documents of the regime, such as the fundamental laws and the solemn speeches made at the opening and closing of the Diet; on normative or alternative descriptions of society, such as historical works and economic literature; and on practical political writings by the Diet and its members. The rhetoric of common good and particular interest is thus examined both in its consensual and theoretical contexts and in practical politics. Central political issues addressed include the extent of economic liberties, the question of freedom to print, the meaning of privilege, the position of particular estates or social groups, and the economic interests of particular areas or persons. This research shows that the modern Swedish word for progress (framsteg) was still only rarely used in the eighteenth century, while the notion of progress, growth and success existed in a variety of closely related terms and metaphorical expressions. The more traditional concept of common good (allmänna bästa) was used in several variants, some of which explicitly related to utility and interest. The combination of public utility and private interest in political discourse challenged traditional ideals of political morality, in which virtue had been the foundation of the common good. The progress of society was also presented as linked to the progress of liberty, knowledge and wealth in a way that can be described as characteristic of the Age of Enlightenment, but which also points to the emergence of early liberal thought.