259 results for Inequality constraint
Abstract:
Quality and bitrate modeling is essential for effectively adapting the bitrate and quality of videos delivered to multiplatform devices over resource-constrained heterogeneous networks. The recent model proposed by Wang et al. estimates the bitrate and quality of videos in terms of the frame rate and quantization parameter. However, to build an effective video adaptation framework, it is crucial to incorporate spatial resolution into the analytical model for bitrate and perceptual quality adaptation. Hence, this paper proposes an analytical model that estimates the bitrate of videos in terms of the quantization parameter, frame rate, and spatial resolution. The model fits the measured data accurately, as evidenced by a high Pearson correlation. The proposed model is based on the observation that the relative reduction in bitrate due to decreasing spatial resolution is independent of the quantization parameter and frame rate. This model can be used in a rate-constrained bit-stream adaptation scheme that selects the scalability parameters to optimize perceptual quality for a given bandwidth constraint.
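The separability observation in this abstract can be illustrated with a small sketch. The power-law factors and the exponents `a`, `b`, `c` below are hypothetical placeholders, not the paper's fitted model; the point is only that a resolution factor independent of the other two parameters makes the relative bitrate reduction from downscaling the same at every operating point.

```python
# Illustrative separable bitrate model R(q, t, s): the spatial-resolution
# factor g(s) is independent of quantization parameter q and frame rate t,
# mirroring the paper's key observation. Functional forms are assumptions.

def normalized_bitrate(q, t, s, q_min, t_max, s_max, a=1.2, b=0.6, c=0.8):
    """Bitrate relative to R_max = R(q_min, t_max, s_max)."""
    f_q = (q / q_min) ** -a   # bitrate falls as quantization coarsens
    h_t = (t / t_max) ** b    # bitrate falls with lower frame rate
    g_s = (s / s_max) ** c    # bitrate falls with lower resolution
    return f_q * h_t * g_s

# Separability check: the relative reduction from halving resolution
# is the same at two different (q, t) operating points.
r1 = normalized_bitrate(30, 30, 0.5, q_min=10, t_max=30, s_max=1.0) / \
     normalized_bitrate(30, 30, 1.0, q_min=10, t_max=30, s_max=1.0)
r2 = normalized_bitrate(50, 15, 0.5, q_min=10, t_max=30, s_max=1.0) / \
     normalized_bitrate(50, 15, 1.0, q_min=10, t_max=30, s_max=1.0)
assert abs(r1 - r2) < 1e-9
```

A model of this shape can be fitted per sequence and then inverted to pick the (q, t, s) triple meeting a bandwidth constraint.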
Abstract:
Delegation is a powerful mechanism for providing flexible and dynamic access control decisions. It is particularly useful in federated environments where multiple systems, each with its own security autonomy, are connected under one common federation. Although many delegation schemes have been studied, current models do not seriously take into account the delegation commitments of the involved parties. To address this issue, this paper introduces a new mechanism that helps parties involved in the delegation process to express commitment constraints, fulfil the commitments, and track the committed actions. The mechanism covers two different aspects: pre-delegation commitment and post-delegation commitment. In the pre-delegation phase, it enables the involved parties to express delegation constraints and address those constraints. The post-delegation phase enables those parties to inform the delegator and service providers how the commitments were carried out. The mechanism utilises a modified SAML assertion structure to support the proposed delegation and constraint approach.
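The abstract does not give the modified assertion schema, so the following is only a plausible shape: the `saml:` elements are standard SAML 2.0, while the `dc:` namespace, element names, and attribute values are invented here purely to illustrate where a commitment constraint could sit inside an assertion's conditions.

```xml
<!-- Hypothetical sketch: a SAML 2.0 assertion carrying a commitment
     constraint. The dc: namespace and its elements are illustrative
     assumptions, not the paper's actual extension schema. -->
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                xmlns:dc="urn:example:delegation-commitment"
                ID="_a1b2c3" Version="2.0" IssueInstant="2010-01-01T00:00:00Z">
  <saml:Issuer>https://idp.example.org</saml:Issuer>
  <saml:Subject>
    <saml:NameID>delegatee@example.org</saml:NameID>
  </saml:Subject>
  <saml:Conditions NotBefore="2010-01-01T00:00:00Z"
                   NotOnOrAfter="2010-01-02T00:00:00Z">
    <dc:CommitmentConstraint phase="pre-delegation">
      <dc:Action>report-usage</dc:Action>
      <dc:Obligor>delegatee@example.org</dc:Obligor>
    </dc:CommitmentConstraint>
  </saml:Conditions>
</saml:Assertion>
```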
Abstract:
This paper presents a method for calculating the in-bucket payload volume on a dragline for the purpose of estimating the material's bulk density in real time. Knowledge of the bulk density can provide instant feedback to mine planning and scheduling to improve blasting, and in turn yield a more uniform bulk density across the excavation site. Furthermore, costs and emissions in dragline operation, maintenance, and downstream material processing can be reduced. The main challenge is to determine an accurate position and orientation of the bucket under the constraint of real-time performance. The proposed solution uses a range, bearing, and tilt sensor to locate and scan the bucket between the lift and dump stages of the dragline cycle. Various scanning strategies are investigated for their benefits in this real-time application. The bucket is segmented from the scene using cluster analysis, while the pose of the bucket is calculated using the iterative closest point (ICP) algorithm. Payload points are segmented from the bucket by a fixed-distance neighbour clustering method that preserves boundary points and excludes low-density clusters introduced by overhead chains and the spreader bar. A height grid is then used to represent the payload, from which the volume can be calculated by summing over the grid cells. We show volume calculated on a scaled system with an accuracy greater than 95 per cent.
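The final height-grid step can be sketched in a few lines. The grid cell size and the flat reference surface at z = 0 are simplifying assumptions here; the paper's pipeline uses the segmented bucket pose rather than a fixed plane.

```python
# Sketch of the height-grid volume step: payload points (x, y, z) are
# binned into a regular grid, each cell keeps its mean height above an
# assumed flat reference at z = 0, and volume is the sum of
# cell_area * mean_height over occupied cells.

def grid_volume(points, cell=0.1):
    """points: iterable of (x, y, z); returns approximate volume."""
    cells = {}  # (ix, iy) -> list of heights falling in that cell
    for x, y, z in points:
        cells.setdefault((int(x // cell), int(y // cell)), []).append(z)
    area = cell * cell
    return sum(area * (sum(h) / len(h)) for h in cells.values())

# Synthetic check: a 1 m x 1 m slab of uniform 0.5 m height.
pts = [(i * 0.05, j * 0.05, 0.5) for i in range(20) for j in range(20)]
print(round(grid_volume(pts), 3))
```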
A modified inverse integer Cholesky decorrelation method and its performance on ambiguity resolution
Abstract:
One of the research focuses in the integer least squares problem is the decorrelation technique used to reduce the number of integer parameter search candidates and improve the efficiency of the integer parameter search method. It remains a challenging issue in determining carrier-phase ambiguities and plays a critical role in the future of high-precision GNSS positioning. Currently, three main decorrelation techniques are employed: integer Gaussian decorrelation, the Lenstra–Lenstra–Lovász (LLL) algorithm, and the inverse integer Cholesky decorrelation (IICD) method. Although the performance of these three state-of-the-art methods has been proved and demonstrated, there is still potential for further improvement. To measure the performance of decorrelation techniques, the condition number is usually used as the criterion. Additionally, the number of grid points in the search space can be used directly as a performance measure, since it denotes the size of the search space. However, a smaller initial volume of the search ellipsoid does not always correspond to a smaller number of candidates. This research proposes a modified inverse integer Cholesky decorrelation (MIICD) method that improves decorrelation performance over the other three techniques. The decorrelation performance of these methods was evaluated based on the condition number of the decorrelation matrix, the number of search candidates, and the initial volume of the search space. Additionally, the success rate of the decorrelated ambiguities was calculated for all methods to investigate the performance of ambiguity validation. The performance of the different decorrelation methods was tested and compared using both simulated and real data. The simulation scenarios employ an isotropic probabilistic model with predetermined eigenvalues and without any geometry or weighting system constraints.
The MIICD method outperformed the other three methods, improving the conditioning over the LAMBDA method by 78.33% and 81.67% without and with the eigenvalue constraint, respectively. The real data experiments cover both a single-constellation case and a dual-constellation case. Experimental results demonstrate that, compared with LAMBDA, the MIICD method significantly improves the efficiency of reducing the condition number, by 78.65% and 97.78% in the single-constellation and dual-constellation cases respectively. It also shows improvements of 98.92% and 100% in the number of search candidate points in the single-constellation and dual-constellation cases.
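The role of decorrelation in shrinking the condition number can be illustrated with the classic single-step integer Gaussian reduction on a 2x2 ambiguity covariance matrix. This is the textbook building block, not the paper's MIICD algorithm, and the example matrix values are made up.

```python
# One integer Gaussian decorrelation step on a 2x2 covariance matrix Q,
# with the condition number (eigenvalue ratio) as the criterion.
import math

def cond2(Q):
    """Condition number of a symmetric positive-definite 2x2 matrix."""
    a, b, d = Q[0][0], Q[0][1], Q[1][1]
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(tr * tr - 4 * det)
    return ((tr + disc) / 2) / ((tr - disc) / 2)

def gauss_decorrelate(Q):
    """Apply the unimodular transform Z = [[1, -mu], [0, 1]],
    mu = round(q12 / q22), returning Z Q Z^T."""
    a, b, d = Q[0][0], Q[0][1], Q[1][1]
    mu = round(b / d)
    return [[a - 2 * mu * b + mu * mu * d, b - mu * d],
            [b - mu * d, d]]

Q = [[6.29, 5.98], [5.98, 6.29]]   # highly correlated ambiguities
Qz = gauss_decorrelate(Q)
print(cond2(Q), cond2(Qz))          # the condition number drops sharply
```

Because Z is unimodular, the search-space volume (the determinant) is preserved while the ellipsoid becomes much rounder.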
Abstract:
Malaysian urban river corridors are facing major physical transformations in the 21st century. The effects of rapid development, exacerbated by competition between two key industry sectors (commercial development and tourism) in conjunction with urbanisation and industrialisation, have placed high demand on the uses of these spaces. The political scenario and the lack of consideration of ecological principles in design solutions have imposed severe environmental and cultural constraints on the landscape character as well as the ecological system. A holistic approach to improving the landscape design process is therefore extremely necessary to protect the values of these places. Limited research has been carried out, creating an urgent need to explore better ways to improve the landscape design process of Malaysian urban river corridor developments that encompass the needs and aspirations of a multi-ethnic society without making drastic changes to the landscape character of the rivers. This paper provides a brief introduction to this significant gap and hence serves to contribute to the literature.
Abstract:
The role of ecological constraints in the acquisition of sport expertise is gaining attention in sport science, although more research is needed. In this position paper we provide an ecological explanation for expertise acquisition, drawing on qualitative data that support the idea that unconventional, even aversive, environmental constraints may play an important role in the development of world-class athletes. We exemplify this argument by profiling the role of unconventional practice environments, using association football in Brazilian society as a task vehicle. Contrary to the traditional idea that only deliberate training and development programmes can lead to expertise, we propose that expert performance might be gained through highly unstructured activities in Brazilian football, which represent a powerful and little-understood implicit environmental constraint that can lead to expertise development in sport.
Abstract:
This paper analyzes the effects of different practice task constraints on heart rate (HR) variability during 4v4 small-sided football games. Participants were sixteen football players divided into two age groups (U13, mean age 12.4±0.5 yrs; U15, mean age 14.6±0.5 yrs). The task consisted of a 4v4 sub-phase without goalkeepers on a 25x15 m field, lasting 15 minutes per condition with an active recovery period of 6 minutes between conditions. We recorded players' heart rates using heart rate monitors (Polar Team System, Polar Electro, Kempele, Finland) as the scoring mode was manipulated (line goal: scoring by dribbling past an extended line; double goal: scoring in either of two lateral goals; central goal: scoring only in one goal). Subsequently, %HR reserve was calculated with the Karvonen formula. We performed a time-series analysis of HR for each individual in each condition. Mean data for intra-participant variability showed that the autocorrelation function was associated with more short-range dependence processes in the line goal condition than in the other conditions, demonstrating that the line goal constraint induced more randomness in the HR response. Regarding inter-individual variability, line goal constraints produced lower %CV and %RMSD (U13: 9% and 19%; U15: 10% and 19%) than double goal (U13: 12% and 21%; U15: 12% and 21%) and central goal (U13: 14% and 24%; U15: 13% and 24%) task constraints, respectively. The results suggest that line goal constraints imposed more randomness on the cardiovascular stimulation of each individual and lower inter-individual variability than double goal and central goal constraints.
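The two quantities named in the abstract are standard. The Karvonen formula expresses exercise intensity as a percentage of heart rate reserve, %HRR = (HR − HR_rest) / (HR_max − HR_rest) × 100, and %CV is the coefficient of variation. The HR values below are illustrative, not data from the study.

```python
# Karvonen %HR reserve and a simple inter-individual variability measure.
# Example inputs are made-up illustrations, not the study's data.

def percent_hr_reserve(hr, hr_rest, hr_max):
    """Karvonen formula: intensity as a % of heart rate reserve."""
    return 100.0 * (hr - hr_rest) / (hr_max - hr_rest)

def percent_cv(series):
    """Coefficient of variation (%): sample SD relative to the mean."""
    n = len(series)
    mean = sum(series) / n
    sd = (sum((x - mean) ** 2 for x in series) / (n - 1)) ** 0.5
    return 100.0 * sd / mean

print(round(percent_hr_reserve(170, 60, 200), 2))   # -> 78.57
print(round(percent_cv([78, 74, 81, 76]), 1))
```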
Abstract:
The effects of rapid development have increased pressure on these places, exacerbated by competition between two key industry sectors (commercial development and tourism). This, together with urbanisation and industrialisation, has placed high demand on the uses of these spaces. The political scenario and the lack of adoption of ecological principles and public participation in the design approach have imposed severe environmental, historical, and cultural constraints on the landscape character as well as the ecological system. A holistic approach to improving the landscape design process is therefore extremely necessary to protect the human well-being and the cultural, environmental, and historical values of these places. Limited research has been carried out to overcome this situation, creating an urgent need to explore better ways to improve the landscape design process of Malaysian heritage urban river corridor developments that encompass the needs and aspirations of Malaysia's multi-ethnic society without making drastic changes to the landscape character of the rivers. This paper presents a methodology for developing an advanced Landscape Character Assessment (aLCA) framework for evaluating the landscape character of these places, derived from the perceptions of two key yet oppositional stakeholder groups: the urban design team and the special-interest public. A triangulation of subjectivist-paradigm methodologies will be employed: the psychophysical, the psychological, and the phenomenological approaches. The outcome will be used to improve the present landscape design process for the future development of these places. Unless a range of perspectives can be brought to bear on enhancing the form and function of their future development and management, urban river corridors in the Malaysian context will continue to decline.
Abstract:
Current knowledge about the relationship between transport disadvantage and activity space size is limited to urban areas, and as a result very little is known to date about this link in a rural context. In addition, although research has identified transport-disadvantaged groups based on the size of their activity spaces, these studies have not empirically explained such differences, and the result is often poor identification of the problems facing disadvantaged groups. Research has shown that transport disadvantage varies over time. The static nature of analyses using the activity space concept in previous research has lacked the ability to identify transport disadvantage in time. Activity space is a dynamic concept, and therefore possesses great potential for capturing temporal variations in behaviour and access to opportunities. This research derives measures of the size and fullness of activity spaces for 157 individuals for weekdays, weekends, and the whole week using weekly activity-travel diary data from three case study areas in rural Northern Ireland. Four focus groups were also conducted in order to triangulate the quantitative findings and to explain the differences between socio-spatial groups. The findings show that despite having smaller activity spaces, individuals were not disadvantaged because they were able to access their required activities locally. Car ownership was found to be an important lifeline in rural areas. Temporal disaggregation of the data reveals that this is true only at weekends, due to a lack of public transport services. In addition, despite activity spaces being of similar size, the fullness of the activity spaces of low-income individuals was found to be significantly lower than that of their high-income counterparts.
Focus group data show that financial constraints and poor connections, both between public transport services and between transport routes and opportunities, forced individuals to participate in activities located along the main transport corridors.
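One common way to operationalise activity-space "size" from diary data is the area of the convex hull of a person's visited locations. The hull construction (Andrew's monotone chain) and shoelace area below are a generic sketch of that idea, not necessarily the measure used in this study.

```python
# Activity-space size as the convex hull area of visited (x, y) points.

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices counterclockwise."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for seq, out in ((pts, lower), (reversed(pts), upper)):
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
    return lower[:-1] + upper[:-1]

def hull_area(pts):
    """Shoelace formula over the hull vertices."""
    h = convex_hull(pts)
    n = len(h)
    return abs(sum(h[i][0]*h[(i+1) % n][1] - h[(i+1) % n][0]*h[i][1]
                   for i in range(n))) / 2.0

# Unit square of activity locations plus one interior point -> area 1.
print(hull_area([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]))
```

"Fullness" measures would then relate the visited locations to the opportunities available inside that hull.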
Abstract:
Reforming schooling to enable engagement and success for those typically marginalised and failed by schools is a necessary task for educational researchers and activists concerned with injustice. However, it is a difficult pursuit, with a long history of failed attempts. This paper outlines the rationale of an Australian partnership research project, Redesigning Pedagogies in the North (RPiN), which took on such an effort in public secondary schooling contexts that, in current times, are beset with 'crisis' conditions and constrained by policy rationales that make it difficult to pursue issues of justice. Within the project, university investigators and teachers collaborated in action research that drew on a range of conceptual resources for redesigning curriculum and pedagogies, including: funds of knowledge; vernacular or local literacies; place-based education; the 'productive pedagogies'; and the 'unofficial curriculum' of popular culture and out-of-school learning settings. In bringing these resources together with the aim of interrupting the reproduction of inequality, the project developed a methodo-logic that builds on Bourdieuian insights.
Abstract:
We study the regret of optimal strategies for online convex optimization games. Using von Neumann's minimax theorem, we show that the optimal regret in this adversarial setting is closely related to the behavior of the empirical minimization algorithm in a stochastic process setting: it is equal to the maximum, over joint distributions of the adversary's action sequence, of the difference between a sum of minimal expected losses and the minimal empirical loss. We show that the optimal regret has a natural geometric interpretation, since it can be viewed as the gap in Jensen's inequality for a concave functional (the minimizer over the player's actions of expected loss) defined on a set of probability distributions. We use this expression to obtain upper and lower bounds on the regret of an optimal strategy for a variety of online learning problems. Our method provides upper bounds without the need to construct a learning algorithm; the lower bounds provide explicit optimal strategies for the adversary.
Peter L. Bartlett, Alexander Rakhlin
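The identity described in words above can be written out; the notation here is a reconstruction for illustration (a is the player's action, x_t the adversary's action at round t, ℓ the loss, and P ranges over joint distributions of the adversary's sequence), not the paper's exact statement.

```latex
% Minimax value of the n-round game: sum of minimal conditional
% expected losses minus the minimal empirical loss, maximized over
% joint distributions P of (x_1, \dots, x_n).
\[
  \mathcal{R}_n \;=\; \sup_{P}\;
  \mathbb{E}_{P}\!\left[
    \sum_{t=1}^{n} \min_{a}\,
      \mathbb{E}\big[\ell(a, x_t)\,\big|\, x_1,\dots,x_{t-1}\big]
    \;-\; \min_{a} \sum_{t=1}^{n} \ell(a, x_t)
  \right].
\]
```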
Abstract:
There have been notable advances in learning to control complex robotic systems using methods such as Locally Weighted Regression (LWR). In this paper we explore some potential limits of LWR for robotic applications, particularly investigating its application to systems with a long horizon of temporal dependence. We define the horizon of temporal dependence as the delay from a control input to a desired change in output. LWR alone cannot be used in a temporally dependent system to find meaningful control values from only the current state variables and output, as the relationship between the input and the current state is under-constrained. By introducing a receding horizon of the future output states of the system, we show that sufficient constraint is applied to learn good solutions through LWR. The new method, Receding Horizon Locally Weighted Regression (RH-LWR), is demonstrated through one-shot learning on a real Series Elastic Actuator controlling a pendulum.
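For readers unfamiliar with the base method, here is a generic sketch of Locally Weighted Regression for a one-dimensional input: each query fits a weighted line using a Gaussian kernel centred on the query point. This shows plain LWR only, not the receding-horizon extension (RH-LWR) introduced in the paper, and the bandwidth value is arbitrary.

```python
# Generic 1-D Locally Weighted Regression: weighted least-squares line
# fit around each query point, with Gaussian kernel weights.
import math

def lwr_predict(xq, xs, ys, tau=0.5):
    """Predict y at query xq from samples (xs, ys); tau is bandwidth."""
    w = [math.exp(-(x - xq) ** 2 / (2 * tau ** 2)) for x in xs]
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, xs)) / sw   # weighted mean of x
    my = sum(wi * y for wi, y in zip(w, ys)) / sw   # weighted mean of y
    num = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
    den = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs))
    slope = num / den if den else 0.0
    return my + slope * (xq - mx)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 1.0, 2.0, 3.0, 4.0]              # samples on the line y = x
print(round(lwr_predict(2.5, xs, ys), 6))   # recovers the line locally
```

In RH-LWR the regression inputs would additionally include a horizon of future output states, which is what supplies the missing constraint for long temporal dependence.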
Abstract:
We investigate the behavior of the empirical minimization algorithm using various methods. We first analyze it by comparing the empirical (random) structure and the original structure on the class, either in an additive sense, via the uniform law of large numbers, or in a multiplicative sense, using isomorphic coordinate projections. We then show that a direct analysis of the empirical minimization algorithm yields a significantly better bound, and that the estimates we obtain are essentially sharp. The method of proof we use is based on Talagrand's concentration inequality for empirical processes.
Abstract:
We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
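The selection rule itself is simple to state in code: over a nested sequence of models, choose the index minimizing empirical risk plus penalty. The risk and penalty numbers below are made up for illustration; the paper's contribution concerns which penalties make the resulting oracle inequality tight.

```python
# Complexity-penalized model selection over a nested model sequence:
# pick argmin_k (empirical_risk_k + penalty_k). Values are illustrative.

def select_model(empirical_risks, penalties):
    """Models are ordered by inclusion, so empirical risk is
    non-increasing while the penalty grows with complexity."""
    scores = [r + p for r, p in zip(empirical_risks, penalties)]
    return min(range(len(scores)), key=scores.__getitem__)

emp = [0.40, 0.22, 0.15, 0.14, 0.135]   # shrinks as models grow
pen = [0.01, 0.03, 0.06, 0.12, 0.20]    # grows with model complexity
k = select_model(emp, pen)
print(k)   # the index trading off fit against complexity
```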