969 results for performance constraints


Relevance: 60.00%

Abstract:

Cloud Computing has evolved to become an enabler for delivering access to large-scale distributed applications running on managed, network-connected computing systems. This makes it possible to host Distributed Enterprise Information Systems (dEISs) in cloud environments while enforcing strict performance and quality-of-service requirements, defined using Service Level Agreements (SLAs). SLAs define the performance boundaries of distributed applications and are enforced by a cloud management system (CMS) that dynamically allocates the available computing resources to the cloud services. We present two novel VM-scaling algorithms for dEIS systems, which detect the most appropriate scaling conditions using performance models of distributed applications derived from constant-workload benchmarks, together with SLA-specified performance constraints. We simulate the VM-scaling algorithms in a cloud simulator and compare them against trace-based performance models of dEISs. We compare a total of three SLA-based VM-scaling algorithms (one using prediction mechanisms) based on a real-world application scenario involving a large, variable number of users. Our results show that it is beneficial to use autoregressive, predictive, SLA-driven scaling algorithms in cloud management systems for guaranteeing performance invariants of distributed cloud applications, as opposed to using only reactive SLA-based VM-scaling algorithms.
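To make the contrast between reactive and predictive scaling concrete, the Python sketch below compares a rule that acts on the last measured response time with one that forecasts the next value from a short autoregressive fit. It is a minimal illustration only; the function names, SLA threshold and AR order are assumptions, not details taken from the paper.

# Hypothetical sketch of the two scaling policies discussed above: a reactive
# rule acting on the latest measured response time, and a predictive rule that
# forecasts the next value with a simple least-squares autoregressive (AR) fit.
from dataclasses import dataclass

import numpy as np


@dataclass
class ScalingDecision:
    scale_out: bool
    reason: str


def reactive_policy(response_times_ms: list[float], sla_limit_ms: float) -> ScalingDecision:
    """Scale out as soon as the most recent measurement violates the SLA."""
    latest = response_times_ms[-1]
    return ScalingDecision(latest > sla_limit_ms, f"latest={latest:.0f}ms vs SLA={sla_limit_ms:.0f}ms")


def predictive_policy(response_times_ms: list[float], sla_limit_ms: float, order: int = 3) -> ScalingDecision:
    """Scale out if a least-squares AR(order) forecast of the next sample exceeds the SLA."""
    y = np.asarray(response_times_ms, dtype=float)
    # Build the lagged regression matrix: y[t] ~ y[t-1], ..., y[t-order]
    X = np.column_stack([y[order - k - 1:len(y) - k - 1] for k in range(order)])
    coeffs, *_ = np.linalg.lstsq(X, y[order:], rcond=None)
    forecast = float(coeffs @ y[-1:-order - 1:-1])
    return ScalingDecision(forecast > sla_limit_ms, f"forecast={forecast:.0f}ms vs SLA={sla_limit_ms:.0f}ms")


if __name__ == "__main__":
    history = [210, 220, 245, 270, 300, 340]  # rising load, still under a 350 ms SLA
    print(reactive_policy(history, 350.0))    # latest sample below the SLA: no action yet
    print(predictive_policy(history, 350.0))  # forecast crosses the SLA: scale out early

The point of the comparison is the same as in the abstract: the predictive rule triggers scale-out before the SLA is actually violated, whereas the reactive rule only responds after the fact.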

Relevance: 60.00%

Abstract:

Although employees are encouraged to exercise after work to keep physically fit, they should not suffer injury while doing so. Some sports injuries that occur after work appear to be work-related and preventable. This study investigated whether cognitive failure mediates the influence of mental work demands and conscientiousness on risk-taking and on risky, unaware behaviour during after-work sports activities. Participants were 129 employees (36% female) who regularly took part in team sports after work. A structural equation model showed that work-related cognitive failure significantly mediated the influence of mental work demands on risky behaviour during sports (p < .05) and also mediated the directional link between conscientiousness and risky behaviour during sports (p < .05). A path from risky behaviour during sports to sports injuries in the last four weeks was also significant (p < .05). Performance constraints, time pressure, and task uncertainty are likely to increase cognitive load and thereby boost cognitive failures both during work and during sports activities after work. Some sports injuries after work could therefore be prevented by work redesign.
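As a rough illustration of the mediation structure tested here (mental work demands -> work-related cognitive failure -> risky behaviour during sports), the sketch below runs a simplified, regression-based mediation check on synthetic data. The study itself used a structural equation model; the variable names, effect sizes and data below are hypothetical.

# Minimal Baron-and-Kenny-style mediation check on synthetic data; only meant
# to illustrate the path structure described in the abstract.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 129  # sample size reported in the abstract
demands = rng.normal(size=n)
cog_failure = 0.5 * demands + rng.normal(size=n)          # a-path (assumed effect size)
risky_behaviour = 0.4 * cog_failure + rng.normal(size=n)  # b-path (assumed effect size)
df = pd.DataFrame({"demands": demands, "cog_failure": cog_failure, "risky": risky_behaviour})

a_path = smf.ols("cog_failure ~ demands", df).fit()
b_path = smf.ols("risky ~ cog_failure + demands", df).fit()
indirect = a_path.params["demands"] * b_path.params["cog_failure"]
print(f"a = {a_path.params['demands']:.2f}, b = {b_path.params['cog_failure']:.2f}, "
      f"indirect effect a*b = {indirect:.2f}")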

Relevance: 60.00%

Abstract:

With more experience in the labor market, some job characteristics increase and some decrease. For example, among young employees who have just entered the labor market, job control may initially be low but increase with routine and experience. Job control is a job resource that is valued in itself and is positively associated with job satisfaction, but job control also helps in dealing with stressors at work. There is little research on correlated changes, but the existing evidence suggests a joint development over time. Even less is known about the relevance of such changes for employees. Research usually uses mean levels to predict mean levels in outcomes, but development in job control and stressors may be as relevant for job satisfaction as having a certain level of those job characteristics. Job satisfaction is typically regarded as a positive attitude towards one's work. What has received less attention is that some employees may lower their expectations if their job situation does not reflect their needs, resulting in a resigned attitude towards their job. The present study investigates the development of job control and task-related stressors over ten years and tests the predictive value of changes in job control and task-related stressors for a resigned attitude towards one's job. We used data from a Swiss panel study (N = 356) spanning ten years. Job control, task-related stressors (an index consisting of time pressure, concentration demands, performance constraints, interruptions, and uncertainty about tasks), and resigned attitude towards one's job were assessed in 1998, 1999, 2001, and 2008. Latent growth modeling revealed that the growth rates of job control and task-related stressors were not correlated with one another. We predicted resigned attitude towards one's job in 2008 (a) by initial levels and (b) by changes in job control and stressors, controlling for resigned attitude in 1998. There was some prediction by initial levels (job control: β = -.15, p < .05; task-related stressors: β = .12, p = .06). However, as expected, changes in control and stressors predicted resigned attitude much better, with β = -.37, p < .001, for changes in job control, and β = .31, p < .001, for changes in task-related stressors. Our data confirm the importance of low levels of task-related stressors and higher levels of job control for job attitudes. However, development in these job characteristics seems even more important than initial levels.
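The study used latent growth modeling; the sketch below illustrates the same comparison (initial levels versus change over time as predictors of resigned attitude) in a much simpler way, with difference scores and OLS on synthetic data. Column names and effect sizes are hypothetical.

# Simplified stand-in for the level-vs-change comparison: synthetic panel data,
# change scores, and two OLS models compared by explained variance.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 356  # panel size reported in the abstract
control_1998 = rng.normal(size=n)
control_2008 = control_1998 + rng.normal(scale=0.8, size=n)
stressors_1998 = rng.normal(size=n)
stressors_2008 = stressors_1998 + rng.normal(scale=0.8, size=n)
df = pd.DataFrame({
    "control_init": control_1998,
    "control_change": control_2008 - control_1998,
    "stressors_init": stressors_1998,
    "stressors_change": stressors_2008 - stressors_1998,
})
# Assume resignation in 2008 responds mainly to the *changes*, as the abstract reports.
df["resigned_2008"] = -0.4 * df["control_change"] + 0.3 * df["stressors_change"] + rng.normal(size=n)

levels_only = smf.ols("resigned_2008 ~ control_init + stressors_init", df).fit()
with_changes = smf.ols("resigned_2008 ~ control_init + stressors_init + control_change + stressors_change", df).fit()
print(f"R² with initial levels only: {levels_only.rsquared:.2f}")
print(f"R² adding change scores:     {with_changes.rsquared:.2f}")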

Relevance: 60.00%

Abstract:

Cloud Computing enables the provisioning and distribution of highly scalable services in a reliable, on-demand and sustainable manner. However, managing enterprise distributed applications in cloud environments under Service Level Agreement (SLA) constraints makes it challenging to maintain optimal resource control. Furthermore, conflicting objectives in the management of cloud infrastructure and distributed applications might lead to violations of SLAs and inefficient use of hardware and software resources. This dissertation focuses on how SLAs can be used as an input to the cloud management system (CMS), increasing the efficiency of resource allocation as well as of infrastructure scaling. First, we present an extended SLA semantic model for modelling complex service dependencies in distributed applications and for enabling automated cloud infrastructure management operations. Second, we describe a multi-objective VM allocation algorithm for optimised resource allocation in infrastructure clouds. Third, we describe a method for discovering relations between the performance indicators of services belonging to distributed applications, and for using these relations to build scaling rules that a CMS can apply for automated management of VMs. Fourth, we introduce two novel VM-scaling algorithms, which optimally scale systems composed of VMs based on given SLA performance constraints. All of the presented research was implemented and tested using enterprise distributed applications.
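The sketch below gives a flavour of the kind of multi-objective VM placement the second contribution refers to: each candidate host is scored on several objectives (here, balanced CPU/memory utilisation and consolidation) and the best-scoring host that satisfies the capacity constraints wins. The weights, objectives and data structures are illustrative assumptions, not the dissertation's actual algorithm.

# Hypothetical weighted-score VM placement heuristic.
from dataclasses import dataclass, field


@dataclass
class Host:
    cpu_capacity: float
    mem_capacity: float
    cpu_used: float = 0.0
    mem_used: float = 0.0
    vms: list = field(default_factory=list)

    def fits(self, cpu: float, mem: float) -> bool:
        return self.cpu_used + cpu <= self.cpu_capacity and self.mem_used + mem <= self.mem_capacity


def placement_score(host: Host, cpu: float, mem: float, w_balance: float = 0.5, w_consolidate: float = 0.5) -> float:
    """Higher is better: prefer balanced CPU/memory use and already-loaded hosts."""
    cpu_util = (host.cpu_used + cpu) / host.cpu_capacity
    mem_util = (host.mem_used + mem) / host.mem_capacity
    balance = 1.0 - abs(cpu_util - mem_util)       # objective 1: avoid resource skew
    consolidation = (cpu_util + mem_util) / 2.0    # objective 2: pack hosts tightly
    return w_balance * balance + w_consolidate * consolidation


def place_vm(hosts: list, vm_id: str, cpu: float, mem: float) -> Host | None:
    candidates = [h for h in hosts if h.fits(cpu, mem)]
    if not candidates:
        return None  # no host fits: this would trigger infrastructure scale-out
    best = max(candidates, key=lambda h: placement_score(h, cpu, mem))
    best.cpu_used += cpu
    best.mem_used += mem
    best.vms.append(vm_id)
    return best


if __name__ == "__main__":
    hosts = [Host(16, 64), Host(16, 64)]
    for i, (cpu, mem) in enumerate([(4, 8), (2, 16), (8, 16)]):
        chosen = place_vm(hosts, f"vm-{i}", cpu, mem)
        print(f"vm-{i} -> host with {chosen.cpu_used}/{chosen.cpu_capacity} vCPU used" if chosen else f"vm-{i} rejected")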

Relevance: 60.00%

Abstract:

This PhD thesis addresses the design and implementation of signal processing applications on reconfigurable FPGA platforms. This kind of platform offers high logic capacity, incorporates dedicated signal processing elements and provides a low-cost solution, which makes it ideal for developing signal processing applications where intensive data processing is required to achieve high performance. However, the cost associated with hardware development on these platforms is high. While the increase in logic capacity of FPGA devices allows the development of complete systems, high-performance constraints often require operators to be optimised at a very low level. In addition to the timing constraints imposed by these applications, area constraints tied to the particular device also apply, forcing the designer to evaluate and verify a design against different implementation alternatives. The design and implementation cycle for these applications can be so long that new FPGA models with greater capacity and higher speed commonly appear before the system implementation is complete, rendering the constraints that originally guided the design obsolete. Different methods can be used to improve productivity when developing these applications and consequently shorten their design cycle. This thesis focuses on the reuse of hardware components that have previously been designed and verified. Although conventional HDLs allow the reuse of already-defined components, their specification can be improved to simplify the process of incorporating components into new designs. Thus, the first part of the thesis focuses on the specification of designs based on predefined components. This specification not only improves and simplifies the process of adding components to a description, but also seeks to improve the quality of the specified design by offering richer configuration options and even the ability to report on characteristics of the description itself. Reusing an already-described component depends largely on the information provided for its integration into a system. In this respect, conventional HDLs provide, together with the component description, only the input/output interface and a set of configuration parameters, while the remaining required information is usually supplied as external documentation. The second part of the thesis proposes a set of encapsulations whose purpose is to bundle, with the component description itself, information that can be useful for its integration into other designs: implementation details, support for configuring the component, and even information on how to configure and connect the component to perform a given function. Finally, a classic signal processing application, the fast Fourier transform (FFT), is chosen as a case study to illustrate both the specification possibilities and the proposed encapsulations. The objective of the resulting design is not only to provide practical examples of the proposed specification, but also to obtain an implementation of a quality comparable to results reported in the literature. To this end, the design targets FPGA implementation, exploiting both general-purpose logic elements and the low-level, device-specific elements available on these devices. The specification of the resulting FFT is then used to show how to incorporate into its interface information that assists in component selection and configuration from the early phases of the design cycle.
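As a language-neutral illustration of the kind of reuse information such an encapsulation could carry alongside an HDL description, the Python sketch below models hypothetical metadata for an FFT core (interface, configuration and implementation figures). All names and values are invented for illustration and are not the thesis's actual encapsulation format.

# Hypothetical metadata record that bundles interface, configuration and
# implementation information with a reusable FFT component.
from dataclasses import dataclass, field


@dataclass
class PortSpec:
    name: str
    direction: str   # "in" or "out"
    width_bits: int


@dataclass
class FFTCoreMetadata:
    name: str
    points: int                 # FFT length, e.g. 1024
    data_width_bits: int
    ports: list = field(default_factory=list)
    # Implementation information that conventional HDLs would leave to external documentation.
    target_device: str = "generic FPGA"
    max_clock_mhz: float = 0.0
    dsp_blocks: int = 0
    block_rams: int = 0

    def throughput_msps(self) -> float:
        """Streaming throughput assuming one sample per clock cycle (an assumption of this sketch)."""
        return self.max_clock_mhz


fft_core = FFTCoreMetadata(
    name="fft_radix2_streaming",
    points=1024,
    data_width_bits=16,
    ports=[PortSpec("clk", "in", 1), PortSpec("din_re", "in", 16), PortSpec("dout_re", "out", 16)],
    target_device="mid-range FPGA (assumed)",
    max_clock_mhz=250.0,
    dsp_blocks=12,
    block_rams=8,
)
print(f"{fft_core.name}: {fft_core.points}-point, ~{fft_core.throughput_msps():.0f} MS/s at {fft_core.max_clock_mhz} MHz")

Carrying this kind of information with the component, rather than in separate documentation, is what allows a design tool to assist with component selection and configuration early in the design cycle.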

Relevance: 60.00%

Abstract:

The objective of this work was to design, construct, test and operate a novel circulating fluid bed (CFB) fast pyrolysis reactor system for the production of liquids from biomass. The novelty lies in incorporating an integral char combustor to provide autothermal operation. A reactor design methodology was devised which correlated input parameters to process variables, namely temperature, heat transfer and gas/vapour residence time, for both the char combustor and the biomass pyrolyser. From this methodology a CFB reactor with integral char combustion was designed for a 10 kg/h biomass throughput. A full-scale cold model of the CFB unit was constructed and tested to derive suitable hydrodynamic relationships and performance constraints. Early difficulties with poor solids circulation and inefficient product recovery were overcome by a series of modifications. A total of 11 runs in pyrolysis mode were carried out, with a maximum total liquids yield of 61.5 wt% on a moisture-and-ash-free (maf) biomass basis, obtained at 500°C and a gas/vapour residence time of 0.46 s. This could be raised to an anticipated 75 wt% (maf basis) through improved vapour recovery by direct quenching. The reactor provides a very high specific throughput of 1.12-1.48 kg/h·m² and the lowest gas-to-feed ratio, 1.3-1.9 kg gas/kg feed, compared with other fast pyrolysis processes based on pneumatic reactors, and has good scale-up potential. These features should provide a significant capital cost reduction. Results to date suggest that the process is limited by the extent of char combustion. Future work will address resizing of the char combustor to increase overall system capacity, improvement of solids separation and substantially better liquid recovery. Extended testing will provide a better evaluation of steady-state operation and supply data for process simulation and reactor modelling.
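Two of the quantities reported above, gas/vapour residence time and gas-to-feed ratio, follow from simple definitions. The sketch below applies them to hypothetical numbers (the reactor free volume and flow rates are not taken from the thesis) purely to show the arithmetic.

# Residence time and gas-to-feed ratio from their definitions, on invented inputs.

def residence_time_s(free_reactor_volume_m3: float, gas_flow_m3_per_h: float) -> float:
    """Mean gas/vapour residence time = free reactor volume / volumetric gas flow at reactor conditions."""
    return free_reactor_volume_m3 / gas_flow_m3_per_h * 3600.0


def gas_to_feed_ratio(gas_mass_flow_kg_per_h: float, biomass_feed_kg_per_h: float) -> float:
    """kg of fluidising/transport gas required per kg of biomass fed."""
    return gas_mass_flow_kg_per_h / biomass_feed_kg_per_h


# Hypothetical example: a small riser with 0.004 m3 of free volume and 31 m3/h of gas at reactor temperature.
print(f"residence time ~ {residence_time_s(0.004, 31.0):.2f} s")
# Hypothetical example: 15 kg/h of gas for the 10 kg/h biomass throughput mentioned above.
print(f"gas-to-feed ratio ~ {gas_to_feed_ratio(15.0, 10.0):.1f} kg gas / kg feed")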

Relevance: 40.00%

Abstract:

The influence of different instructional constraints on movement organisation and performance outcomes of the penalty kick (PK) was investigated according to participant age. Sixty penalty takers and twelve goalkeepers from two age groups (under 15 and under 17) performed 300 PKs under five different task conditions: no explicit instructional constraints (Control); instructional constraints on goalkeeper immobility (IMMOBILE) and mobility (MOBILE); and use of keeper-dependent (DEP) and keeper-independent (INDEP) strategies by penalty takers. Every trial was video recorded and digitised using motion analysis techniques. The dependent variables (DVs) were the movement speed of penalty takers, the angle between the goalkeeper's position and the goal line (diving angle), and the angle between the penalty taker and a line crossing the penalty spot and the centre of the goal (run-up angle). Instructions significantly influenced how goalkeepers (higher values in MOBILE relative to Control) and penalty takers (higher values in Control than in DEP) used movement speed during performance, as well as the goalkeepers' movements and diving angle (less pronounced dives in the MOBILE condition compared with INDEP). The results showed how different instructions constrained participants' movements during performance, although players' performance efficacy remained constant, reflecting their adaptive variability.
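A small geometry sketch of how the two angles used as dependent variables could be computed from digitised 2D pitch coordinates is given below. The coordinate conventions (goal line along the x-axis, positions in metres) and the sample points are assumptions for illustration, not the study's actual digitisation protocol.

# Diving angle and run-up angle as angles between 2D vectors.
import numpy as np


def angle_between_deg(v1: np.ndarray, v2: np.ndarray) -> float:
    """Unsigned angle between two 2D vectors, in degrees."""
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))


goal_line_dir = np.array([1.0, 0.0])      # goal line assumed parallel to the x-axis
goal_centre = np.array([0.0, 0.0])
penalty_spot = np.array([0.0, 11.0])      # 11 m from the goal line

# Diving angle: direction of the keeper's dive relative to the goal line.
keeper_start = np.array([0.0, 0.5])
keeper_end = np.array([1.8, 1.0])
diving_angle = angle_between_deg(keeper_end - keeper_start, goal_line_dir)

# Run-up angle: penalty taker's approach relative to the spot-to-goal-centre line.
taker_position = np.array([-2.5, 14.0])
run_up_angle = angle_between_deg(penalty_spot - taker_position, goal_centre - penalty_spot)

print(f"diving angle ~ {diving_angle:.1f} deg, run-up angle ~ {run_up_angle:.1f} deg")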

Relevance: 40.00%

Abstract:

Coordination of dynamic interceptive movements is predicated on cyclical relations between an individual's actions and information sources from the performance environment. To identify dynamic informational constraints, which are interwoven with individual and task constraints, coaches’ experiential knowledge provides a complementary source to support empirical understanding of performance in sport. In this study, 15 expert coaches from 3 sports (track and field, gymnastics and cricket) participated in a semi-structured interview process to identify potential informational constraints which they perceived to regulate action during run-up performance. Expert coaches’ experiential knowledge revealed multiple information sources which may constrain performance adaptations in such locomotor pointing tasks. In addition to the locomotor pointing target, coaches’ knowledge highlighted two other key informational constraints: vertical reference points located near the locomotor pointing target and a check mark located prior to the locomotor pointing target. This study highlights opportunities for broadening the understanding of perception and action coupling processes, and the identified information sources warrant further empirical investigation as potential constraints on athletic performance. Integration of experiential knowledge of expert coaches with theoretically driven empirical knowledge represents a promising avenue to drive future applied science research and pedagogical practice.

Relevance: 40.00%

Abstract:

Utilising quantitative and qualitative research methods, the thesis explored how movement patterns were coordinated under different conditions in elite athletes. The results revealed each elite athlete's ability to use multiple, varied information sources to guide successful task performance, highlighting the specific role of surrounding objects in the performance environment in perceptually guiding behaviour. Combining elite coaching knowledge with empirical research enhanced understanding of the role of vision in regulating interceptive behaviours and improved the representative design of training environments. The main findings have been applied to the training design of the Athletics Australia National Jumps Centre at the Queensland Academy of Sport in preparation for the World Indoor Championships, World Championships, and Olympic Games for Australian long and triple jumpers.

Relevance: 40.00%

Abstract:

In this note, we present a method to characterize the degradation in performance that arises in linear systems due to constraints imposed on the magnitude of the control signal to avoid saturation effects. We do this in the context of cheap control for tracking step signals.
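For context only, and not as the note's own result: in the unconstrained cheap-control tracking problem for a linear SISO plant whose open right-half-plane zeros are z_1, ..., z_m, the best achievable integral-square error for a unit step reference is classically given (in LaTeX notation) by

\[
  J^{*} \;=\; \inf_{u(\cdot)} \int_{0}^{\infty} e(t)^{2}\,\mathrm{d}t \;=\; \sum_{i=1}^{m} \frac{2}{z_{i}},
  \qquad e(t) = r(t) - y(t), \quad r(t) \equiv 1 .
\]

The note above characterises how this ideal figure degrades further once the magnitude of the control signal is bounded to avoid saturation.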

Relevance: 40.00%

Abstract:

The objectives of this study were to determine the impact of different instructional constraints on standing broad jump (SBJ) performance in children and to understand the underlying changes in emergent movement patterns. Two groups of novice participants were provided with either externally or internally focused attentional instructions during an intervention phase. Pre- and post-test sessions were undertaken to determine changes in performance and movement patterns. Thirty-six fourth-grade male primary school students were recruited and randomly assigned to an external focus, internal focus or control group. Different instructional constraints with either an external focus (image of the achievement) or an internal focus (image of the act) were provided to the participants. Performance scores (jump distances) and data on key kinematic (joint range of motion, ROM) and kinetic (jump impulse) variables were collected. Instructional constraints with an emphasis on an external focus of attention were generally more effective in helping learners improve jump distances. Intra-individual analyses highlighted how enhanced jump distances for successful participants may be concomitant with specific changes in kinematic and kinetic variables: larger joint ROM and a shift towards a comparatively larger horizontal impulse relative to vertical impulse were observed for the more successful participants at post-test. From a constraints-led perspective, the inclusion of instructional constraints encouraging self-adjustment in the control of movements (i.e., image of the achievement) had a beneficial effect on individuals performing the standing broad jump task. However, the advantage of using instructions with an external focus of attention could be task- and individual-specific.
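The kinetic variables mentioned above (horizontal and vertical jump impulses) are typically obtained from force-plate data by integrating the ground reaction force over the push-off phase, net of body weight for the vertical component. The sketch below shows that calculation on synthetic data; the sampling rate, force profiles and body mass are hypothetical, not the study's measurements.

# Impulse = integral of ground reaction force over the push-off phase.
import numpy as np

fs_hz = 1000.0                              # assumed force-plate sampling rate
t = np.arange(0.0, 0.30, 1.0 / fs_hz)       # a 300 ms push-off phase
body_mass_kg = 35.0                         # hypothetical fourth-grade participant
weight_n = body_mass_kg * 9.81

# Synthetic ground reaction forces during push-off (half-sine profiles).
vertical_grf_n = weight_n + 600.0 * np.sin(np.pi * t / t[-1])
horizontal_grf_n = 250.0 * np.sin(np.pi * t / t[-1])

dt = 1.0 / fs_hz
vertical_impulse = float(np.sum(vertical_grf_n - weight_n) * dt)   # N*s, net of body weight
horizontal_impulse = float(np.sum(horizontal_grf_n) * dt)          # N*s

print(f"vertical impulse ~ {vertical_impulse:.1f} N*s, horizontal impulse ~ {horizontal_impulse:.1f} N*s")
print(f"implied take-off velocity change ~ {vertical_impulse / body_mass_kg:.2f} m/s (vert), "
      f"{horizontal_impulse / body_mass_kg:.2f} m/s (horiz)")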