857 results for importance performance analysis
Abstract:
Examining a team's performance from a physical point of view, its momentum may indicate unexpected turning points towards defeat or success. Physicists describe this quantity as requiring some effort to build up, but as relatively easy to sustain once a sufficient level is reached (Reed and Hughes, 2006). Unlike football, rugby, handball and many other sports, a regular volleyball match is limited not by time but by the points that must be accumulated. Every minute, more than one point is won by one team or the other, so a series of successive points widens the gap between the teams, making it increasingly difficult to catch up with the leader. This concept of gathering, or losing, momentum can give coaches, athletes and sports scientists further insights into winning and losing performances. Momentum investigations also address dependencies between performances, asking whether future performances rely on past streaks. Squash shares with volleyball the characteristic of being played up to a certain number of points, and the momentum of squash players was examined by Hughes et al. (2006). The initial aim was to extend normative profiles of elite squash players, using momentum graphs of winners and errors to explore 'turning points' in a performance. Dynamical systems theory has enabled the definition of perturbations in sports that exhibit rhythms (Hughes et al., 2000; McGarry et al., 2002; Murray et al., 2008); how players and teams cause these disruptions of rhythm can reveal the way they play, and these techniques also contribute to profiling methods. Together with the analysis of one's own performance, it is essential to understand the opposition's tactical strengths and weaknesses. By modelling the opposition's performance it is possible to predict certain outcomes and patterns, and therefore to intervene or change tactics before the critical incident occurs. The modelling of competitive sport is an informative analytic technique because it directs the attention of the modeller to the critical aspects of data that delineate successful performance (McGarry & Franks, 1996). By using tactical performance profiles to extract and visualise these critical aspects of performance, players can build justified and sophisticated tactical plans. The area is discussed and reviewed, critically appraising the research completed in this element of Performance Analysis.
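The momentum graphs mentioned above can be approximated by accumulating winners and errors into a running balance. The sketch below is a minimal illustration, assuming invented rally-by-rally data and a simple streak-based rule for turning points; neither is taken from the cited studies.

```python
# Invented rally-by-rally data: +1 for a winner, -1 for an error.
rally_outcomes = [+1, +1, +1, -1, +1, +1, -1, -1, -1, -1, +1, -1, -1]

# Momentum graph: running balance of winners minus errors.
momentum = []
total = 0
for outcome in rally_outcomes:
    total += outcome
    momentum.append(total)

# Crude turning-point rule: a streak of three or more same-signed
# rallies ends. (An assumption; the cited studies identify turning
# points on momentum graphs by inspection.)
turning_points = []
streak = 0
for i in range(1, len(rally_outcomes)):
    if rally_outcomes[i] == rally_outcomes[i - 1]:
        streak += 1
    else:
        if streak >= 2:
            turning_points.append(i)
        streak = 0

print("momentum curve :", momentum)
print("turning points :", turning_points)
```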
Abstract:
A parallel algorithm for image noise removal is proposed. The algorithm is based on the peer group concept and uses a fuzzy metric. An optimization study on the use of the CUDA platform to remove impulsive noise with this algorithm is presented, together with an implementation of the algorithm on multi-core platforms using OpenMP. Performance is evaluated in terms of execution time, comparing the multi-core implementation, the GPU implementation and the combination of both. A performance analysis with large images is conducted in order to determine how many pixels to allocate to the CPU and to the GPU; the observed execution times show that both devices should be given work, with the larger share going to the GPU. The results show that parallel implementations of denoising filters on GPUs and multi-cores are highly advisable, and they open the door to using such algorithms for real-time processing.
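As a rough illustration of the underlying filter, the sketch below gives a sequential (non-parallel) peer-group test with one fuzzy metric commonly used in this literature; the constant K and both thresholds are assumed values, not those of the paper.

```python
import numpy as np

K = 1024  # smoothing constant of the fuzzy metric (an assumed value)

def fuzzy_metric(a, b):
    # Fuzzy similarity of two RGB pixels in (0, 1]; 1.0 means identical.
    # One common form from the peer-group literature is assumed here.
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    return float(np.prod((np.minimum(a, b) + K) / (np.maximum(a, b) + K)))

def peer_group_filter(img, t=0.97, min_peers=2):
    # Declare a pixel impulsive when fewer than `min_peers` of its 8
    # neighbours are t-similar to it, then replace it by the channel-wise
    # median of its neighbourhood. Both thresholds are illustrative.
    out = img.copy()
    h, w, _ = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            centre = img[y, x]
            neigh = [img[y + dy, x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0)]
            peers = [p for p in neigh if fuzzy_metric(centre, p) >= t]
            if len(peers) < min_peers:
                out[y, x] = np.median(np.stack(neigh), axis=0).astype(img.dtype)
    return out
```

The per-pixel independence of this test is what makes the CUDA and OpenMP parallelisations straightforward: each pixel's decision depends only on its own neighbourhood.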
Abstract:
The aim of this study is to determine the types of serve used depending on the moment of the set in which they occur. The study was carried out during the Nestea Spanish Master beach volleyball tournament held in Valencia in 2006. The sample comprises 10 players making up 5 teams, with a total of 4 matches and 331 serves analysed. The video recordings were analysed with the SportCode Pro v.8.5.2 software. Serves were classified according to the moment at which they occurred: phase 1 (points 1 to 7), phase 2 (points 8 to 14) and phase 3 (points 15 to 21). Data analysis was carried out with SPSS v.19. The chi-square test established significant differences between the serve types in phases 1 and 3 (p<.05), but no significant differences among the three serve types in phase 2 (p>.05). The use of the power serve (SP) falls from phase 1 (84.1%) to phase 3 (4.8%), while the float serve (SF) rises from phase 1 (13.5%) to phase 3 (81%). Finally, the jump float serve rises from phase 1 (2.4%) to phase 2 (28%) and falls from the latter to phase 3 (14.3%).
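The kind of chi-square test reported above can be sketched with scipy; the raw counts below are invented to roughly match the reported phase-1 percentages, since the paper gives percentages rather than the underlying contingency table.

```python
from scipy.stats import chisquare

# Observed serve counts in one phase: [power SP, float SF, jump float SF].
# Counts are invented to roughly match the reported phase-1 percentages
# (84.1% / 13.5% / 2.4%).
observed_phase1 = [106, 17, 3]

# Null hypothesis: the three serve types are used equally often.
result = chisquare(observed_phase1)
print(f"chi2 = {result.statistic:.2f}, p = {result.pvalue:.4f}")
# p < .05 -> serve-type use in this phase is not uniform.
```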
Abstract:
A parallel algorithm to remove impulsive noise in digital images using heterogeneous CPU/GPU computing is proposed. The parallel denoising algorithm is based on the peer group concept and uses a Euclidean metric. In order to determine the number of pixels to allocate to the multi-core CPU and to the GPU, a performance analysis using large images is presented, and the multi-core, GPU and combined implementations are compared. Performance has been evaluated in terms of execution time and Megapixels/second. We present several optimization strategies that are especially effective in the multi-core environment and demonstrate significant performance improvements. The main advantage of the proposed noise removal methodology is its computational speed, which enables efficient filtering of color images in real-time applications.
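The CPU/GPU allocation question the abstract raises can be illustrated with a static split proportional to measured device throughput. The throughput figures and the function name below are assumptions for the sketch, not measurements from the paper.

```python
# Throughput figures are invented placeholders; in practice they would
# come from calibration runs on the target machine.
cpu_mpixels_per_s = 35.0    # measured multi-core throughput (assumed)
gpu_mpixels_per_s = 210.0   # measured GPU throughput (assumed)

def split_rows(height: int) -> tuple[int, int]:
    # Allocate image rows proportionally to device throughput so both
    # devices finish at roughly the same time.
    gpu_share = gpu_mpixels_per_s / (cpu_mpixels_per_s + gpu_mpixels_per_s)
    gpu_rows = round(height * gpu_share)
    return height - gpu_rows, gpu_rows

cpu_rows, gpu_rows = split_rows(4320)   # e.g. an image 4320 rows tall
print(f"CPU rows: {cpu_rows}, GPU rows: {gpu_rows}")
```

Allocating work in proportion to throughput is what makes both devices finish together, which is the condition the observed execution times point to.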
Abstract:
Integrated master's thesis, Energy and Environmental Engineering, Universidade de Lisboa, Faculdade de Ciências, 2016
Abstract:
Partial differential equation (PDE) solvers are commonly employed to study and characterize the parameter space for reaction-diffusion (RD) systems while investigating biological pattern formation. Increasingly, biologists wish to perform such studies with arbitrary surfaces representing ‘real’ 3D geometries for better insights. In this paper, we present a highly optimized CUDA-based solver for RD equations on triangulated meshes in 3D. We demonstrate our solver using a chemotactic model that can be used to study snakeskin pigmentation, for example. We employ a finite element based approach to perform explicit Euler time integrations. We compare our approach to a naive GPU implementation and provide an in-depth performance analysis, demonstrating the significant speedup afforded by our optimizations. The optimization strategies that we exploit could be generalized to other mesh based processing applications with PDE simulations.
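A minimal sketch of the time integration the abstract describes is given below, assuming a generic graph Laplacian standing in for the mesh operator and Gray-Scott-style reaction terms in place of the paper's chemotactic model.

```python
import numpy as np

def euler_step(u, v, L, dt, Du, Dv, f, g):
    # One explicit Euler step: per-vertex concentrations u, v; sparse or
    # dense mesh Laplacian L; reaction terms f, g; diffusivities Du, Dv.
    u_new = u + dt * (Du * (L @ u) + f(u, v))
    v_new = v + dt * (Dv * (L @ v) + g(u, v))
    return u_new, v_new

# Gray-Scott-style reaction terms (an assumption, not the paper's
# chemotactic model), with feed rate F and kill rate k.
F, k = 0.037, 0.060
f = lambda u, v: -u * v * v + F * (1.0 - u)
g = lambda u, v: u * v * v - (F + k) * v

# Tiny 4-vertex graph Laplacian standing in for a triangulated mesh.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
L = A - np.diag(A.sum(axis=1))

u, v = np.full(4, 1.0), np.array([0.25, 0.0, 0.0, 0.0])
for _ in range(100):
    u, v = euler_step(u, v, L, dt=0.2, Du=0.16, Dv=0.08, f=f, g=g)
print(np.round(u, 3), np.round(v, 3))
```

Because every vertex update reads only its Laplacian row, each step maps naturally onto one GPU thread per vertex, which is the structure the paper's CUDA optimizations exploit.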
Abstract:
Using current software engineering technology, the robustness required for safety-critical software cannot be assured. However, different approaches are possible which can help to assure software robustness to some extent. To achieve highly reliable software, methods should be adopted which avoid introducing faults (fault avoidance); testing should then be carried out to identify any faults which persist (error removal); and finally, techniques should be used which allow any undetected faults to be tolerated (fault tolerance). Verifying the correctness of the system design specification and analysing the performance of the model are basic issues in concurrent systems. In this context, modelling distributed concurrent software is one of the most important activities in the software life cycle, and communication analysis is a primary consideration in achieving reliability and safety. By and large, fault avoidance requires human analysis, which is error prone; by reducing human involvement in the tedious aspects of modelling and analysing the software, it is hoped that fewer faults will persist into its implementation in the real-time environment. The Occam language supports concurrent programming and is a language in which interprocess interaction takes place through communication; this may lead to deadlock due to communication failure. Systematic methods must be adopted in the design of concurrent software for distributed computing systems if the communication structure is to be free of pathologies such as deadlock. The objective of this thesis is to provide a design environment which ensures that processes are free from deadlock. A software tool was designed and used to facilitate the production of fault-tolerant software for distributed concurrent systems. Where Occam is used as a design language, state-space methods such as Petri nets can be used in analysis and simulation to determine the dynamic behaviour of the software, and to identify structures which may be prone to deadlock so that they may be eliminated from the design before the program is ever run. The design tool consists of two parts. The first takes an input program and translates it into a mathematical model (a Petri net), which is used for modelling and analysis of the concurrent software. The second is a Petri net simulator that takes the translated program as its input and runs a simulation to generate the reachability tree. The tree identifies 'deadlock potential', which the user can explore further. Finally, the tool has been applied to a number of Occam programs; two examples show how it works in the early design phase for fault prevention before the program is ever run.
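The second part of the tool, reachability-tree generation, can be sketched as a breadth-first search over the markings of a place/transition net; markings that enable no transition are the 'deadlock potential' the user would explore. The net encoding below is an illustrative assumption, not the tool's Occam-derived input format.

```python
from collections import deque

# A transition is (consume, produce): token counts per place. This tiny
# bounded net deliberately dead-ends at p3.
transitions = {
    "t1": ({"p1": 1}, {"p2": 1}),
    "t2": ({"p2": 1}, {"p3": 1}),
}
initial_marking = {"p1": 1, "p2": 0, "p3": 0}

def enabled(marking, consume):
    return all(marking.get(p, 0) >= n for p, n in consume.items())

def fire(marking, consume, produce):
    m = dict(marking)
    for p, n in consume.items():
        m[p] -= n
    for p, n in produce.items():
        m[p] = m.get(p, 0) + n
    return m

def reachability(initial):
    # Breadth-first generation of the reachability tree, collecting
    # dead markings (no enabled transition) as deadlock candidates.
    seen, deadlocks = set(), []
    queue = deque([initial])
    while queue:
        m = queue.popleft()
        key = tuple(sorted(m.items()))
        if key in seen:
            continue
        seen.add(key)
        successors = [fire(m, c, pr) for c, pr in transitions.values()
                      if enabled(m, c)]
        if not successors:
            deadlocks.append(m)
        queue.extend(successors)
    return seen, deadlocks

markings, deadlocks = reachability(initial_marking)
print(f"{len(markings)} reachable markings, {len(deadlocks)} dead")
```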
Abstract:
This study was concerned with the structure, functions and development, especially the performance, of some rural small firms associated with the Council for Small Industries in Rural Areas (CoSIRA) of England. Forty firms were used as the main basis of analysis. For some aspects of the investigation, however, data from another 54 firms, obtained indirectly through nine CoSIRA Organisers, were also used. For the performance analysis, the 40 firms were first ranked according to their growth and profitability rates, calculated from their financial data. Each of the variables hypothesised to be related to performance was then tested for its relationship with performance using Spearman's rank correlation technique. The analysis indicated that each of the four factors (the principal, the firm itself, its management, and the environment) had a bearing upon the performance of the firm. Within the first factor, the owner-manager's background and attitudes were found to be most important; in the second, the firm's size, age and scope of activities were also found to be correlated with performance; with respect to the third, firms which practised some form of systematic planning, control and costing performed better than those which did not; and finally, with respect to the fourth factor, some of the services provided by CoSIRA, especially credit finance, were found to facilitate the firms' performance. Another significant facet of the firms highlighted by the study was their multifarious roles: meeting economic, psychological, sociological and political needs, they were considered to be most useful to man and his society. Finally, the study has shed light on the structural characteristics of the sampled firms, including various aspects of their development, orientation and organisation, as well as their various structural strengths and weaknesses.
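The correlation step can be sketched with Spearman's rank correlation as implemented in scipy; the firm data below are invented for illustration.

```python
from scipy.stats import spearmanr

# Invented illustrative data: a performance measure and one hypothesised
# explanatory variable for eight firms.
growth_rate = [12.0, 3.5, 8.1, 15.2, 1.0, 6.7, 9.9, 4.2]   # % per annum
firm_age    = [4,    22,  9,   3,    30,  12,  7,   18]    # years

rho, p_value = spearmanr(growth_rate, firm_age)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A negative rho would suggest that younger firms grew faster.
```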
Towards a web-based progressive handwriting recognition environment for mathematical problem solving
Abstract:
The emergence of pen-based mobile devices such as PDAs and tablet PCs provides a new way to input mathematical expressions to a computer: handwriting, which is much more natural and efficient for entering mathematics. This paper proposes a web-based handwriting mathematics system, called WebMath, for supporting mathematical problem solving. The proposed WebMath system is based on a client-server architecture. It comprises four major components: a standard web server, a handwriting mathematical expression editor, a computation engine and a web browser with an Ajax-based communicator. The handwriting mathematical expression editor adopts a progressive recognition approach for dynamic recognition of handwritten mathematical expressions. The computation engine supports mathematical functions such as algebraic simplification and factorization, and integration and differentiation. The web browser provides a user-friendly interface for accessing the system using advanced Ajax-based communication. In this paper, we describe the different components of the WebMath system and present its performance analysis.
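The computation-engine operations listed above can be illustrated with sympy standing in for WebMath's actual engine (an assumption; the paper does not name its engine).

```python
import sympy as sp

x = sp.symbols("x")

print(sp.simplify((x**2 - 1) / (x - 1)))   # algebraic simplification: x + 1
print(sp.factor(x**2 + 5*x + 6))           # factorization: (x + 2)*(x + 3)
print(sp.integrate(sp.sin(x), x))          # integration: -cos(x)
print(sp.diff(x**3, x))                    # differentiation: 3*x**2
```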
Abstract:
We present what is to our knowledge the first comprehensive investigation of the use of blazed fiber Bragg gratings (BFBGs) to interrogate wavelength division multiplexed (WDM) in-fiber optical sensor arrays. We show that the light outcoupled from the core of these BFBGs is radiated with sufficient optical power that it may be detected with a low-cost charge-coupled device (CCD) array. We present a thorough system performance analysis that shows sufficient spectral-spatial resolution to decode sensors with a WDM separation of 75 pm, a signal-to-noise ratio greater than 45 dB, a bandwidth of 70 nm, and a drift of only 0.1 pm. We show the system to be insensitive to polarization state, making the BFBG-CCD spectral analysis technique a practical, extremely low-cost alternative to traditional tunable filter approaches.
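Resolving sub-picometre drift on a CCD requires localising each sensor's spectral peak to a small fraction of a pixel. The sketch below shows one standard approach, an intensity-weighted centroid on synthetic data; the Gaussian peak shape, noise level and window size are assumptions, not the paper's method.

```python
import numpy as np

# Synthetic CCD line-array spectrum: a Gaussian peak (assumed shape)
# plus read noise.
pixels = np.arange(512)
true_centre = 255.3
spectrum = np.exp(-0.5 * ((pixels - true_centre) / 4.0) ** 2)
spectrum += np.random.default_rng(0).normal(0.0, 0.01, 512)

# Intensity-weighted centroid over a window around the brightest pixel.
peak = int(np.argmax(spectrum))
window = slice(peak - 8, peak + 9)
weights = np.clip(spectrum[window], 0.0, None)
centroid = float(np.sum(pixels[window] * weights) / np.sum(weights))
print(f"estimated centre: {centroid:.2f} px (true: {true_centre} px)")
```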
Abstract:
All-optical signal processing is a powerful tool for the processing of communication signals, and optical network applications have been routinely considered since the inception of optical communication. Many successful optical devices are deployed in today's communication networks, including optical amplifiers, dispersion compensation, optical cross-connects and reconfigurable add-drop multiplexers. However, despite record-breaking performance, all-optical signal processing devices have struggled to find a viable market niche. This has been mainly due to competition from electro-optic alternatives, whether on detailed performance analysis or, more usually, because of the limited market opportunity for a mid-link device. For example, a wavelength converter would compete with a reconfigured transponder, which has an additional market as an actual transponder and can therefore be developed significantly more economically. Nevertheless, the potential performance of all-optical devices is enticing. Motivated by their prospects of eventual deployment, in this chapter we analyse the performance and energy consumption of digital coherent transponders, linear coherent repeaters and modulator-based pulse shaping/frequency conversion, setting a benchmark for the proposed all-optical implementations.
Abstract:
Performance analysis has become a vital part of management practices in the banking industry. There are numerous applications using DEA models to estimate efficiency in banking, and most of them assume that inputs and outputs are known with absolute precision. Here, we propose new Fuzzy-DEA α-level models to assess the underlying uncertainty. Further, bootstrap truncated regressions with fixed factors are used to measure the impact of each model on the efficiency scores and to identify the most relevant contextual variables for efficiency. The proposed models are demonstrated using an application to Mozambican banks. Findings reveal that fuzziness is predominant over randomness in interpreting the results; in addition, decision-makers can use fuzziness to identify missing variables that aid interpretation. Price of labor, price of capital, and market share were found to be significant factors in measuring bank efficiency. Managerial implications are addressed.
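The α-level models build on the crisp DEA model; as a reference point, the sketch below solves the standard input-oriented CCR DEA linear program with invented bank data (at each α level, the paper's fuzzy models solve interval versions of LPs of this shape).

```python
import numpy as np
from scipy.optimize import linprog

# Invented data: two inputs and one output for four banks.
X = np.array([[20.0, 30.0, 40.0, 20.0],       # input 1, e.g. labour
              [300.0, 200.0, 500.0, 250.0]])  # input 2, e.g. capital
Y = np.array([[100.0, 80.0, 180.0, 70.0]])    # output, e.g. loans

m, n = X.shape
s = Y.shape[0]

def ccr_efficiency(o):
    # Envelopment form for bank o: minimise theta subject to
    # X @ lam <= theta * x_o  and  Y @ lam >= y_o,  lam >= 0.
    c = np.zeros(1 + n)
    c[0] = 1.0                                   # minimise theta
    A_in = np.hstack([-X[:, [o]], X])            # X@lam - theta*x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])    # -Y@lam <= -y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[:, o]]),
                  bounds=[(0, None)] * (1 + n))
    return res.fun

for o in range(n):
    print(f"bank {o}: efficiency = {ccr_efficiency(o):.3f}")
```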
Abstract:
Collaborative information sharing is an increasingly necessary technique for achieving complex goals in today's fast-paced, technology-dominated world. Personal Health Record (PHR) systems have become a popular research area for sharing patient information quickly among health professionals. PHR systems store and process sensitive information, so they should have proper security mechanisms to protect patients' private data. Thus, the access control mechanisms of a PHR should be well defined, and PHRs should be stored in encrypted form. Cryptographic schemes that enforce access policies based on user attributes offer a suitable solution for this purpose. Attribute-based encryption can resolve these problems; we propose a patient-centric framework that protects PHRs against untrusted service providers and malicious users. In this framework, we use the Ciphertext-Policy Attribute-Based Encryption (CP-ABE) scheme as an efficient cryptographic technique, enhancing the security and privacy of the system as well as enabling access revocation. Patients can encrypt their PHRs and store them on untrusted storage servers. They also maintain full control over access to their PHR data by assigning attribute-based access control to selected data users and revoking unauthorized users instantly. In order to evaluate our system, we implemented a CP-ABE library and web services as part of our framework. We also developed an Android application based on the framework that allows users to register with the system, encrypt their PHR data and upload it to the server, while authorized users can download and decrypt PHR data. Finally, we present experimental results and a performance analysis, which show that deploying the proposed system would be practical.
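A hedged sketch of the setup/keygen/encrypt/decrypt flow follows, using the open-source charm-crypto library's BSW07 CP-ABE scheme as a stand-in for the paper's own implementation; the library choice, the policy string and the attribute names are all assumptions.

```python
# Assumes the charm-crypto library; not the paper's implementation.
from charm.toolbox.pairinggroup import PairingGroup, GT
from charm.schemes.abenc.abenc_bsw07 import CPabe_BSW07

group = PairingGroup('SS512')
cpabe = CPabe_BSW07(group)

pk, mk = cpabe.setup()                      # trusted-authority setup

# The patient encrypts under an attribute policy; CP-ABE encrypts a
# group element, which in a full system would wrap the symmetric key
# protecting the actual PHR payload (the usual hybrid construction).
policy = '(DOCTOR and CARDIOLOGY)'
session_key = group.random(GT)
ciphertext = cpabe.encrypt(pk, session_key, policy)

# An authorised cardiologist holds a key with the matching attributes.
doctor_key = cpabe.keygen(pk, mk, ['DOCTOR', 'CARDIOLOGY'])
recovered = cpabe.decrypt(pk, doctor_key, ciphertext)
assert recovered == session_key
```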
Abstract:
This paper is on the use and performance of M-path polyphase Infinite Impulse Response (IIR) filters for channelisation, where conventionally Finite Impulse Response (FIR) filters are preferred. It focuses specifically on Discrete Fourier Transform (DFT) modulated filter banks, which are known to be an efficient choice for channelisation in communication systems. The low-pass prototype filter for the DFT filter bank is implemented as an M-path polyphase IIR filter, and we show that the spikes present in the stopband can be avoided by making use of the guardbands between narrowband channels. It is shown that channelisation performance is not affected when polyphase IIR filters are employed instead of their counterparts derived from FIR prototype filters. A detailed complexity and performance analysis of the proposed approach is given in this article.
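A minimal sketch of an M-channel DFT-modulated analysis bank in polyphase form is given below. For simplicity the branches come from an FIR prototype (via scipy's firwin) and the commutator indexing is simplified; in the paper each branch would instead be a low-order polyphase IIR section.

```python
import numpy as np
from scipy.signal import firwin

M = 8                               # number of channels
h = firwin(8 * M, 1.0 / M)          # low-pass prototype (assumed design)
E = h.reshape(-1, M).T              # row k holds polyphase component h[k::M]

def analysis_bank(x):
    # Split x into M channel signals, each decimated by M. The commutator
    # indexing is simplified; for a real input the per-channel energies
    # are unaffected by this convention.
    x = x[: len(x) // M * M].reshape(-1, M).T          # row k: x[nM + k]
    branches = np.stack([np.convolve(x[k], E[k])[: x.shape[1]]
                         for k in range(M)])
    return np.fft.ifft(branches, axis=0)               # DFT modulation

x = np.cos(2 * np.pi * 0.19 * np.arange(4096))         # tone between channels
channels = analysis_bank(x)
print(np.round(np.mean(np.abs(channels), axis=1), 3))  # energy per channel
```

The efficiency claim rests on this structure: one prototype filter run at the decimated rate per branch plus a single M-point (I)FFT per output sample, regardless of whether the branches are FIR or IIR.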
Abstract:
The multiuser selection scheduling concept has recently been proposed in the literature in order to increase the multiuser diversity gain and overcome the significant feedback requirements of opportunistic scheduling schemes. The main idea is that reducing the feedback overhead saves per-user power that could instead be added to the data transmission. In this work, the authors propose to integrate the principle of multiuser selection with the proportional fair scheduling scheme, aimed especially at power-limited, multi-device systems in non-identically distributed fading channels. For the performance analysis, they derive closed-form expressions for the outage probabilities and the average system rate of delay-sensitive and delay-tolerant systems, respectively, and compare them with full feedback multiuser diversity schemes. The discrete rate region is presented analytically, where the maximum average system rate can be obtained by properly choosing the number of partial devices. The number of partial devices and the per-device power saving are jointly optimised in order to maximise the average system rate under the power requirement. The results finally demonstrate that the proposed scheme, which leverages the saved feedback power for data transmission, can outperform full feedback multiuser diversity in non-identical Rayleigh fading of the devices' channels.
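For the non-identical Rayleigh setting, the outage probability of plain max-SNR selection (the full-feedback baseline) has a simple closed form that Monte Carlo simulation can confirm; the sketch below uses invented average SNRs, and the paper's partial-feedback scheme modifies this baseline.

```python
import numpy as np

gamma_th = 1.0                                # outage SNR threshold
gamma_bar = np.array([0.8, 1.5, 2.0, 3.1])    # per-device mean SNRs (invented)

# Closed form: outage occurs only if every device is below threshold,
# P_out = prod_k (1 - exp(-gamma_th / gamma_bar_k)).
p_closed = np.prod(1.0 - np.exp(-gamma_th / gamma_bar))

# Monte Carlo check: exponential SNRs (Rayleigh fading), best device wins.
rng = np.random.default_rng(1)
snr = rng.exponential(gamma_bar, size=(200_000, gamma_bar.size))
p_mc = np.mean(snr.max(axis=1) < gamma_th)

print(f"closed form: {p_closed:.4f}, Monte Carlo: {p_mc:.4f}")
```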