Resumo:
Swimming is a sport well suited to any age group; when practiced properly, the individual stimuli it provides assist human development. Aquatic stimulation, however, is not easily obtained within physical education in Brazil. Development corresponds to changes that occur throughout life, but it depends on the stimuli provided, and several studies have shown positive effects of swimming on children. This study aimed to verify whether children who combine swimming with physical education classes show better motor development than those who do not practice the sport. Data were collected from two groups of 10 students between 9 and 11 years of age: one group of non-swimmers, and another formed by students who complement their regular physical education classes with swimming. The motor battery proposed by Rosa Neto (2002) was administered, adapted so that each child performed the tasks corresponding to his or her chronological age (CA) and could only attempt the task of a later age after succeeding in the task originally proposed. After all tests were applied, the mean and standard deviation of the motor age (MA) were obtained, the General Motor Quotient (GMQ) was calculated to classify each child's motor development, and a t test with p < 0.05 was applied to assess the significance of the results. The swimmers performed better on the agility tests, which produced a larger difference between CA and MA than in the non-practicing group. They also had a higher mean GMQ, although not high enough to place the group in a higher development classification; in isolated cases, however, two practicing children did reach a higher level of development. The t test showed that the swimmers differed significantly in balance ability. We conclude that the swimming group obtained positive overall...
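The statistics described in the abstract can be sketched in a few lines. The scores below are invented for illustration (the abstract reports no raw data), and the quotient formula shown (motor age over chronological age, times 100) is the form commonly given for Rosa Neto's General Motor Quotient; this is a sketch, not the study's actual analysis code:

```python
from statistics import mean, stdev

def qmg(motor_age_months, chrono_age_months):
    """General Motor Quotient: motor age over chronological age, times 100."""
    return 100.0 * motor_age_months / chrono_age_months

def pooled_t(a, b):
    """Two-sample Student t statistic with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# Hypothetical motor ages in months, 10 children per group as in the study design.
swimmers     = [132, 126, 138, 130, 128, 134, 136, 127, 131, 133]
non_swimmers = [120, 118, 126, 122, 119, 124, 121, 117, 123, 125]

print(mean(swimmers), stdev(swimmers))      # group mean and standard deviation
print(pooled_t(swimmers, non_swimmers))     # positive t favors the swimmers
print(qmg(132, 120))                        # → 110.0, above-average development
```

The resulting t statistic would then be compared against the critical value for 18 degrees of freedom at p < 0.05.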
Resumo:
Abstract Background Recent medical and biological technology advances have stimulated the development of new testing systems that have been providing huge, varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for information processing systems in research centers. Additionally, the routines of a genomics laboratory are typically characterized by high parallelism in testing and constant procedure changes. Results This paper describes a formal approach to address this challenge through the implementation of a genetic testing management system applied to a human genome laboratory. We introduce the Human Genome Research Center Information System (CEGH) in Brazil, a system that is able to support constant changes in human genome testing and can provide patients with updated results based on the most recent and validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control flow specifications based on process algebra (ACP). The main difference between our approach and related works is that we were able to join two important aspects: 1) process scalability achieved through relational database implementation, and 2) correctness of processes using process algebra. Furthermore, the software allows end users to define genetic testing without requiring any knowledge of business process notation or process algebra. Conclusions This paper presents the CEGH information system, a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We have demonstrated the feasibility and shown the usability benefits of a rigorous approach that is able to specify, validate, and perform genetic testing using simple end-user interfaces.
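The pairing of a process repository with algebraic control-flow specifications can be illustrated with a toy sketch of two ACP operators, sequential and alternative composition, modeled here as operations on sets of traces. The action names are hypothetical and not taken from the CEGH system:

```python
# Toy ACP-style process terms: an atomic action, sequential composition (seq),
# and alternative composition (alt), each represented as a list of traces.

def atom(a):
    return [[a]]                             # one trace: the action itself

def seq(p, q):
    return [x + y for x in p for y in q]     # every trace of p followed by every trace of q

def alt(p, q):
    return p + q                             # either a trace of p or a trace of q

# "extract DNA, then either sequence or genotype, then report the result"
test = seq(atom("extract"),
           seq(alt(atom("sequence"), atom("genotype")),
               atom("report")))
print(test)  # → [['extract', 'sequence', 'report'], ['extract', 'genotype', 'report']]
```

Enumerating traces like this is one simple way to check that a workflow definition admits only intended execution orders; a real ACP treatment would work with the algebra's axioms rather than explicit trace sets.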
Resumo:
In many countries buildings are responsible for a substantial part of energy consumption, and it varies according to their energetic and environmental performance. The potential for major reductions in building consumption has been well documented in Brazil. Opportunities have been identified throughout the life cycle of buildings, including projects replicated in diverse locations without the proper adjustments. This article offers a reflection on project processes and how their understanding can be conducted in an integrated way, favoring the use of natural resources and lowering energy consumption. It concludes by indicating that the longest phase in the life cycle of a building is also the phase responsible for its largest energy consumption, not only because of its duration but also because of the interaction with the end user. Therefore, in order to harvest the energy cost reduction potential of future buildings, designers need a holistic view of the surroundings, end users, materials and methodologies.
Resumo:
The purpose of this thesis is to enhance the functionalities of GAFFE, a flexible, interactive and user-friendly application for editing metadata in office documents by supporting different ontologies stored inside and outside of the digital document, by adding new views and forms and by improving its ease of use.
Resumo:
After analyzing conflict, its functions and the ways it can be managed, the author first examines the various types of mediation and then focuses on civil and commercial mediation, highlighting the data available since the entry into force, at the end of 2013, of the mandatory mediation attempt as a condition of admissibility of judicial claims in civil matters.
Resumo:
Topological constraints influence the properties of polymers. In this work, computer simulations are used to investigate in detail the extent to which the static properties of collapsed polymer rings, polymer rings in concentrated solutions, and brushes built from polymer rings differ between systems with and without topological constraints. Furthermore, the influence of geometric confinement on the topological properties of individual polymer chains is analyzed. The first part of the work concerns the influence of topology on the properties of single polymer chains in various situations. Since the efficient Monte Carlo simulation of collapsed polymer chains is a particular challenge, three bridging Monte Carlo moves are first transferred from lattice to continuum models. A measurement of the efficiency of these moves yields a speed-up factor of up to 100 compared with the conventional slithering-snake algorithm. This is followed by the analysis of a single coarse-grained polystyrene chain in spherical confinement with respect to entanglements and knots. It is shown that significant knotting of the polystyrene chain only occurs once the radius of the surrounding capsid is smaller than the gyration radius of the chain. Furthermore, both Monte Carlo and molecular dynamics simulations of very large rings, with up to one million monomers, are performed in the collapsed state. While the configurations from the Monte Carlo simulations are very strongly knotted due to the use of the bridging moves, the configurations from the molecular dynamics simulations remain unknotted. Significant differences appear in both the local and the global structure of the ring polymers.

In the second part of the work, the scaling behavior of the gyration radius of individual polymer rings in a concentrated solution of fully flexible polymer rings in the continuum is investigated. The onset of the asymptotic scaling regime, which is consistent with the "fractal globule" model, is reached. In the concluding third part of this work, the behavior of brushes of linear polymers is compared with that of ring-polymer brushes. It turns out that the structure and scaling behavior of the two systems, at identical density profiles parallel to the substrate, deviate clearly from each other, although the properties of both systems agree in the direction perpendicular to the substrate. A comparison of the relaxation behavior of individual chains in conventional polymer brushes and in ring brushes reveals no major differences. However, it also turns out that the explanations used so far for the relaxation behavior of conventional brushes are insufficient, since they only account for the initial decay of the correlation function. An investigation of the dynamics of individual monomers in a conventional brush of open chains, from the substrate to the free end, shows that the monomers in the middle of the chain exhibit the slowest relaxation, although their mean displacement is clearly smaller than that of the free end monomers.
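For readers unfamiliar with the baseline algorithm mentioned in this abstract, a minimal sketch of a slithering-snake (reptation) move for a self-avoiding chain follows. It uses a 2D lattice for brevity, whereas the thesis works with continuum models and additional bridging moves, which are not reproduced here:

```python
import random

def slither(chain):
    """Attempt one reptation move: pop the tail monomer, grow a new head.

    chain: list of (x, y) lattice sites, head first. Returns the new chain,
    or the old one unchanged if the move would cause an overlap (rejected)."""
    hx, hy = chain[0]
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    new_head = (hx + dx, hy + dy)
    body = chain[:-1]                  # tail monomer is removed
    if new_head in body:               # self-avoidance check
        return chain                   # reject: keep the old configuration
    return [new_head] + body           # accept: chain has crawled one step

random.seed(0)
chain = [(i, 0) for i in range(10)]    # straight initial chain of 10 monomers
for _ in range(1000):
    chain = slither(chain)
assert len(chain) == 10 and len(set(chain)) == 10   # length and self-avoidance preserved
```

Because each move displaces only one monomer's worth of material, relaxation of dense or collapsed chains is slow, which is the motivation for the bridging moves and their reported factor-100 speed-up.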
Resumo:
The increasing usage of wireless networks creates new challenges for wireless access providers. On the one hand, providers want to satisfy the user demands but on the other hand, they try to reduce the operational costs by decreasing the energy consumption. In this paper, we evaluate the trade-off between energy efficiency and quality of experience for a wireless mesh testbed. The results show that by intelligent service control, resources can be better utilized and energy can be saved by reducing the number of active network components. However, care has to be taken because the channel bandwidth varies in wireless networks. In the second part of the paper, we analyze the trade-off between energy efficiency and quality of experience at the end user. The results reveal that a provider's service control measures do not only reduce the operational costs of the network but also bring a second benefit: they help maximize the battery lifetime of the end-user device.
Resumo:
Atmospheric turbulence near the ground severely limits the quality of imagery acquired over long horizontal paths. In defense, surveillance, and border security applications, there is interest in deploying man-portable, embedded systems incorporating image reconstruction methods to compensate for turbulence effects. While many image reconstruction methods have been proposed, their suitability for use in man-portable embedded systems is uncertain. To be effective, these systems must operate over significant variations in turbulence conditions while subject to other variations due to operation by novice users. Systems that meet these requirements, and are otherwise designed to be immune to the factors that cause variation in performance, are considered robust. In addition to robustness in design, the portable nature of these systems implies a preference for systems with a minimum level of computational complexity. Speckle imaging methods have recently been proposed as well suited for use in man-portable horizontal imagers. In this work, the robustness of speckle imaging methods is established by identifying a subset of design parameters that provide immunity to the expected variations in operating conditions while minimizing the computation time necessary for image recovery. Design parameters are selected by parametric evaluation of system performance as factors external to the system are varied. The precise control necessary for such an evaluation is made possible using image sets of turbulence-degraded imagery developed using a novel technique for simulating anisoplanatic image formation over long horizontal paths. System performance is statistically evaluated over multiple reconstructions using the Mean Squared Error (MSE) to assess reconstruction quality. In addition to the more general design parameters, the relative performance of the bispectrum and Knox-Thompson phase recovery methods is also compared.
As an outcome of this work it can be concluded that speckle-imaging techniques are robust to the variations in turbulence conditions and user-controlled parameters expected when operating during the day over long horizontal paths. Speckle imaging systems that incorporate 15 or more image frames and 4 estimates of the object phase per reconstruction provide up to a 45% reduction in MSE and a 68% reduction in its deviation. In addition, the Knox-Thompson phase recovery method is shown to produce images in half the time required by the bispectrum. The quality of images reconstructed using the Knox-Thompson and bispectrum methods is also found to be nearly identical. Finally, it is shown that certain blind image quality metrics can be used in place of the MSE to evaluate quality in field scenarios. Using blind metrics rather than depending on user estimates allows for reconstruction quality that differs from the minimum MSE by as little as 1%, significantly reducing the deviation in performance due to user action.
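As a minimal illustration of the quality metric used throughout this evaluation, the MSE between a reconstruction and a reference frame is the mean of squared pixel differences. The tiny 2x2 "images" below are invented, and real evaluations would operate on full frames with array libraries:

```python
def mse(reference, reconstruction):
    """Pixel-wise mean squared error between two equal-sized images (lists of rows)."""
    flat_r = [p for row in reference for p in row]
    flat_x = [p for row in reconstruction for p in row]
    return sum((a - b) ** 2 for a, b in zip(flat_r, flat_x)) / len(flat_r)

truth = [[0, 10], [20, 30]]
recon = [[1,  9], [22, 30]]
print(mse(truth, recon))  # → 1.5
```

Lower values indicate a reconstruction closer to the reference; blind (no-reference) metrics, as discussed above, drop the need for `truth` entirely.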
Resumo:
Effective techniques for organizing and visualizing large image collections are in growing demand as visual search becomes increasingly popular. iMap is a treemap representation for visualizing and navigating image search and clustering results based on the evaluation of image similarity using both visual and textual information. iMap not only makes effective use of the available display area to arrange images but also maintains stable updates when images are inserted or removed during the query. A key challenge of using iMap lies in the difficulty of following and tracking the changes when the image arrangement is updated as the query image changes. For many information visualization applications, showing the transition when interacting with the data is critically important, as it can help users better perceive the changes and understand the underlying data. This work investigates the effectiveness of animated transitions in a tiled image layout where the spiral arrangement of the images is based on their similarity. Three aspects of animated transition are considered: animation steps, animation actions, and flying paths. Exploring and weighing the advantages and disadvantages of different methods for each aspect, in conjunction with the characteristics of the spiral image layout, we present an integrated solution, called AniMap, for animating the transition from an old layout to a new layout when a different image is selected as the query image. To smooth the animation and reduce the overlap among images during the transition, we explore different factors that might have an impact on the animation and propose our solution accordingly. We show the effectiveness of our animated transition solution by presenting experimental results and conducting a comparative user study.
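A similarity-driven spiral arrangement of this kind can be sketched simply: items ranked by similarity to the query are placed along a square spiral from the centre outward, so the most similar images sit nearest the query. The similarity scores below are hypothetical, and the actual iMap/AniMap layout logic is not reproduced here:

```python
def spiral_positions(n):
    """Yield n grid coordinates walking a square spiral out from (0, 0)."""
    x = y = 0
    dx, dy = 1, 0
    step = 1
    out = [(0, 0)]
    while len(out) < n:
        for _ in range(2):                 # two legs per side length
            for _ in range(step):
                x, y = x + dx, y + dy
                out.append((x, y))
                if len(out) == n:
                    return out
            dx, dy = -dy, dx               # 90-degree turn
        step += 1
    return out

# Hypothetical similarity of three images to the current query, highest first.
ranked = sorted({"a": 0.91, "b": 0.42, "c": 0.77}.items(), key=lambda kv: -kv[1])
layout = dict(zip((name for name, _ in ranked), spiral_positions(3)))
print(layout)  # most similar image "a" lands at the centre (0, 0)
```

Animating a transition then amounts to interpolating each image between its cell in the old layout and its cell in the layout recomputed for the new query.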
Resumo:
Interactive TV technology has been addressed in many previous works, but there is sparse research on the topic of interactive content broadcasting and how to support the production process. In this article, the interactive broadcasting process is broadly defined to include studio technology and digital TV applications at consumer set-top boxes. In particular, augmented reality studio technology employs smart projectors as light sources and blends real scenes with interactive computer graphics that are controlled at end-user terminals. Moreover, TV-producer-friendly multimedia authoring tools empower the development of novel TV formats. Finally, support for user-contributed content raises the potential to revolutionize the hierarchical TV production process by introducing the viewer as part of the content delivery chain.
Resumo:
Television and movie images have been altered ever since it became technically possible. Nowadays, embedding advertisements or incorporating text and graphics in TV scenes is common practice, but these elements cannot be considered an integrated part of the scene. The introduction of new services for interactive augmented television is discussed in this paper. We analyse the main aspects related to the whole chain of augmented reality production. Interactivity is one of the most important added values of digital television: this paper aims to break the model in which all TV viewers receive the same final image. Thus, we introduce and discuss the new concept of interactive augmented television, i.e. the real-time composition of video and computer graphics - e.g. a real scene and freely selectable images or spatially rendered objects - edited and customized by the end user within the context of the user's set-top box and TV receiver.
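At the pixel level, composing rendered graphics over a video frame comes down to the standard alpha "over" operator. A minimal sketch with invented pixel values follows; a real set-top box would of course apply this in hardware across full frames:

```python
def over(fg, bg, alpha):
    """Composite a foreground pixel over a background pixel.

    fg, bg: (r, g, b) tuples in 0..255; alpha: foreground opacity in 0..1."""
    return tuple(round(alpha * f + (1 - alpha) * b) for f, b in zip(fg, bg))

graphic_px = (255, 0, 0)      # red pixel from the rendered overlay object
video_px   = (0, 0, 255)      # blue pixel from the broadcast video frame
print(over(graphic_px, video_px, 0.5))  # → (128, 0, 128)
```

Letting the viewer choose which objects are rendered, and with what opacity, is precisely what breaks the one-image-for-all broadcast model described above.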
Resumo:
The paper discusses new business models of transmission of television programs in the context of definitions of broadcasting and retransmission. Typically the whole process of supplying content to the end user has two stages: a media service provider supplies a signal assigned to a given TV channel to the cable operators and satellite DTH platform operators (dedicated transmission), and cable operators and satellite DTH platform operators transmit this signal to end users. In each stage the signals are encoded and are not available for the general public without the intervention of cable/platform operators. The services relating to the supply and transmission of the content are operated by different business entities: each earns money separately and each uses the content protected by copyright. We should determine how to define the actions of the entity supplying the signal with the TV program directly to the cable/digital platform operator and the actions of the entity providing the end user with the signal. The author criticizes the approach presented in the Chellomedia and Norma rulings, arguing that they lead to a significant level of legal uncertainty, and poses the basic questions concerning the notion of “public” in copyright.
Resumo:
The competitive industrial context compels companies to speed up every new product design. In order to keep designing products that meet the needs of the end user, a human-centered concurrent product design methodology has been proposed. Its adoption is complicated by the difficulties of collaboration between the experts involved in the design process. In order to ease this collaboration, we propose the use of virtual reality as an intermediate design representation, in the form of light, specialized immersive convergence-support applications. In this paper, we present the As Soon As Possible (ASAP) methodology, which makes the development of these tools possible while ensuring their usefulness and usability. The relevance of this approach is validated by an industrial use case through the design of an ergonomic-style convergence-support tool.
Resumo:
The increasing practice of offshore outsourcing of software maintenance has posed the challenge of effectively transferring knowledge to the individual software engineers of the vendor. In this theoretical paper, we discuss the implications of two learning theories, the model of work-based learning (MWBL) and cognitive load theory (CLT), for knowledge transfer during the transition phase. Taken together, the theories suggest that learning mechanisms need to be aligned with the type of knowledge (tacit versus explicit), task characteristics (complexity and recurrence), and the recipients' expertise. The MWBL proposes that learning mechanisms need to include conceptual and practical activities based on the relative importance of explicit and tacit knowledge. CLT explains how effective portfolios of learning mechanisms change over time. While job shadowing, completion tasks, and supportive information may prevail at the outset of transition, they may be replaced by work on conventional tasks towards the end of transition.
Resumo:
We re-analyze the signal of non-planetary energetic neutral atoms (ENAs) in the 0.4-5.0 keV range measured with the Neutral Particle Detector (NPD) of the ASPERA-3 and ASPERA-4 experiments on board the Mars and Venus Express satellites. Due to improved knowledge of sensor characteristics and exclusion of data sets affected by instrument effects, the typical intensity of the ENA signal obtained by ASPERA-3 is an order of magnitude lower than in earlier reports. The ENA intensities measured with ASPERA-3 and ASPERA-4 now agree with each other. In the present analysis, we also correct the ENA signal for Compton-Getting and for ionization loss processes under the assumption of a heliospheric origin. We find spectral shapes and intensities consistent with those measured by the Interstellar Boundary Explorer (IBEX). The principal advantage of ASPERA with respect to the IBEX sensors is the two times better spectral resolution. In this study, we discuss the physical significance of the spectral shapes and their potential variation across the sky. At present, these observations are the only independent test of the heliospheric ENA signal measured with IBEX in this energy range. The ASPERA measurements also allow us to check for a temporal variation of the heliospheric signal as they were obtained between 2003 and 2007, whereas IBEX has been operational since the end of 2008.