15 results for Virtual Reality, Cloud Gaming, Cloud Computing, Client-Server, Android, Unity, Multiutenza

at the Universidade Federal do Rio Grande do Norte (UFRN)


Relevance:

100.00%

Publisher:

Abstract:

Cloud Computing is a paradigm that enables access, in a simple and pervasive way, through the network, to shared and configurable computing resources. Such resources can be offered to users on demand in a pay-per-use model. With the advance of this paradigm, a single service offered by a cloud platform might not be enough to meet all the requirements of clients. Hence, it becomes necessary to compose services provided by different cloud platforms. However, current cloud platforms are not implemented using common standards; each one has its own APIs and development tools, which is a barrier to composing different services. In this context, Cloud Integrator, a service-oriented middleware platform, provides an environment to facilitate the development and execution of multi-cloud applications. The applications are compositions of services from different cloud platforms, represented by abstract workflows. However, Cloud Integrator has some limitations, such as: (i) applications are executed locally; (ii) users cannot specify the application in terms of its inputs and outputs; and (iii) experienced users cannot directly determine the concrete Web services that will perform the workflow. In order to deal with such limitations, this work proposes Cloud Stratus, a middleware platform that extends Cloud Integrator and offers different ways to specify an application: as an abstract workflow or as a complete/partial execution flow. The platform enables application deployment in cloud virtual machines, so that several users can access it through the Internet. It also supports the access and management of virtual machines in different cloud platforms, and provides service monitoring mechanisms and assessment of QoS parameters. Cloud Stratus was validated through a case study consisting of an application that uses different services provided by different cloud platforms. It was also evaluated through computational experiments that analyze the performance of its processes.
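The difference between an abstract workflow and a complete/partial execution flow can be sketched in a few lines (all activity and service names below are hypothetical illustrations, not Cloud Stratus's actual API):

```python
# Hypothetical sketch: an abstract workflow names semantic activities only;
# an execution flow pins some (or all) activities to concrete services.

ABSTRACT_WORKFLOW = ["StoreFile", "ConvertVideo", "NotifyUser"]

# Candidate concrete services per abstract activity (illustrative names).
CATALOG = {
    "StoreFile":    ["S3.put", "AzureBlob.put"],
    "ConvertVideo": ["Elastic.transcode", "MediaSvc.convert"],
    "NotifyUser":   ["SES.send", "SendGrid.send"],
}

def resolve(workflow, pinned=None):
    """Turn an abstract workflow into an execution flow.

    `pinned` lets an experienced user fix concrete services for some
    activities (yielding a partial execution flow); the remaining
    activities are resolved to the first available candidate.
    """
    pinned = pinned or {}
    return [pinned.get(activity, CATALOG[activity][0])
            for activity in workflow]

full_flow = resolve(ABSTRACT_WORKFLOW)  # fully automatic resolution
partial_flow = resolve(ABSTRACT_WORKFLOW,
                       pinned={"ConvertVideo": "MediaSvc.convert"})
```

In this toy, a plain user supplies only the abstract workflow, while an experienced user constrains part of it via `pinned`, mirroring the specification options the platform offers.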

Relevance:

100.00%

Publisher:

Abstract:

Cloud computing can be defined as a distributed computing model through which resources (hardware, storage, development platforms and communication) are shared as paid services, accessible with minimal management effort and interaction. A great benefit of this model is enabling the use of several providers (e.g., a multi-cloud architecture) to compose a set of services in order to obtain an optimal configuration for performance and cost. However, multi-cloud use is hindered by the problem of cloud lock-in: the dependency between an application and a cloud platform. It is commonly addressed by three strategies: (i) use of an intermediation layer between the consumers of cloud services and the provider; (ii) use of standardized interfaces to access the cloud; or (iii) use of models with open specifications. This work outlines an approach to evaluate these strategies. The evaluation was performed, and it was found that, despite the advances made by these strategies, none of them actually solves the cloud lock-in problem. In this sense, this work proposes the use of Semantic Web technologies to avoid cloud lock-in, where RDF models are used to specify the features of a cloud, which are managed through SPARQL queries. In this direction, this work: (i) presents an evaluation model that quantifies the cloud lock-in problem; (ii) evaluates cloud lock-in across three multi-cloud solutions and three cloud platforms; (iii) proposes the use of RDF and SPARQL for the management of cloud resources; (iv) presents the Cloud Query Manager (CQM), a SPARQL server that implements the proposal; and (v) compares three multi-cloud solutions against CQM with respect to response time and effectiveness in resolving cloud lock-in.
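The core idea of describing cloud features as RDF triples and querying them can be sketched in plain Python (a toy triple store standing in for the RDF model and SPARQL engine; the resource names are illustrative, not CQM's actual vocabulary):

```python
# Toy triple store: cloud features as (subject, predicate, object) triples,
# mimicking how an RDF model would describe provider resources.
TRIPLES = [
    ("aws:ec2",  "a",            "cloud:ComputeService"),
    ("aws:ec2",  "cloud:region", "us-east-1"),
    ("azure:vm", "a",            "cloud:ComputeService"),
    ("azure:vm", "cloud:region", "brazil-south"),
]

def match(pattern, triples=TRIPLES):
    """Match an (s, p, o) pattern; None plays the role of a SPARQL variable."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# In spirit: SELECT ?svc WHERE { ?svc a cloud:ComputeService }
compute_services = [s for s, _, _ in match((None, "a", "cloud:ComputeService"))]
```

Because consumers query a provider-neutral vocabulary rather than each platform's API, swapping providers means changing triples, not application code — which is the lock-in-avoidance argument in miniature.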

Relevance:

100.00%

Publisher:

Abstract:

With the advance of the Cloud Computing paradigm, a single service offered by a cloud platform may not be enough to meet all the application requirements. To fulfill such requirements, a composition of services that aggregates services provided by different cloud platforms may be necessary instead of a single service. In order to generate aggregated value for the user, this composition of services provided by several Cloud Computing platforms requires a solution in terms of platform integration, which encompasses the manipulation of a large number of non-interoperable APIs and protocols from different platform vendors. In this scenario, this work presents Cloud Integrator, a middleware platform for composing services provided by different Cloud Computing platforms. Besides providing an environment that facilitates the development and execution of applications that use such services, Cloud Integrator works as a mediator by providing mechanisms for building applications through the composition and selection of semantic Web services that take into account metadata about the services, such as QoS (Quality of Service), prices, etc. Moreover, the proposed middleware platform provides an adaptation mechanism that can be triggered in case of failure or quality degradation of one or more services used by the running application, in order to ensure its quality and availability. In this work, through a case study consisting of an application that uses services provided by different cloud platforms, Cloud Integrator is evaluated in terms of the efficiency of the performed service composition, selection and adaptation processes, as well as the potential of using this middleware in heterogeneous computational cloud scenarios.
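Metadata-driven service selection of the kind described above can be illustrated with a minimal sketch (the metric names, weights and scoring formula are assumptions for illustration, not Cloud Integrator's actual model):

```python
# Illustrative QoS-aware selection: among candidate services implementing
# the same abstract operation, pick the one with the best weighted score.

CANDIDATES = [
    {"name": "storageA", "availability": 0.999, "latency_ms": 120, "price": 0.10},
    {"name": "storageB", "availability": 0.990, "latency_ms": 60,  "price": 0.05},
]

def score(svc, w_avail=0.5, w_lat=0.3, w_price=0.2):
    # Higher availability is better; lower latency and price are better,
    # so they enter the score with a negative sign.
    return (w_avail * svc["availability"]
            - w_lat * svc["latency_ms"] / 1000.0
            - w_price * svc["price"])

def select(candidates):
    return max(candidates, key=score)

best = select(CANDIDATES)
```

An adaptation mechanism like the one the abstract mentions could simply re-run `select` over the remaining candidates when the currently bound service fails or its QoS degrades.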

Relevance:

100.00%

Publisher:

Abstract:

Multi-cloud applications are composed of services offered by multiple cloud platforms, where the user/developer has full knowledge of the use of such platforms. The use of multiple cloud platforms avoids the following problems: (i) vendor lock-in, the dependency of an application on a particular cloud platform, which is harmful in the case of degradation or failure of platform services, or even price increases for service usage; (ii) degradation or failure of the application due to fluctuations in the quality of service (QoS) provided by some cloud platform, or even due to the failure of any service. In a multi-cloud scenario, it is possible to replace a failing service, or one with QoS problems, by an equivalent service from another cloud platform. For an application to adopt the multi-cloud perspective, it is necessary to create mechanisms able to select which cloud services/platforms should be used in accordance with the requirements determined by the programmer/user. In this context, the major challenges in terms of the development of such applications include questions such as: (i) the choice of which underlying services and cloud computing platforms should be used, based on the user-defined requirements in terms of functionality and quality; (ii) the need to continually monitor dynamic information (such as response time, availability and price) related to cloud services, in addition to the wide variety of services; and (iii) the need to adapt the application if QoS violations affect the user-defined requirements. This PhD thesis proposes an approach for the dynamic adaptation of multi-cloud applications, to be applied when a service becomes unavailable or when the requirements set by the user/developer indicate that another available multi-cloud configuration meets them more efficiently. Thus, this work proposes a strategy composed of two phases. The first phase consists of application modeling, exploiting the capacity to represent commonalities and variability proposed in the context of the Software Product Lines (SPL) paradigm. In this phase, an extended feature model is used to specify the cloud service configuration to be used by the application (commonalities) and the different possible providers for each service (variability). Furthermore, the non-functional requirements associated with cloud services are specified by properties in this model, describing dynamic information about these services. The second phase consists of an autonomic process based on the MAPE-K control loop, which is responsible for optimally selecting a multi-cloud configuration that meets the established requirements, and for performing the adaptation. The proposed adaptation strategy is independent of the programming technique used to perform the adaptation. In this work we implement the adaptation strategy using various programming techniques, such as aspect-oriented programming, context-oriented programming, and component- and service-oriented programming. Based on the proposed steps, we sought to assess the following: (i) whether the modeling process and the specification of non-functional requirements can ensure effective monitoring of user satisfaction; (ii) whether the optimal selection process presents significant gains compared to a sequential approach; and (iii) which techniques present the best trade-off between development effort/modularity and performance.
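The MAPE-K-based autonomic process described above can be sketched as a minimal control loop (the providers, metric and threshold are illustrative; the thesis's optimal selection over an extended feature model is far richer than this toy policy):

```python
# Minimal MAPE-K sketch: Monitor -> Analyze -> Plan -> Execute over a
# shared Knowledge base. All names and numbers are hypothetical.

KNOWLEDGE = {
    "requirement":  {"max_response_ms": 200},   # user-defined NFR
    "current":      {"provider": "cloudA"},
    "alternatives": {"cloudA": 250, "cloudB": 150},  # measured response times
}

def monitor(k):
    """Collect the dynamic metric for the currently used provider."""
    return k["alternatives"][k["current"]["provider"]]

def analyze(k, measured):
    """Detect a QoS violation against the stated requirement."""
    return measured > k["requirement"]["max_response_ms"]

def plan(k):
    """Pick the configuration that best meets the requirement."""
    return min(k["alternatives"], key=k["alternatives"].get)

def execute(k, provider):
    """Perform the adaptation (here, just rebind the provider)."""
    k["current"]["provider"] = provider

def mape_k(k):
    measured = monitor(k)
    if analyze(k, measured):
        execute(k, plan(k))
    return k["current"]["provider"]
```

Running the loop once moves the application off the violating provider; a second run finds no violation and leaves the configuration untouched, which is the steady state of the autonomic cycle.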

Relevance:

100.00%

Publisher:

Abstract:

The progress of the Internet and telecommunications has been changing the concepts of Information Technology (IT), especially with regard to outsourcing services, where organizations seek cost-cutting and a better focus on the business. Along with the development of such outsourcing, a new model named Cloud Computing (CC) evolved, which proposes to migrate both data processing and information storage to the Internet. Among the key points of Cloud Computing are cost-cutting, benefits, risks and changes in IT paradigms. Nonetheless, the adoption of this model brings some difficulties to decision-making by IT managers, mainly with regard to which solutions may go to the cloud, and which service providers are more appropriate to the organization's reality. The overall aim of this research is to apply the AHP (Analytic Hierarchy Process) method to decision-making in Cloud Computing. The methodology was exploratory, with a case study applied to a nationwide organization (the Federation of Industries of RN). Data collection was performed through two structured questionnaires, answered electronically by IT technicians and by the company's Board of Directors. The analysis of the data was carried out in a qualitative and comparative way, using Web-Hipre, a software tool for the AHP method. The results obtained confirmed the importance of applying the AHP method to decision-making on the adoption of Cloud Computing, mainly because, at the time the research was carried out, the studied company already showed interest in and need for adopting CC, considering the internal problems with infrastructure and availability of information that the company faced. The organization sought to adopt CC; however, it had doubts regarding the cloud model and which service provider would better meet its real necessities. The application of the AHP thus worked as a guiding tool for the choice of the best alternative, which pointed to the Hybrid Cloud as the ideal choice for starting with Cloud Computing, considering the following aspects: the Infrastructure as a Service (IaaS) layer (processing and storage) should stay partly in the Public Cloud and partly in the Private Cloud; the Platform as a Service (PaaS) layer (software development and testing) showed a preference for the Private Cloud; and the Software as a Service (SaaS) layer was divided, with e-mail going to the Public Cloud and applications to the Private Cloud. The research also identified the important factors for hiring a Cloud Computing provider.
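The core AHP computation — deriving priority weights from a pairwise comparison matrix — can be sketched as follows (the judgments in the matrix are illustrative, not the study's actual data; the column-normalization/row-average method shown is the standard approximation of the principal eigenvector):

```python
# AHP sketch: priority weights from a pairwise comparison matrix.
# Hypothetical criteria: cost, security, availability. Entry M[i][j] says
# how much more important criterion i is than j on Saaty's 1-9 scale.
M = [
    [1.0,   3.0,   5.0],
    [1/3.0, 1.0,   3.0],
    [1/5.0, 1/3.0, 1.0],
]

def ahp_weights(m):
    n = len(m)
    # 1. Normalize each column so it sums to 1.
    col_sums = [sum(row[j] for row in m) for j in range(n)]
    normalized = [[m[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    # 2. Average each row: the approximate principal-eigenvector weights.
    return [sum(normalized[i]) / n for i in range(n)]

weights = ahp_weights(M)  # sums to 1; largest weight = most important criterion
```

In a full AHP study one would also compute the consistency ratio of the judgments and then aggregate alternative scores per criterion with these weights; both steps are omitted here for brevity.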

Relevance:

100.00%

Publisher:

Abstract:

In this work, we propose a solution to the scalability problem found in large-scale collaborative, virtual and mixed reality environments that use the hierarchical client-server model. Basically, we use a hierarchy of servers: when the capacity of a server is reached, a new server is created as a child of the first one, and the system load is distributed between them (parent and child). We propose efficient tools and techniques for solving problems inherent to the client-server model, such as the definition of clusters of users, the distribution and redistribution of users among the servers, and some merging and filtering operations that are necessary to reduce the flow between servers. The new model was tested in simulation, in emulation and in interactive applications that were implemented. The results of these experiments show enhancements over the traditional, previous models, indicating the usability of the proposal in problems of all-to-all communication. This is the case of interactive games and other applications aimed at the Internet (including multi-user environments), and of interactive applications for the Brazilian Digital Television System, to be developed by the research group. Keywords: large scale virtual environments, interactive digital tv, distributed
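The parent/child server-splitting idea can be sketched as follows (the capacity value and the half-split redistribution policy are illustrative assumptions, not the paper's actual algorithms):

```python
# Sketch of the hierarchical server idea: when a server reaches capacity,
# a child server is spawned and the user load is split between parent and
# child before the new user is placed.

class Server:
    CAPACITY = 4  # illustrative; real capacity depends on bandwidth/CPU

    def __init__(self, name):
        self.name = name
        self.users = []
        self.children = []

    def add_user(self, user):
        if len(self.users) < self.CAPACITY:
            self.users.append(user)
            return self
        # Capacity reached: create a child and move half of the load there.
        child = Server("%s.child%d" % (self.name, len(self.children)))
        self.children.append(child)
        half = len(self.users) // 2
        child.users, self.users = self.users[half:], self.users[:half]
        return child.add_user(user)

root = Server("root")
placements = [root.add_user("u%d" % i) for i in range(6)]
```

A real deployment would also need the merging/filtering operations the abstract mentions, so that events crossing the parent-child boundary are forwarded only to the servers whose users need them.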

Relevance:

100.00%

Publisher:

Abstract:

In the process of recovering oil, rock heterogeneity has a huge impact on how fluids move in the field, defining how much oil can be recovered. In order to study this variability, percolation theory, which describes phenomena whose bases are geometry and connectivity, is a very useful model. The result of percolation is three-dimensional data, which has no physical meaning until visualized in the form of images or animations. Although many powerful and sophisticated visualization tools have been developed, they focus on the generation of planar 2D images. In order to interpret the data as they would appear in the real world, virtual reality techniques using stereo images can be used. In this work we propose an interactive and helpful tool, named ZSweepVR, based on virtual reality techniques, that allows a better comprehension of volumetric data generated by the simulation of dynamic percolation. The developed system is able to render images using two different techniques: surface rendering and volume rendering. Surface rendering is accomplished by OpenGL directives, and volume rendering is accomplished by the ZSweep direct volume rendering engine. In the case of volume rendering, we implemented an algorithm to generate stereo images. We also propose enhancements to the original percolation algorithm in order to achieve better performance. We applied the developed tools to a mature field database, obtaining satisfactory results. The use of stereoscopic and volumetric images brought valuable contributions to the interpretation and cluster formation analysis in percolation, which could certainly lead to better decisions about the exploration and recovery process in oil fields.
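The connectivity question at the heart of percolation theory can be illustrated with a minimal site-percolation check (shown in 2D for brevity, whereas the work deals with 3D data; the fixed grid is purely illustrative):

```python
# Minimal site-percolation sketch: given a lattice of occupied (1) and
# empty (0) sites, test whether an occupied cluster connects the top row
# to the bottom row (a "spanning" cluster), via breadth-first search.
from collections import deque

GRID = [  # 1 = occupied site, 0 = empty
    [1, 0, 1, 1],
    [1, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 1, 0, 0],
]

def percolates(grid):
    rows, cols = len(grid), len(grid[0])
    queue = deque((0, c) for c in range(cols) if grid[0][c])
    seen = set(queue)
    while queue:
        r, c = queue.popleft()
        if r == rows - 1:
            return True  # reached the bottom row: spanning cluster found
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False
```

In the 3D reservoir setting, the grid cells would carry occupancy derived from rock properties and the same search runs over six neighbors instead of four; the clusters it finds are exactly the structures ZSweepVR renders for visual analysis.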

Relevance:

100.00%

Publisher:

Abstract:

Currently there is still a high demand for quality control in the manufacturing processes of mechanical parts. This keeps alive the need for inspection of final products, ranging from dimensional analysis to the chemical composition of products. Usually this task may be done through various non-destructive and destructive methods that ensure the integrity of the parts. The results generated by these modern inspection tools end up not being able to geometrically define the real damage and, therefore, cannot be properly displayed on a computer screen. Virtual 3D visualization may help identify damage that would hardly be detected by other methods. One may find commercial software packages that seek to address the stages of design and simulation of mechanical parts in order to predict possible damage, trying to diminish potential undesirable events. However, the challenge of developing software capable of integrating the various activities of design, product inspection, results of non-destructive testing, as well as the simulation of damage, still needs the attention of researchers. This was the motivation to conduct a methodological study for the implementation of a versatile CAD/CAE computational kernel capable of helping programmers develop software applied to the design and simulation of mechanical parts under stress. This research presents interesting results obtained from the use of the developed kernel, showing that it was successfully applied to design case studies including parts with specific geometries, namely mechanical prostheses, heat exchangers, and oil and gas piping. Finally, the conclusions regarding the experience of merging CAD and CAE theories to develop the kernel, so as to produce a tool adaptable to various applications in the metalworking industry, are presented.

Relevance:

100.00%

Publisher:

Abstract:

Stroke (Cerebral Vascular Accident, CVA) is the leading cause of motor disability in adults and the elderly, which is why effective interventions that contribute to motor recovery are still needed. Objective: this study aimed to evaluate the performance of stroke patients in the chronic stage using a virtual reality game. Method: 20 patients (10 with left-side and 10 with right-side lesions), right-handed, mean age 50.6 ± 9.2 years, and 20 healthy right-handed subjects, mean age 50.9 ± 8.8 years, participated. The patients underwent motor (Fugl-Meyer) and muscle tone (Ashworth) assessments. All participants underwent a kinematic evaluation of the drinking-water activity and then trained with the table tennis game on the XBOX 360 Kinect®: 2 sets of 10 attempts of 45 seconds each, with 15 minutes of rest between sets, giving a total session of 30 minutes. After training, the subjects underwent another kinematic evaluation. The patients trained with the right or left hemiparetic upper limb, and the healthy subjects with the right and left upper limbs. Data were analyzed by ANOVA, Student's t test and Pearson correlation. Results: there was a significant difference in the number of hits between the patient and healthy groups, with patients showing lower performance in all attempts (p = 0.008); this performance was related to a higher level of spasticity (r = -0.44, p = 0.04) and greater motor impairment (r = 0.59, p = 0.001). After training, patients with left hemiparesis improved shoulder and elbow angles during the drinking-water activity, approaching the movement pattern of the left arm of healthy subjects (p < 0.05), especially when returning the glass to the table, whereas patients with right hemiparesis did not improve their movement pattern (p > 0.05). Conclusion: the stroke patients improved their performance over the game attempts; however, only patients with left hemiparesis were able to increase shoulder and elbow angles during the execution of the functional activity, responding better to the virtual reality game, which should be taken into consideration in motor rehabilitation.

Relevance:

100.00%

Publisher:

Abstract:

The association of Virtual Reality (VR) with clinical practice has become common in recent years, showing itself to be an additional tool in health care, especially for the elderly. Its use has been related to higher levels of therapeutic adherence and a sensation of well-being. Such emotion-based aspects are often observed with subjective tools of relative validity. This study analyzed the immediate effects of balance training in varied VR contexts on emotional behavior, observed through peaks of maximum expression of EEG waves. Methodology: 40 individuals of both sexes, divided into two groups of 20 young and 20 elderly participants, underwent a 60-minute intervention including balance training under VR. The first 25 minutes comprised the initial evaluation, general orientation and cognitive assessment using the Mini-Mental examination. The next ten minutes were designated for avatar creation and the presentation of a tutorial video. During the following 20 minutes, the individuals from both groups were exposed to exactly the same sequence of games in virtual contexts, while submitted to electroencephalography with the Emotiv EPOC®, focusing on the Adhesion, Frustration and Meditation states. The virtual interface was provided by the Nintendo® game Wii Fit Plus, with the scenarios Balance Bubble (1), Penguin (2), Soccer (3), Tight Rope (4) and Table Tilt (5). Finally, a questionnaire of personal impressions was applied in the remaining 5 minutes. Results: the data collected showed that 64.7% of individuals from both groups presented a higher concentration of adhesion peaks in the Balance Bubble game. Both groups also presented similar behavior regarding the meditation state, with marks close to 40% each in the Table Tilt game. There was divergence in the frustration state: the maximum concentration for the young group occurred in the Soccer game (29.3%), whilst the elderly group showed the highest marks in the Tight Rope game (35.2%). Conclusion: the findings suggest that virtual contexts can favor the induction of adhesion and meditation emotional patterns, regardless of age and for both sexes, whilst frustration seems to be more related to cognitive-motor affordance, and likely to be influenced by age. This information is relevant and contributes to guiding the best choice of games applied in clinical practice, as well as to other studies on this topic.

Relevance:

100.00%

Publisher:

Abstract:

In this paper we present the methodological procedures involved in the mesoscale digital imaging of a block of travertine rock of Quaternary age, originating from the city of Acquasanta, located in the Apennines, Italy. This rocky block, called T-Block, was stored in the courtyard of the Laboratório Experimental de Petróleo "Kelsen Valente" (LabPetro) of the Universidade Estadual de Campinas (UNICAMP), so that scientific studies could be performed on it, mainly by university research groups and research centers working in the Brazilian areas of reservoir characterization and 3D digital imaging. The purpose of this work is the development of a Digital Solid Model, through the use of non-invasive techniques of 3D digital imaging of the internal and external surfaces of the T-Block. For the imaging of the external surfaces, LIDAR (Light Detection and Ranging) technology was used, and the imaging of the internal surfaces was done using Ground Penetrating Radar (GPR); moreover, profiles were obtained with a portable gamma-ray spectrometer. The goal of the 3D digital imaging involved the identification and parameterization of geological surfaces and sedimentary facies that could represent mesoscale depositional heterogeneities, based on the study of a rocky block with dimensions of approximately 1.60 m x 1.60 m x 2.70 m. The data acquired by means of the terrestrial laser scanner provided georeferenced spatial information of the surface of the block (X, Y, Z), intensity values of the returned laser beam, and high-resolution RGB data (3 mm x 3 mm), totaling 28,505,106 acquired points. This information was used as an aid in the interpretation of the radargrams and is ready to be displayed in virtual reality rooms. With the GPR, 15 profiles of 2.3 m and 2 3D grids were obtained, each grid with 24 horizontal sections of 1.3 m and 14 vertical sections of 2.3 m, using both the 900 MHz antenna and the 2600 MHz antenna. Finally, the use of GPR associated with the laser scanner enabled the identification and 3D mapping of 3 different radarfacies, which were correlated with the three sedimentary facies defined at the outset. The 6 gamma-ray profiles showed a low-amplitude variation in radioactivity values. This is likely due to the fact that the profiled sedimentary layers have the same mineralogical composition, being composed of carbonate sediments, with no clay in siliciclastic pelitic layers or other minerals carrying radioactive elements.

Relevance:

100.00%

Publisher:

Abstract:

This work aims to understand how cloud computing is contextualized in the government IT and decision agenda, in the light of the multiple streams model, considering the current status of public IT policies, the dynamics of agenda setting for the area, the interface between the various institutions, and existing initiatives on the use of cloud computing in government. To this end, a qualitative study was conducted through interviews with two groups: one of policy makers and another of IT managers. As analysis techniques, this work made use of content analysis and document analysis, with some results presented as word clouds. Regarding the main results, there is overregulation of the area, usually scattered across various agencies of the federal government, which hinders the performance of managers. A lack of knowledge of standards, government programs, regulations and guidelines was identified. Among these, a lack of understanding of the TI Maior Program stood out, as well as the lack of effectiveness of the National Broadband Plan in the view of the respondents, and the influence of the Internet Landmark as an element that can hamper advances in the use of cloud computing in the Brazilian government. Also noteworthy is the bureaucratization of the acquisition of IT goods and services, which in many cases limits technological advances. Regarding the influence of the actors, it was not possible to identify the presence of a policy entrepreneur, and a lack of political force was noticed. The political stream was affected only by changes within the government. Fragmentation was a major factor in the weakening of agenda formation for the theme. Information security was pointed out by the respondents as the main limitation, coupled with the lack of training of public servants. In terms of benefits, resource economy is highlighted, followed by improved efficiency. Finally, the discussion about cloud computing needs to advance within the public sphere, whereas international experience is already far advanced, framing cloud computing as an element responsible for the improvement of processes and services and for the economy of public resources.

Relevance:

100.00%

Publisher:

Abstract:

This work presents an application of a hybrid Fuzzy-ELECTRE-TOPSIS multicriteria approach to a Cloud Computing service selection problem. The research was exploratory, using a case study based on the actual requirements of professionals in the field of Cloud Computing. The results were obtained by conducting an experiment aligned with the case study, using the distinct profiles of three decision makers; for that, the Fuzzy-TOPSIS and Fuzzy-ELECTRE-TOPSIS methods were used to obtain the results and compare them. The solution includes fuzzy set theory, so that it can support inaccurate or subjective information, thus facilitating the interpretation of the decision maker's judgment in the decision-making process. The results show that both methods were able to rank the alternatives of the problem as expected, but the Fuzzy-ELECTRE-TOPSIS method was able to attenuate the compensatory character existing in the Fuzzy-TOPSIS method, resulting in a different alternative ranking. The attenuation of the compensatory character stood out positively in the ranking of alternatives, because it prioritized more balanced alternatives than the Fuzzy-TOPSIS method, a factor that proved important in the validation of the case study, since, for the composition of a mix of services, balanced alternatives form a more consistent mix when working with restrictions.
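The crisp (non-fuzzy) TOPSIS ranking underlying both methods can be sketched as follows (the decision matrix and weights are illustrative; the paper's fuzzy variants replace crisp values with fuzzy numbers and, in the ELECTRE hybrid, add an outranking step that attenuates compensation between criteria):

```python
# Crisp TOPSIS sketch: rank alternatives by closeness to the ideal solution.
# Rows = alternatives (cloud services), columns = benefit criteria.
MATRIX = [
    [7.0, 9.0, 9.0],   # service A (balanced)
    [8.0, 7.0, 8.0],   # service B
    [9.0, 6.0, 7.0],   # service C (strong on one criterion)
]
WEIGHTS = [0.5, 0.3, 0.2]

def topsis(matrix, weights):
    n = len(weights)
    # 1. Vector-normalize each column, then apply the criterion weights.
    norms = [sum(row[j] ** 2 for row in matrix) ** 0.5 for j in range(n)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n)] for row in matrix]
    # 2. Ideal and anti-ideal points (benefit criteria: max is ideal).
    ideal = [max(row[j] for row in v) for j in range(n)]
    anti = [min(row[j] for row in v) for j in range(n)]
    # 3. Closeness coefficient: d(anti-ideal) / (d(ideal) + d(anti-ideal)).
    def dist(row, point):
        return sum((row[j] - point[j]) ** 2 for j in range(n)) ** 0.5
    return [dist(row, anti) / (dist(row, ideal) + dist(row, anti)) for row in v]

closeness = topsis(MATRIX, WEIGHTS)  # one score in [0, 1] per alternative
```

TOPSIS is fully compensatory: a weakness in one criterion can be offset by strength in another, which is exactly the behavior the ELECTRE step in the hybrid method is there to attenuate.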

Relevance:

100.00%

Publisher:

Abstract:

Increasingly, Information Technology (IT) has been used to sustain business strategies, which has increased its relevance. IT governance is therefore seen as one of the current priorities of organizations. The search for strategic alignment between business and IT is debated as a factor for business success, but even given this importance, the main business managers are usually reluctant to take responsibility for decisions involving IT, mainly due to the complexity of its infrastructure. Cloud computing, in turn, is being seen as an element capable of assisting in the implementation of organizational strategies, because its characteristics enable greater efficiency and agility in IT, and it is considered a new computing paradigm. The main objective is to analyze the relationship between IT governance arrangements and strategic alignment in the presence of public cloud Infrastructure as a Service (IaaS). To this end, an exploratory, descriptive and inferential research was developed, with a quantitative approach to the problem, using the descriptive survey method with a cross section. An electronic questionnaire was applied to ISACA chapter associates in São Paulo and the Distrito Federal, totaling 164 respondents. The instrument was based on the theories of Weill and Ross (2006) for the IT governance arrangement matrix; Henderson and Venkatraman (1993) and Luftman (2000) for the strategic alignment maturity model; and NIST (2011b), ITGI (2007) and CSA (2010) for the maturity of public IaaS in its essential characteristics. Regarding the main results, this research showed that, with public IaaS, decision-making structures have changed, with greater participation of senior executives in all five key IT decisions (IT governance arrangement matrix), including more technical decisions such as IT architecture and infrastructure. With the increased participation of senior executives, a decrease was also observed in the share of IT specialists, characterizing the decision process with the duopoly archetype (shared decision). With regard to strategic alignment, it was observed that it changes with cloud computing: organizations with public IaaS show a statistically significant, greater maturity of strategic alignment when compared to organizations without IaaS. The maturity of public IaaS is at the intermediate level (level 3, "defined process"), with elasticity and measurement reaching level 4, "managed and measurable". It was also possible to infer that, in organizations with public IaaS, there are positive correlations between the key decisions and the maturity of IaaS, especially for principles, architecture and infrastructure, and for the archetypes involving senior executives and IT specialists. As for the correlation between the maturity of public IaaS and the maturity of strategic alignment: the higher the strategic alignment, the greater the maturity of the public IaaS, and vice versa.

Relevance:

100.00%

Publisher:

Abstract:

Stroke is the leading cause of long-term disability among adults, and motor relearning is essential for the recovery of motor sequelae. Various techniques have been proposed to this end, among them Virtual Reality. The aim of the study was to evaluate the electroencephalographic activity of stroke patients during the motor learning of a virtual reality-based game. The study included 10 right-handed patients with chronic stroke: 5 with left brain injury (LP), mean age 48.8 years (± 4.76), and 5 with right brain injury (RP), mean age 52 years (± 10.93). Participants were evaluated for electroencephalographic (EEG) activity and performance while performing 15 repetitions of the darts game on the XBOX Kinect, and also through the NIHSS, MMSE, Fugl-Meyer and modified Ashworth scales. Patients then underwent training with 45 repetitions of the virtual darts game, in 12 sessions over four weeks. After training, patients underwent a reassessment of EEG activity and performance in the virtual darts game (retention). Data were analyzed using repeated-measures ANOVA. According to the results, there were differences between the groups (RP and LP) in the Low Alpha (p = 0.0001), High Alpha (p = 0.0001) and Beta (p = 0.0001) frequencies. There was an increase in alpha activation power and a decrease in beta in the retention phase for the RP group. In the LP group, an increase in alpha activation power was observed, but without a decrease in beta activation. Considering the asymmetry score, the RP group showed increased brain activation in the left hemisphere with practice in the frontal areas, whereas the LP group showed increased activation of the right hemisphere in fronto-central, temporal and parietal areas. As for performance, a decrease in the absolute error in the game was observed for the RP group between assessment and retention (p = 0.015), but this difference was not observed for the LP group (p = 0.135). It follows that patients with right brain injury benefited more from training with the darts game in the virtual environment with respect to the motor learning process, reducing neural effort in ipsilesional areas and reducing errors with practice of the task. In contrast, patients with lesions in the left hemisphere decreased neural effort in contralesional areas important for motor learning and showed no performance improvement with the practice of 12 sessions of the virtual darts game. Thus, VR can be used in the upper-limb rehabilitation of stroke patients, but the laterality of the injury should be considered when programming the motor learning protocol.