960 results for Mixed Reality framework
Abstract:
Dissertation presented to the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, in fulfilment of the requirements for the master's degree in Membrane Engineering
Abstract:
Embedded systems are becoming more common and complex every day, so finding safe, effective, and inexpensive software development processes aimed specifically at this class of systems is more necessary than ever. In contrast to the recent past, technological advances in microprocessors now make it possible to build equipment with more than enough computing power to run several software systems on a single machine. Moreover, some embedded systems have safety requirements: many lives and/or large economic investments depend on their correct operation. Such software systems are designed and implemented according to very strict and demanding software development standards, and in some cases certification of the software may also be required. For these cases, mixed-criticality systems can be a very valuable alternative. In this class of systems, applications with different criticality levels execute on the same computer. However, it is often necessary to certify the entire system at the criticality level of the most critical application, which makes costs soar. Virtualization has been put forward as a very promising technology for containing these costs. It allows a set of virtual machines, or partitions, to execute applications with very high levels of both temporal and spatial isolation, which in turn allows each partition to be certified independently. Developing partitioned mixed-criticality systems requires updating traditional software development models, since these cover neither the new activities nor the new roles that such systems demand. For example, the system integrator must define the partitions, and the application developer must take into account the characteristics of the partition in which the application will execute. The V-model has traditionally been especially relevant in embedded systems development, and it has therefore been adapted to cover scenarios such as the parallel development of applications or the addition of a new partition to an existing system. The objective of this doctoral thesis is to improve the current technology for developing partitioned mixed-criticality systems. To this end, a framework aimed specifically at facilitating and improving the development processes of this class of systems has been designed and implemented. In particular, an algorithm that generates the system partitioning automatically has been created. The proposed framework integrates all the activities needed to develop a partitioned system, including the new roles and activities mentioned above. Its design is based on Model-Driven Engineering, which promotes the use of models as fundamental elements of the development process. The framework thus provides the tools needed to model and partition the system, to validate the results, and to generate the artifacts required to compile, build, and deploy it. Extensibility and integration with validation tools were also key factors in its design.
In particular, new non-functional requirements can be added to the framework, as can the generation of new artifacts such as documentation or support for different programming languages. A key part of the framework is the partitioning algorithm. It has been designed to be independent of the applications' requirements and to allow the system integrator to implement new system requirements. To achieve this independence, partitioning constraints have been defined; the algorithm guarantees that they hold in the partitioned system that results from its execution. The partitioning constraints have been designed with enough expressive power that a small set of them can state most of the common non-functional requirements. Constraints can be defined manually by the system integrator or generated automatically by a tool from the functional and non-functional requirements of an application. The partitioning algorithm takes the system models and the partitioning constraints as inputs and produces a deployment model defining the partitions needed for the system. Each partition in turn defines which applications must execute in it and the resources it needs to execute correctly. The partitioning problem and the partitioning constraints are modeled mathematically as colored graphs, in which a proper vertex coloring represents a correct system partitioning. The algorithm has also been designed so that, if necessary, partitionings alternative to the one initially proposed can be obtained. The framework, including the partitioning algorithm, has been successfully tested in two industrial use cases: the UPMSat-2 satellite and a demonstrator of the control system of a wind turbine. In addition, the algorithm has been validated by running numerous synthetic scenarios, including very complex ones with more than 500 applications.
ABSTRACT
The importance of embedded software is growing, as it is required for a large number of systems. Devising cheap, efficient, and reliable development processes for embedded systems is thus a notable challenge nowadays. Computer processing power is continuously increasing, and as a result it is currently possible to integrate complex systems in a single processor, which was not feasible a few years ago. Embedded systems may have safety-critical requirements, and their failure may result in loss of life or substantial economic loss. The development of these systems requires stringent processes that are usually defined by suitable standards; in some cases their certification is also necessary. This scenario fosters the use of mixed-criticality systems, in which applications of different criticality levels must coexist in a single system. In these cases, it is usually necessary to certify the whole system, including non-critical applications, which is costly. Virtualization emerges as an enabling technology for dealing with this problem. The system is structured as a set of partitions, or virtual machines, that can be executed with temporal and spatial isolation. In this way, applications can be developed and certified independently.
The development of MCPS (Mixed-Criticality Partitioned Systems) requires additional roles and activities that traditional systems do not. The system integrator has to define the system partitions, and application development has to consider the characteristics of the partition to which each application is allocated. In addition, traditional software process models have to be adapted to this scenario. The V-model is commonly used in embedded systems development; it can be adapted to the development of MCPS by enabling the parallel development of applications or the addition of a partition to an existing system. The objective of this PhD is to improve the available technology for MCPS development by providing a framework tailored to the development of this type of system and by defining a flexible and efficient algorithm for automatically generating system partitionings. The goal of the framework is to integrate all the activities required for developing MCPS and to support the different roles involved in this process. The framework is based on MDE (Model-Driven Engineering), which emphasizes the use of models in the development process. It provides basic means for modeling the system, generating system partitionings, validating the system, and generating final artifacts, and it has been designed to facilitate its extension and the integration of external validation tools. In particular, it can be extended with support for additional non-functional requirements and for further final artifacts, such as new programming languages or additional documentation. The framework includes a novel partitioning algorithm, designed to be independent of the types of application requirements and to enable the system integrator to tailor the partitioning to the specific requirements of a system. This independence is achieved by defining partitioning constraints that must be met by the resulting partitioning. The constraints have sufficient expressive capacity to state the most common requirements and can be defined manually by the system integrator or generated automatically from the functional and non-functional requirements of the applications. The partitioning algorithm takes system models and partitioning constraints as its inputs and generates a deployment model composed of a set of partitions. Each partition is in turn composed of a set of allocated applications and assigned resources. The partitioning problem, including applications and constraints, is modeled as a colored graph, and a valid partitioning is a proper vertex coloring. A specially designed algorithm generates this coloring and can provide alternative partitionings if required. The framework, including the partitioning algorithm, has been successfully used in the development of two industrial use cases: the UPMSat-2 satellite and the control system of a wind turbine. The partitioning algorithm has also been validated on a large number of synthetic loads, including complex scenarios with more than 500 applications.
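The coloring formulation can be illustrated with a minimal sketch (invented application names and a simple greedy strategy, not the thesis toolchain): applications are vertices, each constraint stating that two applications must not share a partition is an edge, and a proper vertex coloring maps applications to partitions.

```python
# Minimal sketch: partitioning as proper vertex coloring.
# Vertices are applications; an edge means "must not share a partition".

def partition(apps, separation_constraints):
    """Greedy proper coloring; returns {application: partition_id}."""
    adj = {a: set() for a in apps}
    for a, b in separation_constraints:
        adj[a].add(b)
        adj[b].add(a)
    coloring = {}
    # Color the most-constrained applications first (Welsh-Powell order).
    for app in sorted(apps, key=lambda a: len(adj[a]), reverse=True):
        used = {coloring[n] for n in adj[app] if n in coloring}
        coloring[app] = next(c for c in range(len(apps)) if c not in used)
    return coloring

# Hypothetical applications and isolation constraints.
apps = ["attitude_ctrl", "telemetry", "logging", "payload"]
constraints = [("attitude_ctrl", "logging"),
               ("attitude_ctrl", "payload"),
               ("telemetry", "logging")]
print(partition(apps, constraints))
# -> {'attitude_ctrl': 0, 'logging': 1, 'telemetry': 0, 'payload': 1}
```

The thesis algorithm additionally handles resource assignment and can enumerate alternative colorings; the sketch only shows how separation constraints map to edges and partitions to colors.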
Abstract:
As world communication, technology, and trade become increasingly integrated through globalization, multinational corporations seek employees with global leadership experience and skills. However, the demand for these skills currently outweighs the supply. Given the rarity of globally ready leaders, global competency development should be emphasized in higher education programs. The reality, however, is that university graduate programs are often outdated and focus mostly on cognitive learning. Global leadership competence requires moving beyond the cognitive domain of learning to create socially responsible and culturally connected global leaders. This requires attention to development methods; however, limited research on global leadership development methods has been conducted. A new conceptual model, the global leadership development ecosystem, was introduced in this study to guide the design and evaluation of global leadership development programs. It was based on three theories of learning and was divided into four development methodologies. This study quantitatively tested the model and used it as a framework for an in-depth examination of the design of one International MBA program. The program was first benchmarked, by means of a qualitative best-practices analysis, against the top-ranked IMBA programs in the world. Qualitative data from students, faculty, administrators, and staff were then examined using descriptive and focused data coding. Quantitative analysis, using PASW Statistics software and hierarchical regression, showed the individual effect of each of the four development methods, as well as their combined effect, on student scores on a global leadership assessment. The analysis revealed that each methodology played a distinct and important role in developing different competencies of global leadership. It also confirmed the critical link between self-efficacy and global leadership development.
Abstract:
Background: Motivated patients are more likely to adhere to treatment, resulting in better outcomes. Virtual reality rehabilitation (VRR) is a treatment approach that includes video gaming to enhance motivation and functional training. Aims: The study objectives were (1) to evaluate the feasibility of using a combination of pelvic floor muscle (PFM) exercises and VRR (PFM/VRR) to treat mixed urinary incontinence (MUI) in older women, (2) to evaluate the effectiveness of the PFM/VRR program on MUI symptoms and quality of life (QoL), and (3) to gather quantitative information regarding patient satisfaction with this new combined training program. Methods: Women 65 years and older with at least two weekly episodes of MUI were recruited. Participants were evaluated twice before and once after a 12-week PFM/VRR training program. Feasibility was defined as the participants' rate of participation in and completion of both the PFM/VRR training program and the home exercises. Effectiveness was evaluated through a bladder diary, pad test, and symptom and QoL questionnaire, and participant satisfaction through a questionnaire. Results: Twenty-four women (70.5 ± 3.6 years) participated. The participants complied with the study demands in terms of attendance at the weekly treatment sessions (91%), adherence to home exercise (92%), and completion of the three evaluations (96%). Post-intervention, the frequency and quantity of urine leakage decreased, and patient-reported symptoms and QoL improved significantly. Most participants were very satisfied with the treatment (91%). Conclusion: A combined PFM/VRR program is an acceptable, efficient, and satisfying functional treatment for older women with MUI and should be explored through further RCTs.
Design and Development of a Research Framework for Prototyping Control Tower Augmented Reality Tools
Abstract:
The purpose of the air traffic management system is to ensure the safe and efficient flow of air traffic. Therefore, while augmenting efficiency, throughput, and capacity in airport operations, attention has rightly been placed on doing so in a safe manner. In the control tower, many advances in operational safety have come in the form of visualization tools for tower controllers. However, there is a paradox in developing such systems to increase controllers' situational awareness: by creating additional computer displays, the controller's vision is pulled away from the outside view, and the time spent looking down at the monitors increases. This reduces situational awareness by forcing controllers to mentally and physically switch between the head-down equipment and the outside view. This research is based on the idea that augmented reality may be able to address this issue. The augmented reality concept has become increasingly popular over the past decade and is used proficiently in many fields, such as entertainment, cultural heritage, aviation, and military and defense applications. This know-how could be transferred to air traffic control with relatively low effort and substantial benefits for controllers' situational awareness. Research on this topic is consistent with the SESAR objectives of increasing air traffic controllers' situational awareness and enabling up to 10% additional flights at congested airports while still increasing safety and efficiency. During the Ph.D., a research framework for prototyping augmented reality tools was set up. This framework consists of methodological tools for designing the augmented reality overlays, as well as the hardware and software equipment to test them. Several overlays have been designed and implemented in a simulated tower environment, a virtual reconstruction of the Bologna airport control tower. The positive impact of such tools was preliminarily assessed by means of the proposed methodology.
Abstract:
Often in biomedical research we deal with continuous (clustered) proportion responses ranging between zero and one that quantify the disease status of the cluster units. Interestingly, the study population might also consist of relatively disease-free as well as highly diseased subjects, contributing proportion values in the closed interval [0, 1]. Regression on a variety of parametric densities with support lying in (0, 1), such as beta regression, can assess important covariate effects; however, such densities are inappropriate in the presence of zeros and/or ones. To circumvent this, we introduce a class of general proportion densities and further augment the probabilities of zero and one onto this general proportion density, controlling for the clustering. Our approach is Bayesian and presents a computationally convenient framework amenable to available freeware. Bayesian case-deletion influence diagnostics based on q-divergence measures are automatic from the Markov chain Monte Carlo output. The methodology is illustrated using both simulation studies and an application to a real dataset from a clinical periodontology study.
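As a rough illustration of the augmentation idea (a sketch that assumes a beta kernel on (0, 1); the paper's general proportion density class is broader, and all parameter values here are invented):

```python
# Zero-one-augmented proportion density: point masses at 0 and 1
# mixed with a density on (0, 1) (a beta kernel in this sketch).
import numpy as np
from scipy.stats import beta

def sample(n, p0, p1, a, b, seed=0):
    """Draw n responses with P(Y=0)=p0, P(Y=1)=p1, else Beta(a, b)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    y = beta.rvs(a, b, size=n, random_state=rng)
    y[u < p0] = 0.0
    y[(u >= p0) & (u < p0 + p1)] = 1.0
    return y

def loglik(y, p0, p1, a, b):
    """Log-likelihood of the augmented density over a sample."""
    y = np.asarray(y)
    inner = np.clip(y, 1e-12, 1 - 1e-12)   # keep logpdf finite off (0, 1)
    return np.where(y == 0.0, np.log(p0),
           np.where(y == 1.0, np.log(p1),
                    np.log(1 - p0 - p1) + beta.logpdf(inner, a, b))).sum()

y = sample(500, p0=0.10, p1=0.05, a=2.0, b=5.0)
print(loglik(y, 0.10, 0.05, 2.0, 5.0))
```

In the paper's Bayesian setting such a likelihood would be embedded in an MCMC sampler with cluster-level effects; the sketch covers only the augmented density itself.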
Abstract:
This paper presents a framework for building medical training applications using virtual reality, together with a tool that assists the class instantiation of this framework. The main purpose is to make it easier to build virtual reality applications for medical training, targeting systems that simulate biopsy exams and providing deformation, collision detection, and stereoscopy functionalities. Instantiating the classes allows tools for such purposes to be implemented quickly, reducing errors and keeping costs low thanks to the use of open-source tools. With the instantiation tool, the process of building applications is fast and easy, so computer programmers can obtain an initial application and adapt it to their needs. The tool allows the user to include, delete, and edit parameters in the chosen functionalities, as well as to store these parameters for future use. To verify the efficiency of the framework, some case studies are presented.
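A minimal sketch of the kind of class instantiation the tool automates might look as follows (all class, functionality, and parameter names are hypothetical, since the abstract does not name the framework's classes):

```python
# Hypothetical instantiation of framework classes for a biopsy
# simulator, with editable parameters persisted for future reuse.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class Functionality:
    name: str                                   # e.g. "deformation"
    params: dict = field(default_factory=dict)  # editable parameters

@dataclass
class TrainingApp:
    functionalities: list

    def save(self, path):
        """Store the chosen parameters so they can be reloaded later."""
        with open(path, "w") as f:
            json.dump([asdict(x) for x in self.functionalities], f, indent=2)

app = TrainingApp([
    Functionality("deformation", {"stiffness": 0.8}),
    Functionality("collision_detection", {"method": "bounding_box"}),
    Functionality("stereoscopy", {"eye_separation_mm": 62}),
])
app.save("biopsy_sim_config.json")
```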
Abstract:
This paper addresses the development of a hybrid-mixed finite element formulation for the quasi-static, geometrically exact analysis of three-dimensional framed structures with linear elastic behavior. The formulation is based on a modified principle of stationary total complementary energy, involving, as independent variables, the generalized vectors of stress resultants and displacements and, in addition, a set of Lagrange multipliers defined on the element boundaries. The finite element discretization scheme adopted within the framework of the proposed formulation leads to numerical solutions that strongly satisfy the equilibrium differential equations in the elements as well as the equilibrium boundary conditions. The formulation therefore constitutes a true equilibrium formulation for large displacements and rotations in space. Furthermore, it is objective, as it ensures invariance of the strain measures under superposed rigid-body rotations, and it is not affected by the so-called shear-locking phenomenon. The proposed formulation also produces numerical solutions that are independent of the path of deformation. To validate and assess the accuracy of the proposed formulation, some benchmark problems are analyzed and their solutions compared with those obtained using the standard two-node displacement/rotation-based formulation.
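Schematically, and in assumed generic notation rather than the paper's own, a modified complementary energy principle of this kind seeks a stationary point of a three-field functional:

```latex
% Schematic three-field functional (generic notation, assumed):
% s      = generalized stress resultants
% d      = generalized displacements
% lambda = Lagrange multipliers on the element boundaries
\Pi_c^{*}(\mathbf{s}, \mathbf{d}, \boldsymbol{\lambda})
   = \int_{\Omega} W^{*}(\mathbf{s})\,\mathrm{d}\Omega
   - W_{\mathrm{ext}}(\mathbf{d})
   + \sum_{e} \int_{\partial\Omega_e}
       \boldsymbol{\lambda} \cdot \mathbf{r}_e(\mathbf{s})\,\mathrm{d}\Gamma,
\qquad \delta \Pi_c^{*} = 0,
```

where W* is the complementary strain energy and r_e an inter-element equilibrium residual enforced weakly by the multipliers; the strong element-level equilibrium claimed in the abstract typically comes from choosing stress interpolations that satisfy the differential equilibrium equations exactly within each element.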
Abstract:
The Montreal Process indicators are intended to provide a common framework for assessing and reviewing progress toward sustainable forest management. The potential of a combined geometrical-optical/spectral mixture analysis model was assessed for mapping the Montreal Process age class and successional stage indicators at a regional scale using Landsat Thematic Mapper data. The project location is an area of eucalyptus forest in Emu Creek State Forest, Southeast Queensland, Australia. A quantitative model was used to relate the spectral reflectance of the forest to the illumination geometry, the slope and aspect of the terrain surface, and the size, shape, and density of the tree crowns. Inversion of this model necessitated the use of spectral mixture analysis to recover subpixel information on the fractional extent of ground scene elements (such as sunlit canopy, shaded canopy, sunlit background, and shaded background). Results obtained from a sensitivity analysis allowed improved allocation of resources to maximize the predictive accuracy of the model. Modeled estimates of crown cover projection, canopy size, and tree density showed significant agreement with field and air-photo-interpreted estimates; however, the accuracy of the successional stage classification was limited. The results highlight the potential for future integration of high and moderate spatial resolution imaging sensors for monitoring forest structure and condition.
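The spectral mixture step can be sketched as constrained linear unmixing (the endmember spectra below are invented for illustration; the study derived its endmembers from the imagery and scene model):

```python
# Linear spectral unmixing: a pixel's reflectance is modeled as a
# non-negative, sum-to-one mixture of endmember spectra.
import numpy as np
from scipy.optimize import nnls

# Rows: spectral bands (red, NIR, SWIR); columns: endmembers
# (sunlit canopy, shaded canopy, sunlit background, shaded background).
E = np.array([[0.05, 0.02, 0.20, 0.08],
              [0.45, 0.18, 0.30, 0.12],
              [0.20, 0.07, 0.35, 0.15]])

def unmix(pixel, endmembers, w=100.0):
    """Non-negative fractions with a soft sum-to-one constraint."""
    A = np.vstack([endmembers, w * np.ones(endmembers.shape[1])])
    b = np.append(pixel, w)          # extra row enforces sum(f) ~= 1
    fractions, _ = nnls(A, b)
    return fractions

pixel = 0.6 * E[:, 0] + 0.4 * E[:, 2]   # 60% sunlit canopy, 40% sunlit bg
print(unmix(pixel, E).round(3))         # -> approx. [0.6, 0.0, 0.4, 0.0]
```

The geometric-optical part of the combined model then relates these fractions to crown size, shape, and density under the given illumination geometry.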
Abstract:
A model for binary mixture adsorption accounting for energetic heterogeneity and intermolecular interactions is proposed in this paper. The model is based on statistical thermodynamics and is able to describe the molecular rearrangement of a mixture in a nonuniform adsorption field inside a cavity. The Helmholtz free energy obtained in the framework of this approach has upper and lower limits, which define a permissible range within which all possible solutions are found. One limit corresponds to a completely chaotic distribution of molecules within a cavity, while the other corresponds to a maximally ordered molecular structure. Comparison of the nearly ideal O2/N2-zeolite NaX system at ambient temperature with the O2/N2-zeolite CaX system at 144 K has shown that a decrease in temperature leads to a molecular rearrangement in the cavity volume, which results from the difference in the fluid-solid interactions. The model is able to describe this behavior and therefore allows mixture adsorption to be predicted more accurately than models that assume energetic uniformity of the adsorption volume. Another feature of the model is its ability to correctly describe the negative deviations from Raoult's law exhibited by the O2/N2-CaX system at 144 K. Analysis of the highly nonideal CO2/C2H6-zeolite NaX system has shown that the spatial molecular rearrangement in separate cavities is induced not only by the ion-quadrupole interaction of the CO2 molecule but also by the significant difference in molecular size and the difference between the intermolecular interactions of molecules of the same species and those of different species. This leads to the highly ordered structure of this system.
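The two limits can be read through the textbook decomposition of the Helmholtz free energy (generic notation, not the paper's own): if the configurational energy were held fixed, the entropy term alone would bound the free energy between the ordered and chaotic arrangements:

```latex
% F = U - T S with configurational entropy S = k_B ln(Omega):
% a completely chaotic arrangement maximizes Omega (hence S), while a
% maximally ordered structure minimizes it, so at fixed U
F = U - T S, \qquad S = k_{\mathrm{B}} \ln \Omega,
\qquad U - T S_{\max} \;\le\; F \;\le\; U - T S_{\min}.
```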
Abstract:
Virtual and augmented reality (VR/AR) are increasingly being used in various business scenarios and are important driving forces in technology development. However, the usage of these technologies in the home environment is restricted by several factors, including the lack of low-cost (from the client's point of view), high-performance solutions. In this paper we present a general client/server rendering architecture based on real-time concepts, including support for a wide range of client platforms and applications. The idea of focusing on the real-time behaviour of all components involved in distributed IP-based VR scenarios is new and has not been addressed before, except for simple sub-solutions; this is considered "the most significant problem with the IP environment" [1]. Thus, the most important contribution of this research is its holistic approach, in which networking, end-system, and rendering aspects are integrated into a cost-effective infrastructure for building distributed real-time VR applications on IP-based networks.
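As a toy sketch of this real-time focus (purely illustrative; the paper does not specify its components at this level of detail), a server-side frame loop over IP might enforce a per-frame deadline and drop late frames rather than queue them, keeping end-to-end latency bounded for the VR client:

```python
# Hypothetical deadline-driven frame streaming over UDP: late frames
# are dropped instead of queued, so client latency stays bounded.
import socket
import struct
import time

FRAME_BUDGET_S = 1 / 60                 # 60 Hz target for the client
ADDR = ("127.0.0.1", 9999)              # hypothetical client endpoint

def render(i):
    """Stand-in for the server-side renderer."""
    return bytes(64)                    # a real system would send encoded tiles

def serve_frames(n_frames=120):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for i in range(n_frames):
        deadline = time.monotonic() + FRAME_BUDGET_S
        frame = render(i)
        if time.monotonic() < deadline:             # drop late frames
            sock.sendto(struct.pack("!Id", i, deadline) + frame, ADDR)
        time.sleep(max(0.0, deadline - time.monotonic()))

serve_frames()
```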