936 results for Multiple IaaS Interoperable Management
Abstract:
Digital Rights Management Systems (DRMS) are seen by content providers as the appropriate tool to, on the one hand, fight piracy and, on the other, monetize their assets. Although these systems claim to be very powerful and include multiple protection technologies, there is little understanding of how such systems are currently being implemented and used by content providers. The aim of this paper is twofold. First, it provides a theoretical basis through which we briefly present the seven core protection technologies of a DRMS. Second, it provides empirical evidence that these seven protection technologies are the most commonly used, and it evaluates to what extent they are being used within the music and print industries. It concludes that the three main technologies are encryption, passwords, and payment systems. However, there are industry differences in the number of protection technologies used, the requirements for a DRMS, the required investment, and the perceived success of DRMS in fighting piracy.
Abstract:
Technological advances in hardware, software, and IP networks such as the Internet or peer-to-peer file sharing systems are threatening the music business. The result has been an increasing number of illegal copies available online as well as offline. With the emergence of digital rights management systems (DRMS), the music industry seems to have found the appropriate tool to simultaneously fight piracy and monetize its assets. Although these systems are very powerful and include multiple technologies to prevent piracy, it is as yet unknown to what extent such systems are currently being used by content providers. We provide empirical analyses, results, and conclusions related to digital rights management systems and the protection of digital content in the music industry. The analysis shows that most content providers are protecting their digital content through a variety of technologies such as passwords or encryption. However, each protection technology has its own specific goal, and not all prevent piracy. The majority of the respondents are satisfied with their current protection but want to reinforce it in the future for fear of increasing piracy. Surprisingly, although encryption is seen as the core DRM technology, only a few companies are currently using it. Finally, half of the respondents do not believe in the success of DRMS and their ability to reduce piracy.
Abstract:
Environmental quality monitoring of water resources is challenged with providing the basis for safeguarding the environment against adverse biological effects of anthropogenic chemical contamination from diffuse and point sources. While current regulatory efforts focus on monitoring and assessing a few legacy chemicals, many more anthropogenic chemicals can be detected simultaneously in our aquatic resources. However, exposure to chemical mixtures does not necessarily translate into adverse biological effects, nor does it clearly show whether mitigation measures are needed. Thus, the question of which mixtures are present, and which have associated combined effects, becomes central for defining adequate monitoring and assessment strategies. Here we describe the vision of the international, EU-funded project SOLUTIONS, in which three routes are explored to link the occurrence of chemical mixtures at specific sites to the assessment of adverse biological combination effects. First, multi-residue target and non-target screening techniques covering a broader range of anticipated chemicals co-occurring in the environment are being developed. By improving sensitivity and detection limits for known bioactive compounds of concern, new analytical chemistry data for multiple components can be obtained and used to characterise priority mixtures. This information on chemical occurrence will be used to predict mixture toxicity and to derive combined effect estimates suitable for advancing environmental quality standards. Second, bioanalytical tools will be explored to provide aggregate bioactivity measures integrating all components that produce common (adverse) outcomes, even for mixtures of varying compositions. The ambition is to provide comprehensive arrays of effect-based tools and trait-based field observations that link multiple chemical exposures to various environmental protection goals more directly, and to provide improved in situ observations for impact assessment of mixtures. Third, effect-directed analysis (EDA) will be applied to identify major drivers of mixture toxicity. Refinements of EDA include the use of statistical approaches with monitoring information to guide experimental EDA studies. These three approaches will be explored using case studies in the Danube and Rhine river basins as well as rivers of the Iberian Peninsula. The synthesis of findings will be organised to provide guidance for future solution-oriented environmental monitoring and to explore more systematic ways to assess mixture exposures and combination effects in future water quality monitoring.
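The abstract does not specify which mixture model SOLUTIONS uses to turn chemical occurrence data into combined effect estimates; a common baseline in mixture toxicology is concentration addition, which sums toxic units across components. The sketch below illustrates only that general idea, with made-up concentrations and EC50 values rather than project data.

```python
# A minimal sketch of mixture toxicity estimation via concentration addition
# (summing toxic units). All concentrations and EC50 values below are
# illustrative placeholders, not SOLUTIONS project data.

def toxic_unit_sum(concentrations, ec50s):
    """TU_mix = sum(c_i / EC50_i).

    A toxic unit sum near or above 1 suggests the mixture may exert a combined
    effect comparable to a single component present at its EC50."""
    return sum(c / ec50 for c, ec50 in zip(concentrations, ec50s))

# Hypothetical monitoring result for three co-occurring chemicals (ug/L)
measured = [0.12, 0.35, 0.08]
ec50 = [1.5, 2.0, 0.4]  # single-substance effect concentrations (ug/L)

print(f"toxic unit sum: {toxic_unit_sum(measured, ec50):.3f}")  # 0.455
```

Under this assumption, no single component exceeds its own quality standard, yet the summed toxic units can still approach levels of concern, which is exactly why mixture-aware standards are proposed.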
Abstract:
Geographic health planning analyses, such as service area calculations, are hampered by a lack of patient-specific geographic data. Using the limited patient address information in patient management systems, planners analyze patient origin based on home address. But activity space research, done sparingly in public health and extensively in non-health-related arenas, uses multiple addresses per person when analyzing accessibility. Health care access research has also shown that many non-geographic factors influence choice of provider. Most planning methods, however, overlook non-geographic factors influencing choice of provider, and the limited data mean the analyses can only be related to home address. This research attempted to determine to what extent geography plays a part in patient choice of provider and whether activity space data can be used to calculate service areas for primary care providers. During Spring 2008, a convenience sample of 384 patients of a locally funded Community Health Center in Houston, Texas, completed a survey that asked what factors are important when a patient selects a health care provider. A subset of this group (336) also completed an activity space log that captured location and time data on the places where the patient regularly goes. Survey results indicate that for this patient population, geography plays a role in the choice of health care provider, but it is not the most important reason for choosing one. Other factors, such as the provider offering "free or low cost visits", meeting "all of the patient's health care needs", and seeing "the patient quickly", were all ranked higher than geographic reasons. Analysis of the patient activity locations shows that activity spaces can be used to create service areas for a single primary care provider. Weighted activity-space-based service areas have the potential to include more patients in the service area, since more than one location per patient is used. Further analysis of the logs shows that a reduced set of locations, filtered by time and type, could be used for this methodology, facilitating ongoing data collection for activity-space-based planning efforts.
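The abstract does not define exactly how the weighted activity-space service areas were computed; one plausible reading is that each visited location is weighted (for example, by hours per week spent there) and the provider's service area is assessed by how much weighted activity it captures. The sketch below illustrates that interpretation with hypothetical coordinates, weights, and a simple capture-radius rule; a real analysis would use geocoded activity logs and properly projected distances.

```python
import math

# A minimal sketch of a weighted activity-space view of a provider's service
# area. Coordinates, weights, and the capture-radius rule are hypothetical.

def planar_distance(a, b):
    # crude planar approximation, acceptable for a small urban study area
    return math.hypot(a[0] - b[0], a[1] - b[1])

def weighted_capture(provider, activity_points, radius_km):
    """Fraction of weighted patient activity within radius_km of the provider.
    Each activity point is ((x_km, y_km), weight), e.g. hours/week spent there."""
    total = sum(w for _, w in activity_points)
    inside = sum(w for loc, w in activity_points
                 if planar_distance(loc, provider) <= radius_km)
    return inside / total if total else 0.0

clinic = (0.0, 0.0)
# one patient's weekly activity: home, work, and an errand site (hours/week)
patient_log = [((1.2, 0.5), 60.0), ((4.0, 3.5), 40.0), ((0.8, 1.1), 5.0)]
print(f"weighted activity captured: {weighted_capture(clinic, patient_log, 2.0):.0%}")
```

Because several locations per patient contribute weight, a patient whose home lies outside the radius can still fall partly inside the service area, which is the advantage the abstract attributes to this approach.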
Abstract:
The current literature on bladder cancer symptom management from the perspective of the patients themselves is limited. There is also limited psychosocial research specific to bladder cancer patients, and no previous studies have developed and validated measures for bladder cancer patients' symptom management self-efficacy. The purpose of this study was to investigate non-muscle invasive bladder cancer patients' health-related quality of life through two main study objectives: (1) to describe the treatment-related symptoms, reported effectiveness of symptom-management techniques, and the advice a sample of non-muscle invasive bladder cancer patients would convey to physicians and future patients; and (2) to evaluate Lepore's symptom management self-efficacy measure on a sample of non-muscle invasive bladder cancer patients. Methods. A total of twelve (n=12) non-muscle invasive bladder cancer patients participated in an in-depth interview, and a sample of 46 (n=46) non-muscle invasive bladder cancer patients participated in the symptom-management self-efficacy survey. Results. A total of five symptom categories emerged from the participants' 59 reported symptoms. Four symptom management categories emerged out of the 71 reported techniques. A total of 62% of the participants' treatment-related symptom-management techniques were reported as effective in managing their treatment-related symptoms. Five advice categories emerged from the in-depth interviews: service delivery; medical advice; physician-patient communication; encouragement; and no advice. An exploratory factor analysis indicated a single-factor structure for the total population and a multiple-factor structure for three subgroups: all males, married males, and all married participants. Conclusion. These findings can inform physicians and patients of effective symptom-management techniques, thus improving patients' health-related quality of life. The advice these patients impart can improve service delivery and patient education.
Abstract:
Objective. Weight gain after cancer treatment is associated with breast cancer recurrence. In order to prolong cancer-free survivorship, interventions to manage post-diagnosis weight are sometimes conducted. However, little is known about what factors are associated with weight management behaviors among cancer survivors. In this study, we examined associations of demographic, clinical, and psychosocial variables with weight management behaviors in female breast cancer survivors. We also examined whether knowledge about post-diagnosis weight gain and its risk is associated with weight management behaviors. Methods. 251 female breast cancer survivors completed an internet survey. They reported current performance of three weight management behaviors (general weight management, physical activity, and healthy diet). We also measured attitude, self-efficacy, knowledge, and social support regarding these behaviors, along with demographic and clinical characteristics. Results. Multiple regression models for the weight management behaviors explained 17% of the variance in general weight management, 45% in physical activity, and 34% in healthy dieting. The models had 9 to 14 predictor variables, which differed across models. The variables associated with all three behaviors were social support and self-efficacy; self-efficacy showed the strongest contribution in all models. Knowledge about weight gain and its risks was not associated with any weight management behavior. However, women who obtained this knowledge during cancer treatment were more likely to engage in physical activity and healthy dieting. Conclusions. The findings suggest that an intervention designed to increase survivors' self-efficacy to manage weight, be physically active, and eat healthily will effectively encourage them to engage in these behaviors. Knowledge about risk may motivate women to manage post-diagnosis weight if the information is provided during cancer treatment.
Abstract:
Scholars agree that governance of the public environment entails cooperation between science, policy, and society. This requires the active role of public managers as catalysts of knowledge co-production, addressing participatory arenas in relation to knowledge integration and social learning. This paper deals with the question of whether public managers acknowledge and take on this task. A survey of the Directors of Environmental Offices (EOs) of 64 municipalities was carried out in parallel in two regions: Tuscany (Italy) and the Porto Alegre Metropolitan Region (Brazil). The survey data were analysed using multiple correspondence analysis. The results showed that, regarding policy practices, EOs do not play the role of knowledge co-production catalysts, since they use only technical knowledge when making environmental decisions. We conclude that there is a gap between theory and practice, and identify some factors that may hinder local environmental managers in acting as catalysts of knowledge co-production, raising a further question for future research.
Abstract:
The manipulation and handling of an ever increasing volume of data by current data-intensive applications require novel techniques for efficient data management. Despite recent advances in every aspect of data management (storage, access, querying, analysis, mining), future applications are expected to scale to even higher degrees, not only in terms of the volumes of data handled but also in terms of users and resources, often making use of multiple pre-existing, autonomous, distributed, or heterogeneous resources.
Abstract:
Automatic visual object counting and video surveillance have important applications in home and business environments, such as security and the management of access points. However, in order to obtain satisfactory performance, these technologies need professional and expensive hardware, complex installations and setups, and the supervision of qualified workers. In this paper, an efficient visual detection and tracking framework is proposed for the tasks of object counting and surveillance, one that meets the requirements of consumer electronics: off-the-shelf equipment, easy installation and configuration, and unsupervised working conditions. This is accomplished by a novel Bayesian tracking model that can manage multimodal distributions without explicitly computing the association between tracked objects and detections. In addition, it is robust to erroneous, distorted, and missing detections. The proposed algorithm is compared with a recent work, also focused on consumer electronics, demonstrating its superior performance.
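The abstract names the key property, Bayesian tracking of multimodal distributions with no explicit detection-to-track association, without giving the model itself. A generic way to get that behavior (not necessarily the paper's method) is a particle filter whose particle set represents a multimodal belief and whose update scores every particle against all detections at once. The sketch below illustrates this with illustrative parameters and synthetic detections.

```python
import numpy as np

# Generic particle-filter sketch (not the paper's exact model): the particle
# set can represent a multimodal belief over 2-D positions, and detections
# update all particles jointly, with no hard detection-to-track assignment.

rng = np.random.default_rng(0)

N = 500
particles = rng.uniform(0, 100, size=(N, 2))   # positions in a 100x100 scene
weights = np.full(N, 1.0 / N)

def predict(particles, motion_std=1.0):
    # random-walk motion model; a real tracker would include velocity, etc.
    return particles + rng.normal(0, motion_std, particles.shape)

def update(particles, weights, detections, obs_std=2.0):
    # Each particle is scored against ALL detections (a mixture likelihood),
    # so erroneous or missing detections degrade the belief gracefully
    # instead of breaking an explicit association step.
    like = np.zeros(len(particles))
    for d in detections:
        diff = particles - d
        like += np.exp(-0.5 * np.sum(diff**2, axis=1) / obs_std**2)
    like += 1e-12                      # floor handles frames with no detections
    w = weights * like
    return w / w.sum()

def resample(particles, weights):
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# two objects -> the posterior stays bimodal without any association logic
for frame_dets in [np.array([[20.0, 20.0], [80.0, 75.0]])] * 10:
    particles = predict(particles)
    weights = update(particles, weights, frame_dets)
    particles, weights = resample(particles, weights)

print(np.round(particles.mean(axis=0), 1))  # mean lies between the two modes;
# individual objects would be recovered by clustering the particle set.
```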
Abstract:
Software Configuration Management (SCM) techniques have been considered the entry point to rigorous software engineering, where multiple organizations cooperate in a decentralized mode to save resources, ensure the quality of a diversity of software products, and manage corporate information to get a better return on investment. The relentless trend toward Global Software Development (GSD) and the complexity of implementing a correct SCM solution grow not only because of changing circumstances, but also because of the interactions and forces related to GSD activities. This paper addresses the role SCM plays in the development of commercial products and systems, and introduces an SCM reference model to describe the relationships between the different technical, organizational, and product concerns any software development company should support in the global market.
Abstract:
Over the last decade, Grid computing paved the way for a new level of large scale distributed systems. This infrastructure made it possible to securely and reliably take advantage of widely separated computational resources that are part of several different organizations. Resources can be incorporated into the Grid, building a theoretical virtual supercomputer. In time, cloud computing emerged as a new type of large scale distributed system, inheriting and expanding the expertise and knowledge obtained so far. Some of the main characteristics of Grids naturally evolved into clouds, others were modified and adapted, and others were simply discarded or postponed. Regardless of these technical specifics, Grids and clouds together can be considered one of the most important advances in large scale distributed computing of the past ten years; however, this step in distributed computing has come along with a completely new level of complexity. Grid and cloud management mechanisms play a key role, and a correct analysis and understanding of system behavior are needed. Large scale distributed systems must be able to self-manage, incorporating autonomic features capable of controlling and optimizing all resources and services. Traditional distributed computing management mechanisms analyze each resource separately and adjust specific parameters of each one of them. When trying to adapt the same procedures to Grid and cloud computing, the vast complexity of these systems can make this task extremely complicated. But the complexity of large scale distributed systems could be just a matter of perspective. It could be possible to understand Grid or cloud behavior as a single entity, instead of as a set of resources. This abstraction could provide a different understanding of the system, describing large scale behavior and global events that would probably not be detected by analyzing each resource separately. In this work we define a theoretical framework that combines both ideas, multiple resources and a single entity, to develop large scale distributed systems management techniques aimed at system performance optimization, increased dependability, and Quality of Service (QoS). The resulting synergy could be the key to addressing the most important difficulties of Grid and cloud management.
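To make the "single entity" idea concrete, here is a minimal sketch of what it could look like in practice: per-resource metrics are collapsed into one global state vector, and diagnoses are made at the aggregate level. The metric names and thresholds are my own illustration, not the framework the abstract describes.

```python
from statistics import mean, pstdev

# Minimal sketch of the "single entity" abstraction: collapse per-resource
# metrics into one global state and reason about the system as a whole.
# Metric names and thresholds are illustrative assumptions.

def global_state(resources):
    """resources: list of dicts with per-node 'cpu' (%) and 'queue' readings."""
    cpus = [r["cpu"] for r in resources]
    queues = [r["queue"] for r in resources]
    return {
        "cpu_mean": mean(cpus),
        "cpu_spread": pstdev(cpus),   # imbalance is a *global* property
        "queue_total": sum(queues),
    }

def diagnose(state):
    # Each node may look healthy in isolation while the whole system drifts:
    # e.g. growing imbalance is only visible at the aggregate level.
    if state["cpu_spread"] > 25:
        return "load imbalance: consider rebalancing or migration"
    if state["cpu_mean"] > 85:
        return "global saturation: consider scaling out"
    return "nominal"

nodes = [{"cpu": 95, "queue": 40}, {"cpu": 20, "queue": 2}, {"cpu": 30, "queue": 3}]
print(diagnose(global_state(nodes)))   # -> load imbalance ...
```

Here no per-node threshold on CPU alone would flag the second and third nodes, yet the aggregate view immediately reveals a system-level event, which is the kind of global behavior the abstract argues per-resource analysis misses.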
Abstract:
Nanotechnology represents an area of particular promise and significant opportunity across multiple scientific disciplines. Ongoing nanotechnology research ranges from the characterization of nanoparticles and nanomaterials to the analysis and processing of experimental data seeking correlations between nanoparticles and their functionalities and side effects. Due to their special properties, nanoparticles are suitable for cellular-level diagnostics and therapy, offering numerous applications in medicine, e.g. development of biomedical devices, tissue repair, drug delivery systems and biosensors. In nanomedicine, recent studies are producing large amounts of structural and property data, highlighting the role for computational approaches in information management. While in vitro and in vivo assays are expensive, the cost of computing is falling. Furthermore, improvements in the accuracy of computational methods (e.g. data mining, knowledge discovery, modeling and simulation) have enabled effective tools to automate the extraction, management and storage of these vast data volumes. Since this information is widely distributed, one major issue is how to locate and access data where it resides (which also poses data-sharing limitations). The novel discipline of nanoinformatics addresses the information challenges related to nanotechnology research. In this paper, we summarize the needs and challenges in the field and present an overview of extant initiatives and efforts.
Abstract:
Cloud computing and, more particularly, private IaaS, is seen as a mature technology with a myriad of solutions to choose from. However, this disparity of solutions and products has instilled in potential adopters the fear of vendor and data lock-in. Several competing and incompatible interfaces and management styles have further increased these fears. On top of this, cloud users might want to work with several solutions at the same time, an integration that is difficult to achieve in practice. In this Master's Thesis I propose a management architecture that tries to solve these problems; it provides a generalized control mechanism for several cloud infrastructures, and an interface that can meet the requirements of the users. This management architecture is designed in a modular way, using a generic information model. I have validated the approach through the implementation of the components needed for this architecture to support a sample private IaaS solution: OpenStack.
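The abstract describes the shape of the architecture (a generic information model plus per-backend components) without giving its interfaces. The sketch below shows one plausible form such a driver layer could take: a backend-neutral interface with an OpenStack adapter behind it. The class and method names are my own illustration, not the thesis's actual model, and the adapter is stubbed where calls to a real OpenStack client library would go.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Hedged sketch of a generic IaaS driver layer in the spirit of the thesis:
# one interface, per-backend adapters. Names are illustrative assumptions.

@dataclass
class Instance:
    id: str          # backend-neutral identifier
    name: str
    state: str       # normalized state, e.g. "running" / "stopped"

class CloudDriver(ABC):
    """Backend-neutral control surface for a private IaaS."""

    @abstractmethod
    def list_instances(self) -> list[Instance]: ...

    @abstractmethod
    def start(self, instance_id: str) -> None: ...

class OpenStackDriver(CloudDriver):
    # A real implementation would delegate to an OpenStack client library
    # (e.g. the openstacksdk); stubbed here to keep the sketch self-contained.
    def list_instances(self) -> list[Instance]:
        raw = [{"uuid": "ab12", "name": "web-1", "status": "ACTIVE"}]  # fake data
        return [Instance(r["uuid"], r["name"],
                         "running" if r["status"] == "ACTIVE" else "stopped")
                for r in raw]

    def start(self, instance_id: str) -> None:
        print(f"(stub) would ask OpenStack to start {instance_id}")

def report(driver: CloudDriver) -> None:
    # management code sees only the generic model, never backend specifics
    for inst in driver.list_instances():
        print(f"{inst.name}: {inst.state}")

report(OpenStackDriver())
```

The point of the design is that adding a second IaaS backend means writing one more adapter, while all management logic above the `CloudDriver` interface stays unchanged, which is what mitigates the lock-in fears the abstract raises.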
Abstract:
This paper presents an operational concept for Air Traffic Management, and in particular arrival management, in which aircraft are permitted to operate in a manner consistent with current optimal aircraft operating techniques. The proposed concept allows aircraft to descend in the fuel-efficient path-managed mode, with arrival time not actively controlled. It is demonstrated how the associated uncertainty in the time dimension of the trajectory can be managed through the application of multiple metering points strategically chosen along the trajectory. The proposed concept makes no assumptions about aircraft equipage (e.g. time of arrival control), but aims at handling the mixed-equipage scenarios that will most likely persist far into the next decade and arguably beyond.
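A toy model can illustrate why multiple metering points help: if time uncertainty grows with distance flown on an uncontrolled descent, and each metering point lets the ground system re-synchronize the arrival sequence, the residual uncertainty at the runway shrinks. The growth rate and reduction factor below are illustrative assumptions, not values from the paper.

```python
# Toy sketch of the metering-point idea: arrival-time uncertainty grows along
# an uncontrolled descent, and each metering point partially re-anchors it.
# GROWTH_PER_100NM and METERING_FACTOR are illustrative, not from the paper.

GROWTH_PER_100NM = 20.0   # seconds of 1-sigma time uncertainty per 100 NM flown
METERING_FACTOR = 0.3     # residual fraction after re-sequencing at a fix

def arrival_uncertainty(total_nm, metering_points_nm):
    """Propagate 1-sigma time uncertainty (s) along the route, applying a
    reduction at each metering point (distances measured from top of descent)."""
    sigma, pos = 0.0, 0.0
    for fix in sorted(metering_points_nm) + [total_nm]:
        sigma += (fix - pos) / 100.0 * GROWTH_PER_100NM
        pos = fix
        if pos < total_nm:                 # a metering point, not the runway
            sigma *= METERING_FACTOR
    return sigma

print(f"no metering  : {arrival_uncertainty(200, []):.0f} s")        # ~40 s
print(f"two meterings: {arrival_uncertainty(200, [80, 150]):.0f} s")  # ~16 s
```

Under these assumed numbers, two well-placed metering points cut the arrival-time uncertainty by more than half without ever requiring the aircraft itself to control time of arrival, which is the mixed-equipage point the abstract makes.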