910 results for Long-Polling, GCM, Google Cloud Messaging, RESTful Web services, Push, Notifications
Abstract:
PURPOSE: To evaluate the ratio of soft tissue to hard tissue in bilateral sagittal split setback osteotomy with rigid internal fixation or wire fixation. MATERIALS AND METHODS: A literature search was performed using PubMed, Medline, CINAHL, Web of Science, the Cochrane Library, and Google Scholar Beta. From the original 766 articles identified, 8 were included. Two articles were prospective and 6 retrospective. The follow-up period ranged from 1 year to 12.7 years for rigid internal fixation. Two articles on wire fixation were found to be appropriate for inclusion. RESULTS: The differences between the short- and long-term ratios of the lower lip to the lower incisors for bilateral sagittal split setback osteotomy with rigid internal fixation or wire fixation were quite small. The ratio was 1:1 in the long term and tended to be slightly lower in the short term. No distinction was seen between the short- and long-term ratios for the mentolabial fold; the ratio of the mentolabial fold to point B was 1:1. In the short term, the ratio of the soft tissue pogonion to the pogonion was 1:1, with a tendency to be lower in the long term. The upper lip mainly showed protrusion, but the amount was highly variable. CONCLUSIONS: This systematic review shows that evidence-based conclusions on soft tissue changes are difficult to draw, mostly because of the inherent problems of retrospective studies, inferior study designs, and the lack of standardized outcome measurements. Well-designed prospective studies with sufficient sample sizes that exclude additional surgery (ie, genioplasty or maxillary surgery) are needed.
Abstract:
PURPOSE: The purpose of the present systematic review was to evaluate the soft tissue/hard tissue ratio in bilateral sagittal split advancement osteotomy (BSSO) with rigid internal fixation (RIF) or wire fixation (WF). MATERIALS AND METHODS: The databases PubMed, Medline, CINAHL, Web of Science, Cochrane Library, and Google Scholar Beta were searched. From the original 711 articles identified, 12 were finally included. Only 3 studies were prospective and 9 were retrospective. The postoperative follow-up ranged from 3 months to 12.7 years for RIF and from 6 months to 5 years for WF. RESULTS: The short- and long-term ratios of the lower lip to the lower incisor for BSSO with RIF or WF were 50%. No difference between the short- and long-term ratios for the mentolabial fold to point B and the soft tissue pogonion to pogonion could be observed; both showed a 1:1 ratio. One exception was seen in the long-term results for the soft tissue pogonion to pogonion in BSSO with RIF, which tended to be greater than a 1:1 ratio. The upper lip mainly showed retrusion, but with high variability. CONCLUSIONS: Despite the large number of studies on the short- and long-term effects of mandibular advancement by BSSO, this systematic review shows that evidence-based conclusions on soft tissue changes still cannot be drawn. This is mostly because of the inherent problems of retrospective studies, inferior study designs, and the lack of standardized outcome measures. Well-designed prospective studies with sufficient sample sizes that exclude patients undergoing additional surgery (ie, genioplasty or maxillary surgery) are needed.
Abstract:
The evolution of Next Generation Networks, especially wireless broadband access technologies such as Long Term Evolution (LTE) and Worldwide Interoperability for Microwave Access (WiMAX), has increased the number of "all-IP" networks across the world. The enhanced capabilities of these access networks have spearheaded the cloud computing paradigm, in which end-users expect services to be accessible anytime and anywhere. Service availability also depends on the end-user device, where one of the major constraints is battery lifetime. It is therefore necessary to assess and minimize the energy consumed by end-user devices, given its significance for the perceived quality of cloud computing services. In this paper, an empirical methodology to measure the energy consumption of network interfaces is proposed. Using this methodology, an experimental evaluation of energy consumption in three different cloud computing access scenarios (including WiMAX) was performed. The empirical results show the impact of accurate network-interface state management and application-level network design on energy consumption. Additionally, the outcomes can be used in software-based models to optimize energy consumption and increase the Quality of Experience (QoE) perceived by end-users.
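The paper's methodology is empirical, but the kind of state-based energy accounting it motivates can be pictured with the minimal sketch below; the per-state power figures and the two scenario timelines are illustrative assumptions, not measurements from the paper.

```python
# Minimal sketch of state-based energy accounting for a network interface.
# Power draws and state timelines are illustrative assumptions only.

# Assumed average power draw per interface state, in watts.
POWER_W = {"idle": 0.01, "connected_idle": 0.10, "rx": 0.90, "tx": 1.20}

def energy_joules(timeline):
    """timeline: list of (state, seconds) pairs observed for one scenario."""
    return sum(POWER_W[state] * seconds for state, seconds in timeline)

# Two hypothetical access scenarios: frequent small transfers that keep the
# radio up, versus one batched transfer followed by a long low-power period.
frequent = [("tx", 2), ("connected_idle", 28)] * 10
batched = [("tx", 20), ("connected_idle", 28), ("idle", 252)]

print(f"frequent transfers: {energy_joules(frequent):.1f} J")
print(f"batched transfers:  {energy_joules(batched):.1f} J")
```

Even this toy comparison shows how application-level network design (batching traffic so the interface can return to a low-power state) changes the energy bill for the same amount of data.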
Abstract:
The spectacular advances of computer science applied to geographic information systems (GIS) in recent times have favored the emergence of several technological solutions. These developments have created enormous opportunities for digital management of the territory. Among these solutions, the best known, Google Maps, offers free, dynamic, and comprehensive online mapping. To meet the enormous need for geotagged urban-indicator information, we carried out the project "Integration of an urban observatory on Google Maps." The problem of geolocation in the urban observatory is particularly relevant because there is currently no reliable data (descriptive or geographical) on the urban sector; one must rely on extrapolations from old and obsolete data. This limits the effectiveness of urban management, makes investment programming difficult, and prevents the acquisition of the knowledge needed to make cities engines of growth. A geolocation tool coupled to the data would allow better monitoring of indicators. Our project's objective is to develop an interactive map server (WebMapping) whose map layer is built from the resources of the Google Maps servers and matched with information from the field, to produce maps of a city's urban equipment and infrastructure on the client's request. To achieve this goal, we will carry out a GPS survey of strategic sites in our core sector (health facilities) and, using information from the field, build a PostgreSQL database that links the field information to the Google Maps layer via appropriate KML scripts and PHP. We limit our work to the city of Douala, Cameroon, and to the health-facilities sector, with the possibility of extension to other areas and other cities. Keywords: Geographic Information System (GIS), Thematic Mapping, Web Mapping, data mining, Google API.
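The project itself relies on PHP and KML scripts; purely as an illustration of the kind of glue involved, the sketch below generates a KML layer from records that would, in the real system, come from the PostgreSQL database. The facility names and coordinates are made up.

```python
# Sketch: turning geolocated facility records into a KML layer for Google Maps.
# The records below stand in for rows that would be queried from PostgreSQL
# (e.g. via psycopg2); names and coordinates are illustrative only.
from xml.sax.saxutils import escape

facilities = [
    ("Hopital General de Douala", 9.7680, 4.0620),   # (name, lon, lat) - made up
    ("Centre de Sante Bonaberi", 9.6810, 4.0710),
]

def to_kml(rows):
    placemarks = "".join(
        f"<Placemark><name>{escape(name)}</name>"
        f"<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>"
        for name, lon, lat in rows
    )
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
            f"{placemarks}</Document></kml>")

with open("health_facilities.kml", "w", encoding="utf-8") as f:
    f.write(to_kml(facilities))
```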
Abstract:
Understanding clouds and their role in climate depends in part on our ability to understand how individual cloud particles respond to environmental conditions. With this objective in mind, a quadrupole trap with thermodynamic control has been designed and constructed to create an environment conducive to studying clouds in the laboratory. The quadrupole trap allows a single cloud particle to be suspended for long times. The temperature and water vapor saturation ratio near the trapped particle are controlled by the flow of saturated air through a tube with a discontinuous wall temperature. The design has the unique aspect that the quadrupole electrodes are submerged in heat transfer fluid, completely isolated from the cylindrical levitation volume. This fluid is used in the thermodynamic system to cool the chamber to realistic cloud temperatures, and a heated section of the tube provides the temperature discontinuity. Thus far, charged water droplets ranging from about 30 to 70 microns in diameter have been levitated. In addition, the thermodynamic system has been shown to establish the thermal conditions that will produce supersaturation in subsequent experiments. These advances will help lead to the next generation of ice nucleation experiments, moving from hemispherical droplets on a substrate to a spherical droplet that is not in contact with any surface.
Abstract:
The long-awaited verdict by the German Federal Court of Justice on Google image search has drawn much attention to the problem of copyright infringement by search engines on the Internet. In past years the question has arisen whether the listing itself in a search engine like Google can constitute an infringement of copyright. The decision is widely seen as one of the most important of recent years. With a significant amount of effort, the German Federal Court tried to balance the interests of the right holders against those of the digital reality.
Abstract:
The development of the Internet has made it possible to transfer data 'around the globe at the click of a mouse'. Especially fresh business models such as cloud computing, the newest driver to illustrate the speed and breadth of the online environment, allow this data to be processed across national borders on a routine basis. A number of factors cause the Internet to blur the lines between public and private space: Firstly, globalization and the outsourcing of economic actors entail an ever-growing exchange of personal data. Secondly, the security pressure in the name of the legitimate fight against terrorism opens access to a significant amount of data for an increasing number of public authorities. And finally, the tools of the digital society accompany everyone at each stage of life, leaving permanent individual and borderless traces in both space and time. Therefore, calls from both the public and private sectors for an international legal framework for privacy and data protection have become louder. Companies such as Google and Facebook have also come under continuous pressure from governments and citizens to reform the use of data. Thus, Google was not alone in calling for the creation of 'global privacy standards'. Efforts are underway to review established privacy foundation documents. There are similar efforts to look at standards in global approaches to privacy and data protection. The last remarkable steps were the Montreux Declaration, in which the privacy commissioners appealed to the United Nations 'to prepare a binding legal instrument which clearly sets out in detail the rights to data protection and privacy as enforceable human rights'. This appeal was repeated in 2008 at the 30th international conference held in Strasbourg, at the 31st conference in 2009 in Madrid, and in 2010 at the 32nd conference in Jerusalem. In a globalized world, free data flow has become an everyday need. Thus, the aim of global harmonization should be that it makes no difference for data users or data subjects whether data processing takes place in one or in several countries. Concern has been expressed that data users might seek to avoid privacy controls by moving their operations to countries which have lower standards in their privacy laws or no such laws at all. To control that risk, some countries have implemented special controls in their domestic law. Again, such controls may interfere with the need for free international data flow. A formula has to be found to make sure that privacy at the international level does not prejudice this principle.
Abstract:
Virtualisation of cellular networks can be seen as a way to significantly reduce the complexity of the processes required nowadays to provide reliable cellular networks. The Future Communication Architecture for Mobile Cloud Services: Mobile Cloud Networking (MCN) is an EU FP7 Large-scale Integrating Project (IP) funded by the European Commission that focuses on cloud computing concepts to achieve virtualisation of cellular networks. It aims at the development of a fully cloud-based mobile communication and application platform; more specifically, it aims to investigate, implement and evaluate the technological foundations for a mobile communication system based on Long Term Evolution (LTE), with Mobile Network plus Decentralized Computing plus Smart Storage offered as one atomic service: On-Demand, Elastic and Pay-As-You-Go. This paper provides a brief overview of the MCN project and discusses the challenges that need to be solved.
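The "one atomic service: On-Demand, Elastic and Pay-As-You-Go" idea can be pictured with the minimal service-lifecycle sketch below; the class, method names, and billing rule are invented for illustration and are not the MCN interfaces.

```python
# Generic sketch of an on-demand, elastic, pay-as-you-go service lifecycle.
# All names and the (deliberately simplistic) billing rule are assumptions;
# real metering would integrate instance usage over time.
import time

class OnDemandService:
    def __init__(self, name, price_per_instance_hour):
        self.name = name
        self.price = price_per_instance_hour
        self.instances = 0
        self.started = None

    def deploy(self, initial_instances=1):     # on-demand instantiation
        self.instances = initial_instances
        self.started = time.time()

    def scale(self, instances):                # elasticity: grow or shrink on load
        self.instances = max(0, instances)

    def dispose(self):                         # pay-as-you-go: bill on teardown
        hours = (time.time() - self.started) / 3600
        cost = hours * self.instances * self.price
        self.instances = 0
        return cost

svc = OnDemandService("mobile-core", price_per_instance_hour=0.50)
svc.deploy(initial_instances=2)
svc.scale(4)                                   # react to increased demand
print(f"cost so far: {svc.dispose():.4f} EUR")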
Abstract:
OBJECTIVES: To determine the characteristics of popular breast cancer related websites and whether more popular sites are of higher quality. DESIGN: The search engine Google was used to generate a list of websites about breast cancer. Google ranks search results by measures of link popularity, that is, the number of links to a site from other sites. The top 200 sites returned in response to the query "breast cancer" were divided into "more popular" and "less popular" subgroups by three different measures of link popularity: Google rank and the number of links reported independently by Google and by AltaVista (another search engine). MAIN OUTCOME MEASURES: Type and quality of content. RESULTS: More popular sites according to Google rank were more likely than less popular ones to contain information on ongoing clinical trials (27% v 12%, P=0.01), results of trials (12% v 3%, P=0.02), and opportunities for psychosocial adjustment (48% v 23%, P<0.01). These characteristics were also associated with a higher number of links as reported by Google and AltaVista. More popular sites by number of linking sites were also more likely to provide updates on other breast cancer research, information on legislation and advocacy, and a message board service. Measures of quality such as display of authorship, attribution or references, currency of information, and disclosure did not differ between groups. CONCLUSIONS: Popularity of websites is associated with type rather than quality of content. Sites that include the types of content correlated with popularity may best meet the public's desire for information about breast cancer.
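The reported figures (for example 27% v 12%, P=0.01) are comparisons of proportions between the more popular and less popular subgroups. The abstract does not state which test the authors used; the sketch below shows one plausible reconstruction with a chi-squared test, assuming the top 200 sites were split into two groups of 100.

```python
# Sketch of a popularity-vs-content comparison of the kind reported above.
# Assumes an even 100/100 split of the top 200 sites; the actual counts and
# the test used by the authors are not given in the abstract.
from scipy.stats import chi2_contingency

more_popular_with_trials = 27   # 27% of 100 "more popular" sites
less_popular_with_trials = 12   # 12% of 100 "less popular" sites

table = [
    [more_popular_with_trials, 100 - more_popular_with_trials],
    [less_popular_with_trials, 100 - less_popular_with_trials],
]
chi2, p, dof, expected = chi2_contingency(table)  # Yates-corrected 2x2 chi-squared
print(f"chi2={chi2:.2f}, p={p:.3f}")              # p comes out on the order of 0.01
```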
Abstract:
BACKGROUND: The majority of radiological reports lack a standard structure. Even within a specialized area of radiology, each report has its individual structure with regard to details and order, often containing too much non-relevant information that the referring physician is not interested in. For gathering relevant clinical key parameters in an efficient way, or to support long-term therapy monitoring, structured reporting might be advantageous. OBJECTIVE: Despite new technologies in medical information systems, medical reporting is still not dynamic. To improve the quality of communication in radiology reports, a new structured reporting system was developed for abdominal aortic aneurysms (AAA), intended to enhance professional communication by providing the pertinent clinical information in a predefined standard. METHODS: An actual-state analysis was performed within the departments of radiology and vascular surgery by developing a Technology Acceptance Model. The SWOT (strengths, weaknesses, opportunities, and threats) analysis focused on optimization of radiology reporting for patients with AAA. Clinical parameters were defined by interviewing experienced clinicians in radiology and vascular surgery. For evaluation, a focus group (4 radiologists) reviewed the reports of 16 patients. The usability and reliability of the method were validated in a real-world test environment in the field of radiology. RESULTS: A Web-based application for radiological structured reporting (SR) was successfully standardized for AAA. Its organization comprises three main categories: characteristics of the pathology and adjacent anatomy, measurements, and additional findings. Different graphical widgets (eg, drop-down menus) in each category facilitate predefined data entry. Measurement parameters shown in a diagram can be defined for clinical monitoring and adduced for quick adjudications. Figures for optional use to guide and standardize the reporting are embedded. Analysis of variance showed a decreased average time required to obtain a radiological report with SR compared with free-text reporting (P=.0001). Questionnaire responses confirmed a high acceptance rate among users. CONCLUSIONS: The new SR system may support efficient radiological reporting for initial diagnosis and follow-up of AAA. Perceived advantages of our SR platform are its ease of use, which may lead to more accurate decision support. The new system is open to communicate not only with clinical partners but also with Radiology Information and Hospital Information Systems.
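The three report categories described above can be pictured as a small data model; the sketch below is an illustrative rendering with assumed field names, not the schema of the authors' Web-based SR application.

```python
# Illustrative data model for a structured AAA report with the three main
# categories named in the abstract; field names are assumptions, not the
# authors' actual schema.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PathologyAndAnatomy:
    aneurysm_location: Optional[str] = None      # e.g. a drop-down value such as "infrarenal"
    thrombus_present: Optional[bool] = None

@dataclass
class Measurements:
    max_diameter_mm: Optional[float] = None      # tracked over time for follow-up monitoring
    length_mm: Optional[float] = None

@dataclass
class StructuredAAAReport:
    pathology: PathologyAndAnatomy = field(default_factory=PathologyAndAnatomy)
    measurements: Measurements = field(default_factory=Measurements)
    additional_findings: List[str] = field(default_factory=list)

report = StructuredAAAReport()
report.pathology.aneurysm_location = "infrarenal"
report.measurements.max_diameter_mm = 58.0
report.additional_findings.append("mural calcification")
```

Constraining each field to predefined values is what makes such reports machine-readable and comparable across follow-up examinations.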
Abstract:
Content Distribution Networks are mandatory components of modern web architectures, with plenty of vendors offering their services. Despite the maturity of this area, new paradigms and architecture models are still being developed. Cloud Computing, on the other hand, is a more recent concept which has expanded extremely quickly, with new services being regularly added to cloud management software suites such as OpenStack. The main contribution of this paper is the architecture and the development of an open source CDN that can be provisioned in an on-demand, pay-as-you-go model, thereby enabling the CDN as a Service paradigm. We describe our experience with the integration of the CDNaaS framework in a cloud environment, as a service for enterprise users. We emphasize the flexibility and elasticity of such a model, with each CDN instance being delivered on demand and associated with personalized caching policies as well as an optimized choice of Points of Presence based on the exact requirements of an enterprise customer. Our development is based on the framework developed in the Mobile Cloud Networking EU FP7 project, which offers its enterprise users a common framework to instantiate and control services. CDNaaS is one of the core support components in this project, as it is tasked to deliver different types of multimedia content to several thousand geographically distributed users. It integrates seamlessly in the MCN service life-cycle and as such enjoys all the benefits of a common design environment, allowing for improved interoperability with the rest of the services within the MCN ecosystem.
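The shape of an on-demand, pay-as-you-go CDN provisioning request with per-customer caching policies and Points of Presence might look roughly like the sketch below; the field names and values are assumptions for illustration and do not come from the actual CDNaaS or MCN interfaces.

```python
# Sketch of an on-demand CDN-instance provisioning request. The field names,
# policy values, and tenant id are illustrative assumptions only.
import json

provisioning_request = {
    "tenant": "enterprise-customer-42",               # hypothetical tenant id
    "billing": "pay-as-you-go",
    "caching_policy": {"default_ttl_s": 3600, "cache_key": ["path", "device_class"]},
    "points_of_presence": ["eu-west", "eu-central"],  # chosen from the customer's requirements
    "content_types": ["video/mp4", "image/jpeg"],
}

# In a real deployment this payload would be submitted to the CDNaaS
# orchestrator; here it is only serialized to show the shape of the request.
print(json.dumps(provisioning_request, indent=2))
```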
Abstract:
One of the current challenges in evolutionary ecology is understanding the long-term persistence of contemporary-evolving predator–prey interactions across space and time. To address this, we developed an extension of a multi-locus, multi-trait eco-evolutionary individual-based model that incorporates several interacting species in explicit landscapes. We simulated eco-evolutionary dynamics of multiple species food webs with different degrees of connectance across soil-moisture islands. A broad set of parameter combinations led to the local extinction of species, but some species persisted, and this was associated with (1) high connectance and omnivory and (2) ongoing evolution, due to multi-trait genetic variability of the embedded species. Furthermore, persistence was highest at intermediate island distances, likely because of a balance between predation-induced extinction (strongest at short island distances) and the coupling of island diversity by top predators, which by travelling among islands exert global top-down control of biodiversity. In the simulations with high genetic variation, we also found widespread trait evolutionary changes indicative of eco-evolutionary dynamics. We discuss how the ever-increasing computing power and high-resolution data availability will soon allow researchers to start bridging the in vivo–in silico gap.
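A full multi-locus, multi-trait food-web simulation is beyond the scope of a summary, but the individual-based flavour of such models can be pictured with the toy predator-prey sketch below; all rules, traits, and parameter values are invented for illustration and are not those of the model described above.

```python
# Toy individual-based predator-prey step on one "island"; every rule and
# parameter is an invented illustration, far simpler than the paper's model.
import random

random.seed(1)

class Individual:
    def __init__(self, species, trait):
        self.species = species   # "prey" or "predator"
        self.trait = trait       # one heritable defence trait (the model uses many loci/traits)

def step(island):
    prey = [i for i in island if i.species == "prey"]
    predators = [i for i in island if i.species == "predator"]
    next_gen = []
    for p in prey:
        # predation risk falls as the evolving defence trait increases
        risk = 0.4 / (1.0 + p.trait) * min(1.0, len(predators) / 5.0)
        if random.random() > risk:
            next_gen.append(p)
            if random.random() < 0.4:  # reproduction with a small mutation
                child_trait = max(0.0, p.trait + random.gauss(0.0, 0.05))
                next_gen.append(Individual("prey", child_trait))
    next_gen.extend(predators if prey else [])  # predators persist only while prey exist
    return next_gen

island = [Individual("prey", 0.2) for _ in range(50)] + [Individual("predator", 0.0) for _ in range(5)]
for _ in range(20):
    island = step(island)
print("prey remaining after 20 steps:", sum(i.species == "prey" for i in island))
```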
Abstract:
We present observations of total cloud cover and cloud type classification results from a sky camera network comprising four stations in Switzerland. In a comprehensive intercomparison study, records of total cloud cover from the sky camera, long-wave radiation observations, Meteosat, ceilometer, and visual observations were compared. Total cloud cover from the sky camera was in 65–85% of cases within ±1 okta with respect to the other methods. The sky camera overestimates cloudiness with respect to the other automatic techniques on average by up to 1.1 ± 2.8 oktas but underestimates it by 0.8 ± 1.9 oktas compared to the human observer. However, the bias depends on the cloudiness and therefore needs to be considered when records from various observational techniques are being homogenized. Cloud type classification was conducted using the k-Nearest Neighbor classifier in combination with a set of color and textural features. In addition, a radiative feature was introduced which improved the discrimination by up to 10%. The performance of the algorithm mainly depends on the atmospheric conditions, site-specific characteristics, the randomness of the selected images, and possible visual misclassifications: The mean success rate was 80–90% when the image only contained a single cloud class but dropped to 50–70% if the test images were completely randomly selected and multiple cloud classes occurred in the images.
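The classification pipeline described above, colour and textural features plus a radiative feature fed to a k-Nearest-Neighbor classifier, can be sketched with scikit-learn as below; the feature table is synthetic, and only the overall structure mirrors the described approach.

```python
# Sketch of k-NN cloud-type classification from image features, in the spirit
# of the pipeline described above. The feature matrix and labels are synthetic;
# the real system derives colour/texture features from sky-camera images and
# adds a long-wave radiative feature.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.normal(0.5, 0.2, n),   # mean red/blue ratio (colour feature)
    rng.normal(0.3, 0.1, n),   # texture contrast (e.g. from a co-occurrence matrix)
    rng.normal(320, 30, n),    # long-wave radiation, W m^-2 (the added radiative feature)
])
y = rng.integers(0, 4, n)      # 4 placeholder cloud classes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
clf.fit(X_train, y_train)
print(f"success rate on held-out images: {clf.score(X_test, y_test):.2f}")
```

With real, discriminative features the held-out success rate would be meaningful; with the random data above it only demonstrates the mechanics of the pipeline.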
Abstract:
Objective. The purpose of this study was to determine the meaning of personal transformation for twenty women in long-term, stable recovery from alcohol abuse; to identify themes or patterns of this recovery; and to determine the extent to which they experienced the phenomenon of perspective transformation. Method. Volunteers were recruited by advertisement, word of mouth, and through a closed-circuit web-based broadcast. A descriptive, exploratory study, which analyzed perspective transformation from the standpoint of five action phases, was conducted. Data were collected using in-depth personal interviews and questionnaires. Subjects' responses were analyzed by qualitative methods. Triangulation was performed on the grouped data, comparing the interviews to the data produced by the questionnaires. Quantitative analysis of questionnaire items explored behavioral changes experienced before and after alcoholism recovery. Results. Five phases of recovery were identified. Phase I, which involved recognition that alcohol was a problem and that change might be possible, took several years, during which 3 major transitions occurred: (1) from often being alienated to having relationships with family and friends; (2) from daily upheavals to eventually a more peaceful existence; and (3) from denial that alcohol was a problem to acceptance and willingness to change. Recovery was often seen in a spiritual context, which also required ongoing support. During Phase II there was an assessment of self, others, and the environment, which revealed a pattern of intense unhappiness and negative feelings toward self and others with a disregard for cultural norms. Phase III revealed a period of desperation as life became unmanageable, but gradual willingness to accept support and guidance and a desire to improve self and help others. This led to improvement of existing role performance and the willingness to try out new roles. In Phase IV there was a pattern of personal growth which included: the establishment of boundaries, setting priorities, a willingness to place others' needs above their own, acceptance of responsibility, and learning to cope without alcohol, often with the use of tools learned in AA. During Phase V, many experienced knowledge of frailties but growing respect for self and others, with an improved ability to function in giving relationships. Implications for Prevention and Recovery: Early education concerning addiction and recovery may play a crucial role in prevention and early recovery, as it did for children of women in this study. Recovery requires persistent effort and organized support.
Abstract:
The State of Connecticut owns a Light Detection and Ranging (LIDAR) data set that was collected in 2000 as part of the State’s periodic aerial reconnaissance missions. Although collected eight years ago, these data are only now becoming ready to be made available to the public. They constitute a massive "point cloud": a long list of east-north-up triplets in the State Plane Coordinate System Zone 0600 (SPCS83 0600), with orthometric heights (NAVD 88) in US Survey feet. Unfortunately, point clouds have no structure or organization, and consequently they are not as useful as Triangulated Irregular Networks (TINs), digital elevation models (DEMs), contour maps, slope and aspect layers, and curvature layers, among others. The goal of this project was to provide the computational infrastructure to create a first cut of these products and to serve them to the public via the World Wide Web. The products are available at http://clear.uconn.edu/data/ct_lidar/index.htm.
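As a rough illustration of why gridded products are more useful than the raw point cloud, the sketch below bins east-north-up triplets into DEM cells and averages the elevations per cell; the points and cell size are made up, and real processing of the Connecticut data set involves considerably more work (datum handling, outlier filtering, interpolation of empty cells, TIN construction).

```python
# Sketch: gridding an unstructured point cloud (east, north, up triplets) into
# a simple DEM by averaging elevations per cell. Points and cell size are
# made up; this is not the project's actual processing chain.
import numpy as np

points = np.array([          # east, north, up (e.g. US Survey feet)
    [1000.2, 2000.1, 50.3],
    [1001.7, 2000.9, 51.0],
    [1010.4, 2010.5, 55.2],
    [1011.0, 2011.2, 54.8],
])
cell = 10.0                  # DEM cell size in the same horizontal units

east0, north0 = points[:, 0].min(), points[:, 1].min()
cols = ((points[:, 0] - east0) // cell).astype(int)
rows = ((points[:, 1] - north0) // cell).astype(int)

dem_sum = np.zeros((rows.max() + 1, cols.max() + 1))
dem_cnt = np.zeros_like(dem_sum)
np.add.at(dem_sum, (rows, cols), points[:, 2])   # accumulate elevations per cell
np.add.at(dem_cnt, (rows, cols), 1)              # count points per cell

with np.errstate(invalid="ignore"):
    dem = dem_sum / dem_cnt                      # NaN where a cell received no points
print(dem)
```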