957 results for end user programming


Relevance:

80.00%

Publisher:

Abstract:

The goal of this thesis is to design an enabling architecture for smart health scenarios, focusing on the end-user side (rather than the server/cloud side), i.e. experimenting in the field of wearable devices with reference to the fitness/Apple Watch pairing, a convenient choice because it is present at FitStadium, the company that provides our motivations, requirements and goals. The first chapter analyses the solutions currently offered by the market for building fitness-related services, focusing in particular on the proposed architectures and on how they can coexist with the FitStadium ecosystem. The second chapter is devoted to a closer look at the Apple technologies that will actually be used to implement the case study, once again paying attention to the architectural possibilities these technologies offer. The third chapter covers the case study in its entirety, analysing in particular its state before and after the thesis: it describes the implemented application together with the presentation of an architecture that also enables smart health scenarios. Finally, Chapter 4 describes more precisely the concept of smart health and the path that led to its definition.

Relevance:

80.00%

Publisher:

Abstract:

The evolution of Next Generation Networks, especially wireless broadband access technologies such as Long Term Evolution (LTE) and Worldwide Interoperability for Microwave Access (WiMAX), has increased the number of "all-IP" networks across the world. The enhanced capabilities of these access networks have spearheaded the cloud computing paradigm, where end-users expect services to be accessible anytime and anywhere. Service availability also depends on the end-user device, where one of the major constraints is battery lifetime. Therefore, it is necessary to assess and minimize the energy consumed by end-user devices, given its significance for the user-perceived quality of cloud computing services. In this paper, an empirical methodology to measure the energy consumption of network interfaces is proposed. Employing this methodology, an experimental evaluation of energy consumption in three different cloud computing access scenarios (including WiMAX) was performed. The empirical results show the impact of accurate network interface state management and application-level network design on energy consumption. Additionally, the outcomes can be used in further software-based models to optimize energy consumption and increase the Quality of Experience (QoE) perceived by end-users.
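
A minimal sketch of the calculation underlying such a methodology: the energy consumed by a network interface is the integral of instantaneous power, P(t) = V(t) * I(t), over the measurement window. The sampling rate, variable names, and rectangle-rule integration below are illustrative assumptions, not the paper's actual measurement setup.

    # Sketch: estimate interface energy from synchronized voltage/current
    # samples; the 1 kHz rate and rectangle-rule integration are assumptions.
    import numpy as np

    def energy_joules(voltage_v, current_a, sample_rate_hz=1000.0):
        """Sum instantaneous power P = V * I over the sampling window."""
        power_w = np.asarray(voltage_v) * np.asarray(current_a)
        return float(np.sum(power_w) / sample_rate_hz)  # joules

    # One second of samples at a nominal 3.7 V supply drawing ~250 mA:
    v = np.full(1000, 3.7)
    i = np.full(1000, 0.25)
    print(energy_joules(v, i))  # ~0.925 J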

Relevance:

80.00%

Publisher:

Abstract:

The increasing usage of wireless networks creates new challenges for wireless access providers: on the one hand, providers want to satisfy user demands; on the other, they try to reduce operational costs by decreasing energy consumption. In this paper, we evaluate the trade-off between energy efficiency and quality of experience for a wireless mesh testbed. The results show that with intelligent service control, resources can be better utilized and energy can be saved by reducing the number of active network components. However, care has to be taken, because channel bandwidth varies in wireless networks. In the second part of the paper, we analyze the trade-off between energy efficiency and quality of experience at the end user. The results reveal that a provider's service control measures do not only reduce the operational costs of the network but also bring a second benefit: they help maximize the battery lifetime of the end-user device.

Relevance:

80.00%

Publisher:

Abstract:

Interactive TV technology has been addressed in many previous works, but research on interactive content broadcasting and how to support the production process is sparse. In this article, the interactive broadcasting process is broadly defined to include studio technology and digital TV applications at consumer set-top boxes. In particular, augmented reality studio technology employs smart projectors as light sources and blends real scenes with interactive computer graphics that are controlled at end-user terminals. Moreover, TV-producer-friendly multimedia authoring tools empower the development of novel TV formats. Finally, support for user-contributed content has the potential to revolutionize the hierarchical TV production process by introducing the viewer as part of the content delivery chain.

Relevance:

80.00%

Publisher:

Abstract:

Television and movie images have been altered ever since it became technically possible. Nowadays, embedding advertisements or incorporating text and graphics into TV scenes is common practice, but these elements cannot be considered an integrated part of the scene. The introduction of new services for interactive augmented television is discussed in this paper. We analyse the main aspects related to the whole chain of augmented reality production. Interactivity is one of the most important added values of digital television: this paper aims to break the model in which all TV viewers receive the same final image. Thus, we introduce and discuss the new concept of interactive augmented television, i.e. real-time composition of video and computer graphics (e.g. a real scene and freely selectable images or spatially rendered objects) edited and customized by the end user within the context of the user's set-top box and TV receiver.
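
The composition step this abstract refers to can be pictured as a per-pixel alpha blend of the broadcast frame with a locally rendered graphics layer. A minimal sketch, assuming uint8 RGB frames and a straight (non-premultiplied) alpha channel; this is an illustration, not the paper's pipeline.

    # Sketch: blend an RGBA graphics overlay onto a video frame, as a
    # set-top box might when customizing the final image per viewer.
    import numpy as np

    def composite(frame_rgb, overlay_rgba):
        """Alpha-blend an RGBA overlay (uint8) onto an RGB frame (uint8)."""
        alpha = overlay_rgba[..., 3:4].astype(np.float32) / 255.0
        gfx = overlay_rgba[..., :3].astype(np.float32)
        base = frame_rgb.astype(np.float32)
        return (alpha * gfx + (1.0 - alpha) * base).astype(np.uint8)

    frame = np.zeros((576, 720, 3), dtype=np.uint8)       # PAL-sized frame
    overlay = np.zeros((576, 720, 4), dtype=np.uint8)
    overlay[40:90, 40:300] = (255, 255, 0, 128)           # translucent banner
    composed = composite(frame, overlay)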

Relevance:

80.00%

Publisher:

Abstract:

Rapid Manufacturing (RM) has recently become a buzzword, particularly in the context of Selective Laser Sintering (SLS). Over the course of this now more than 15-year technology development, significant progress has been made in recent years, bringing part properties close to the requirements for end-use parts. RM is thus to be understood less in terms of larger lot sizes; rather, Rapid Manufacturing means that additively manufactured parts are used directly in the end product or end application. Selective Laser Melting (SLM), with which metal parts in standard materials can be produced directly from metallic powders, is predestined for RM because of its good material properties. The first application fields of the SLM process centred on the production of tooling inserts with conformal cooling channels; these tools must be understood as RM, since they are used directly for the end application, the injection moulding process. Current trends, however, point towards the manufacture of functional parts, e.g. for mechanical engineering. Although the production of complex functional parts still poses problems, e.g. with part structures that overhang with respect to the additive build direction, RM by means of SLM nevertheless shows considerable advantages. Besides the clear benefits of being able to customize parts, substantial cost advantages can also be achieved for smaller part sizes. At the same time, the limits of what is currently possible show in which areas the SLM process needs further development. Topics such as productivity, the problem of the still-necessary support structures, and quality assurance must be addressed in the coming years if this process is to take the step towards becoming an established production process and thereby find broader acceptance and application.

Relevance:

80.00%

Publisher:

Abstract:

The paper discusses new business models for the transmission of television programs in the context of the definitions of broadcasting and retransmission. Typically, the whole process of supplying content to the end user has two stages: a media service provider supplies a signal assigned to a given TV channel to cable operators and satellite DTH platform operators (dedicated transmission), and the cable operators and satellite DTH platform operators transmit this signal to end users. In each stage the signals are encoded and are not available to the general public without the intervention of the cable/platform operators. The services relating to the supply and transmission of the content are operated by different business entities: each earns money separately and each uses content protected by copyright. We should therefore determine how to define the actions of the entity supplying the signal with the TV program directly to the cable/digital platform operator, and the actions of the entity providing the end user with the signal. The author criticizes the approach presented in the Chellomedia and Norma rulings, arguing that they lead to a significant level of legal uncertainty, and poses basic questions concerning the notion of "public" in copyright.

Relevance:

80.00%

Publisher:

Abstract:

The competitive industrial context compels companies to speed up every new product design. In order to keep designing products that meet the needs of the end user, a human-centered concurrent product design methodology has been proposed. Setting it up is complicated by the difficulties of collaboration between the experts involved in the design process. In order to ease this collaboration, we propose the use of virtual reality as an intermediate design representation, in the form of lightweight, specialized immersive convergence-support applications. In this paper, we present the As Soon As Possible (ASAP) methodology, which makes the development of these tools possible while ensuring their usefulness and usability. The relevance of this approach is validated by an industrial use case through the design of an ergonomic-style convergence-support tool.

Relevance:

80.00%

Publisher:

Abstract:

This paper proposes the Optimized Power save Algorithm for continuous Media Applications (OPAMA) to improve end-user device energy efficiency. OPAMA enhances the standard legacy Power Save Mode (PSM) of IEEE 802.11 by taking into consideration application-specific requirements combined with data aggregation techniques. By establishing a balanced cost/benefit tradeoff between performance and energy consumption, OPAMA is able to improve energy efficiency while keeping the end-user experience at a desired level. OPAMA was assessed in the OMNeT++ simulator using real traces of variable bitrate video streaming applications. The results showed its capability to enhance energy efficiency, achieving savings of up to 44% when compared with the IEEE 802.11 legacy PSM.

Relevance:

80.00%

Publisher:

Abstract:

The widespread deployment of wireless mobile communications enables almost permanent usage of portable devices, which imposes high demands on the battery of these devices. Indeed, battery lifetime is becoming one of the most critical factors in end-user satisfaction with wireless communications. In this work, the Optimized Power save Algorithm for continuous Media Applications (OPAMA) is proposed, aiming at enhancing energy efficiency on end-user devices. By combining application-specific requirements with data aggregation techniques, OPAMA improves on the standard IEEE 802.11 legacy Power Save Mode (PSM) performance. The algorithm uses feedback on the end-user expected quality to establish a proper tradeoff between energy consumption and application performance. OPAMA was assessed in the OMNeT++ simulator, using real traces of variable bitrate video streaming applications, and in a real testbed employing a novel methodology intended to perform an accurate evaluation of the video Quality of Experience (QoE) perceived by end-users. The results revealed OPAMA's capability to enhance energy efficiency without degrading the end-user observed QoE, achieving savings of up to 44% when compared with the IEEE 802.11 legacy PSM.
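
The core trade-off these two abstracts describe (sleeping longer and aggregating traffic, bounded by what the application tolerates) can be sketched as follows. This toy model is an assumption-laden illustration, not the OPAMA algorithm specified in the papers; the names and the 100 ms beacon interval are assumptions.

    # Toy illustration, not OPAMA itself: the radio may skip beacons and
    # let the access point aggregate frames for as long as the
    # application's delay budget allows.
    from dataclasses import dataclass

    @dataclass
    class AppRequirements:
        max_delay_ms: int              # buffering delay the app tolerates
        beacon_interval_ms: int = 100  # typical 802.11 beacon spacing

    def sleep_beacons(req: AppRequirements) -> int:
        """Beacon intervals the radio may sleep before waking to drain
        the frames aggregated at the access point."""
        return max(1, req.max_delay_ms // req.beacon_interval_ms)

    # A stream tolerating 400 ms of buffering lets the radio skip ~4
    # beacons, turning many small wake-ups into fewer, larger bursts.
    print(sleep_beacons(AppRequirements(max_delay_ms=400)))  # -> 4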

Relevance:

80.00%

Publisher:

Abstract:

PURPOSE: Confidence intervals (CIs) are integral to the interpretation of the precision and clinical relevance of research findings. The aim of this study was to ascertain the frequency of reporting of CIs in leading prosthodontic and dental implantology journals and to explore possible factors associated with improved reporting. MATERIALS AND METHODS: Thirty issues of nine journals in prosthodontics and implant dentistry were accessed, covering the years 2005 to 2012: The Journal of Prosthetic Dentistry, Journal of Oral Rehabilitation, The International Journal of Prosthodontics, The International Journal of Periodontics & Restorative Dentistry, Clinical Oral Implants Research, Clinical Implant Dentistry and Related Research, The International Journal of Oral & Maxillofacial Implants, Implant Dentistry, and Journal of Dentistry. Articles were screened and the reporting of CIs and P values recorded. Other information, including study design, region of authorship, involvement of methodologists, and ethical approval, was also obtained. Univariable and multivariable logistic regression were used to identify characteristics associated with the reporting of CIs. RESULTS: Interrater agreement for the data extraction was excellent (kappa = 0.88; 95% CI: 0.87 to 0.89). CI reporting was limited, with mean reporting across journals of 14%. CI reporting was associated with journal type, study design, and involvement of a methodologist or statistician. CONCLUSIONS: Reporting of CIs in implant dentistry and prosthodontic journals requires improvement. Improved reporting will aid appraisal of the clinical relevance of research findings by providing a range of values within which the effect size lies, thus giving the end user the opportunity to interpret the results in relation to clinical practice.
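
For readers outside statistics, a quick illustration of what the study tallies: a 95% CI for a sample mean, computed with the usual normal approximation. The values below are illustrative placeholders, not data from the study.

    # Sketch: 95% confidence interval for a sample mean (normal
    # approximation, z = 1.96). Values are placeholders for illustration.
    import math

    def mean_ci_95(values):
        n = len(values)
        mean = sum(values) / n
        var = sum((x - mean) ** 2 for x in values) / (n - 1)
        half = 1.96 * math.sqrt(var / n)
        return mean - half, mean + half

    print(mean_ci_95([4.1, 3.8, 4.4, 4.0, 3.9, 4.2]))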

Relevance:

80.00%

Publisher:

Abstract:

The widespread use of wireless-enabled devices and the increasing capabilities of wireless technologies have promoted multimedia content access and sharing among users. However, the quality perceived by users still depends on multiple factors, such as video characteristics, device capabilities, and link quality. While video characteristics include the temporal and spatial complexity of the video as well as the coding complexity, one of the most important device characteristics is battery lifetime. There is a need to assess how these aspects interact and how they impact overall user satisfaction. This paper advances previous works by proposing and validating a flexible framework, named EViTEQ, to be applied in real testbeds to satisfy the requirements of performance assessment. EViTEQ is able to measure network interface energy consumption with high precision, while being completely technology independent and assessing application-level quality of experience. The results obtained in the testbed show the relevance of combined multi-criteria measurement approaches, leading to superior evaluation of end-user satisfaction perception.

Relevance:

80.00%

Publisher:

Abstract:

This paper presents a new tool for large-area photo-mosaicking (LAPM tool). The tool was developed specifically for underwater mosaicking and aims at providing end-user scientists with an easy and robust way to construct large photo-mosaics from any set of images. It is notably capable of constructing mosaics with an unlimited number of images on any modern computer (minimum 1.30 GHz, 2 GB RAM). The mosaicking process can rely on both feature matching and navigation data. This is complemented by an intuitive graphical user interface, which gives the user the ability to select feature matches between any pair of overlapping images. Finally, mosaic files are given geographic attributes that permit direct import into ArcGIS. So far, the LAPM tool has been successfully used to construct geo-referenced photo-mosaics from photo and video material from several scientific cruises. The largest photo-mosaic contained more than 5000 images covering a total area of about 105,000 m². This is the first article to present and provide a finished and functional program for constructing large geo-referenced photo-mosaics of the seafloor using feature detection and matching techniques. It also presents concrete examples of photo-mosaics produced with the LAPM tool.
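
The feature-detection-and-matching step named in the abstract typically pairs keypoint descriptors between overlapping images and fits a homography. A minimal sketch of that generic step using OpenCV; the LAPM tool's own implementation may differ, and the file names are placeholders.

    # Sketch of generic pairwise image registration: detect ORB keypoints,
    # match descriptors, and fit a homography with RANSAC.
    import cv2
    import numpy as np

    img1 = cv2.imread("seafloor_001.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("seafloor_002.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    # H maps image 1 into image 2's frame; chaining such homographies
    # (refined with navigation data) places every image in the mosaic.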

Relevance:

80.00%

Publisher:

Abstract:

In 2005, the International Ocean Colour Coordinating Group (IOCCG) convened a working group to examine the state of the art in ocean colour data merging, which showed that the research techniques had matured sufficiently for creating long multi-sensor datasets (IOCCG, 2007). As a result, ESA initiated and funded the DUE GlobColour project (http://www.globcolour.info/) to develop a satellite-based ocean colour data set to support global carbon-cycle research. It aims to satisfy the scientific requirement for a long (10+ year) time series of consistently calibrated global ocean colour information with the best possible spatial coverage. This has been achieved by merging data from the three most capable sensors: SeaWiFS on GeoEye's Orbview-2 mission, MODIS on NASA's Aqua mission and MERIS on ESA's ENVISAT mission. In setting up the GlobColour project, three user organisations were invited to help. Their roles are to specify the detailed user requirements, act as a channel to the broader end-user community, and provide feedback and assessment of the results. The International Ocean Carbon Coordination Project (IOCCP), based at UNESCO in Paris, provides direct access to the carbon-cycle modelling community's requirements and to the modellers themselves who will use the final products. The UK Met Office's National Centre for Ocean Forecasting (NCOF) in Exeter, UK, provides an understanding of the requirements of oceanography users, and the IOCCG brings its understanding of global user needs and valuable advice on best practice within the ocean colour science community. The three-year project kicked off in November 2005 under the leadership of ACRI-ST (France). The first year was a feasibility demonstration phase that was successfully concluded at a user consultation workshop organised by the Laboratoire d'Océanographie de Villefranche, France, in December 2006. Error statistics and inter-sensor biases were quantified by comparison with in-situ measurements from moored optical buoys and ship-based campaigns, and used as an input to the merging. The second year was dedicated to the production of the time series. In total, more than 25 Tb of input (level 2) data have been ingested and 14 Tb of intermediate and output products created, with 4 Tb of data distributed to the user community. Quality control (QC) is provided through the Diagnostic Data Sets (DDS), which are extracted sub-areas covering locations of in-situ data collection or interesting oceanographic phenomena. This Full Product Set (FPS) covers global daily merged ocean colour products for the period 1997-2006 and is freely available to the worldwide science community at http://www.globcolour.info/data_access_full_prod_set.html. The GlobColour service distributes global daily, 8-day and monthly data sets at 4.6 km resolution for chlorophyll-a concentration, normalised water-leaving radiances (412, 443, 490, 510, 531, 555, 620, 670, 681 and 709 nm), diffuse attenuation coefficient, coloured dissolved and detrital organic materials, total suspended matter or particulate backscattering coefficient, turbidity index, cloud fraction and quality indicators. Error statistics from the initial sensor characterisation are used as an input to the merging methods and propagate through the merging process to provide error estimates for the output merged products.
These error estimates are a key component of GlobColour, as they are invaluable to the users, particularly the modellers, who need them in order to assimilate the ocean colour data into ocean simulations. An intensive phase of validation has been undertaken to assess the quality of the data set. In addition, inter-comparisons between the different merged datasets will help in further refining the techniques used. Both the final products and the quality assessment were presented at a second user consultation in Oslo on 20-22 November 2007, organised by the Norwegian Institute for Water Research (NIVA); presentations are available on the GlobColour WWW site. At the request of the ESA Technical Officer for the GlobColour project, the FPS data set was mirrored in the PANGAEA data library.
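
The role the error statistics play in merging can be illustrated with the generic inverse-variance scheme, in which per-sensor uncertainties both weight the average and propagate into an uncertainty for the merged product. A minimal sketch under that assumption; the GlobColour merging methods themselves are defined by the project, and the retrieval values below are hypothetical.

    # Sketch of generic inverse-variance merging: per-sensor error
    # estimates weight the average and propagate to the merged product.
    import numpy as np

    def merge_inverse_variance(values, sigmas):
        values = np.asarray(values, dtype=float)
        weights = 1.0 / np.asarray(sigmas, dtype=float) ** 2
        merged = np.sum(weights * values) / np.sum(weights)
        return merged, np.sqrt(1.0 / np.sum(weights))

    # Hypothetical co-located chlorophyll-a retrievals (mg/m^3) from
    # SeaWiFS, MODIS and MERIS with their 1-sigma error estimates:
    print(merge_inverse_variance([0.42, 0.47, 0.44], [0.06, 0.05, 0.08]))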

Relevance:

80.00%

Publisher:

Abstract:

Purpose: The purpose of this paper is to present what kinds of elements and evaluation methods should be included in a framework for evaluating the achievements and impacts of transport projects supported by the EC Framework Programmes (FP). Further, the paper discusses the possibilities of such an evaluation framework for producing recommendations regarding future transport research and policy objectives, as well as mutual learning as a basis for strategic long-term planning. Methods: The paper describes the two-dimensional evaluation methodology developed in the course of the FP7 METRONOME project. The dimensions are: (1) achievement of project objectives and targets at different levels, and (2) research project impacts according to four impact groups. The methodology uses four complementary approaches in evaluation, namely evaluation matrices, coordinator questionnaires, lead-user interviews and workshops. Results: Based on testing the methodology with a sample of FP5 and FP6 projects, the main results relating to the rationale, implementation and achievements of FP projects are presented. In general, achievement of objectives in both FPs was good. The strongest impacts were identified within the impact group of management and coordination. The scientific and end-user impacts of the projects were also adequate, but wider societal impacts were quite modest. The paper concludes with a discussion of both the theoretical and practical implications of the proposed methodology and by presenting some relevant future research needs.