962 results for End-user Queries
Abstract:
Self-organising pervasive ecosystems of devices are set to become a major vehicle for delivering infrastructure and end-user services. The inherent complexity of such systems poses new challenges to those who want to tame it by applying the principles of engineering. The recent growth in the number and distribution of devices with decent computational and communication abilities, suddenly accelerated by the massive diffusion of smartphones and tablets, is delivering a world with a much higher density of devices in space. Communication technologies also seem to be focussing on short-range device-to-device (P2P) interactions, with technologies such as Bluetooth and Near-Field Communication gaining greater adoption. Locality and situatedness become key to providing the best possible experience to users, and the classic model of a centralised, enormously powerful server gathering and processing data becomes less and less efficient as device density grows. Accomplishing complex global tasks without a centralised controller responsible for aggregating data, however, is a challenging task. In particular, there is a local-to-global issue that makes the application of engineering principles difficult: designing device-local programs that, through interaction, guarantee a certain global service level. In this thesis, we first analyse the state of the art in coordination systems, then motivate the work by describing the main issues of pre-existing tools and practices and identifying the improvements that would benefit the design of such complex software ecosystems. The contribution can be divided into three main branches. First, we introduce a novel simulation toolchain for pervasive ecosystems, designed to allow good expressiveness while retaining high performance. Second, we leverage existing coordination models and patterns in order to create new spatial structures.
Third, we introduce a novel language, based on the existing "Field Calculus" and integrated with the aforementioned toolchain, designed to be usable for practical aggregate programming.
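The local-to-global issue this abstract describes is often illustrated with the classic "gradient" example from aggregate computing: every device runs the same local rule (take the minimum of each neighbour's estimate plus the link cost), and a globally consistent distance field emerges. The following minimal Python simulation is only an illustration of that principle; the topology, node names, and round-based scheduling are invented here, not taken from the thesis.

```python
# Illustrative sketch of the local-to-global principle behind aggregate
# programming: every node applies the SAME local rule, yet a global
# distance-to-source field emerges. Topology and names are invented.
import math

def gradient(neighbours, sources, rounds=10):
    """neighbours: node -> list of (neighbour, link_cost) pairs."""
    dist = {n: (0.0 if n in sources else math.inf) for n in neighbours}
    for _ in range(rounds):
        new = {}
        for n, links in neighbours.items():
            if n in sources:
                new[n] = 0.0
            else:
                # Local rule: minimum over neighbours of estimate + cost.
                new[n] = min((dist[m] + c for m, c in links),
                             default=math.inf)
        dist = new
    return dist

# A small line topology A - B - C with unit link costs, source A.
topo = {
    "A": [("B", 1.0)],
    "B": [("A", 1.0), ("C", 1.0)],
    "C": [("B", 1.0)],
}
field = gradient(topo, sources={"A"})
# After convergence the field holds each node's distance to the source.
```

No node ever sees the whole network, yet after a few rounds the field stabilises to the global shortest-path distances, which is exactly the kind of guarantee the thesis aims to make designable.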
Abstract:
Volatile amines are prominent indicators of food freshness, as they are produced during many microbiological food degradation processes. Monitoring and indicating the volatile amine concentration within the food package by intelligent packaging solutions might therefore be a simple yet powerful way to control food safety throughout the distribution chain.

In this context, this work aims at the formation of colourimetric amine-sensing surfaces on different substrates, especially transparent PET packaging foil. The colour change of the deposited layers should ideally be discernible by the human eye to facilitate the determination by the end-user.

Different tailored zinc(II) and chromium(III) metalloporphyrins have been used as chromophores for the colourimetric detection of volatile amines. A new concept to increase the porphyrins' absorbance change upon exposure to amines is introduced. Moreover, the novel porphyrins' processability during the deposition process is increased by their enhanced solubility in non-polar solvents.

The porphyrin chromophores have successfully been incorporated into polysiloxane matrices on different substrates via a dielectric barrier discharge enhanced chemical vapour deposition. This process allows the use of nitrogen as a cheap and abundant plasma gas, produces minor amounts of waste and by-products, and can easily be introduced into (existing) roll-to-roll production lines. The formed hybrid sensing layers tightly incorporate the porphyrins and moreover form a porous structure that facilitates the amines' diffusion to, and interaction with, the chromophores.

The work is completed with a thorough analysis of the porphyrins' amine-sensing performance in solution as well as in the hybrid coatings. To reveal the underlying interaction mechanisms, the experimental results are supported by DFT calculations. The deposited layers could be used for the detection of NEt3 concentrations below 10 ppm in the gas phase. Moreover, the coated foils have been tested in preliminary food storage experiments.

The mechanistic investigations on the interaction of amines with chromium(III) porphyrins revealed a novel pathway to the formation of chromium(IV) oxido porphyrins. This has been used for electrochemical epoxidation reactions with dioxygen as the formal terminal oxidant.
Abstract:
The goal of this thesis is to design an enabling architecture for smart health scenarios, focusing on the end-user side (rather than the server/cloud side), that is, experimenting in the field of wearable devices with reference to the fitness/Apple Watch pairing, a convenient choice because it is present at the company FitStadium, which provides us with motivations, requirements, and goals. The first chapter analyses the solutions currently offered by the market for building fitness-related services, focusing in particular on the proposed architectures and on how they can coexist with the FitStadium ecosystem. The second chapter is instead devoted to a closer look at the Apple technologies that will be used concretely to implement the case study. Once again, attention is paid to the architectural possibilities offered by these technologies. The third chapter covers the case study in its entirety, analysing in particular its state before and after the thesis work. That is, it describes the implemented application together with the presentation of an architecture that also enables smart health scenarios. Finally, chapter 4 describes more precisely the concept of smart health and the path that led to its definition.
Abstract:
The evolution of Next Generation Networks, especially wireless broadband access technologies such as Long Term Evolution (LTE) and Worldwide Interoperability for Microwave Access (WiMAX), has increased the number of "all-IP" networks across the world. The enhanced capabilities of these access networks have spearheaded the cloud computing paradigm, where end-users aim to have their services accessible anytime and anywhere. Service availability is also related to the end-user device, where one of the major constraints is battery lifetime. It is therefore necessary to assess and minimize the energy consumed by end-user devices, given its significance for the user-perceived quality of cloud computing services. In this paper, an empirical methodology to measure the energy consumption of network interfaces is proposed. By employing this methodology, an experimental evaluation of energy consumption in three different cloud computing access scenarios (including WiMAX) was performed. The empirical results show the impact of accurate network interface state management and application-level network design on energy consumption. Additionally, the outcomes can be used in further software-based models to optimize energy consumption and increase the Quality of Experience (QoE) perceived by end-users.
Abstract:
The increasing usage of wireless networks creates new challenges for wireless access providers. On the one hand, providers want to satisfy user demands; on the other hand, they try to reduce operational costs by decreasing energy consumption. In this paper, we evaluate the trade-off between energy efficiency and quality of experience for a wireless mesh testbed. The results show that with intelligent service control, resources can be better utilized and energy can be saved by reducing the number of active network components. However, care has to be taken because the channel bandwidth varies in wireless networks. In the second part of the paper, we analyze the trade-off between energy efficiency and quality of experience at the end user. The results reveal that a provider's service control measures do not only reduce the operational costs of the network but also bring a second benefit: they help maximize the battery lifetime of the end-user device.
Abstract:
Interactive TV technology has been addressed in many previous works, but there is sparse research on the topic of interactive content broadcasting and how to support the production process. In this article, the interactive broadcasting process is broadly defined to include studio technology and digital TV applications at consumer set-top boxes. In particular, augmented reality studio technology employs smart projectors as light sources and blends real scenes with interactive computer graphics that are controlled at end-user terminals. Moreover, TV-producer-friendly multimedia authoring tools empower the development of novel TV formats. Finally, the support for user-contributed content raises the potential to revolutionize the hierarchical TV production process by introducing the viewer as part of the content delivery chain.
Abstract:
Television and movie images have been altered ever since it became technically possible. Nowadays, embedding advertisements or incorporating text and graphics in TV scenes is common practice, but these additions cannot be considered an integrated part of the scene. This paper discusses the introduction of new services for interactive augmented television. We analyse the main aspects related to the whole chain of augmented reality production. Interactivity is one of the most important added values of digital television: this paper aims to break the model where all TV viewers receive the same final image. Thus, we introduce and discuss the new concept of interactive augmented television, i.e. real-time composition of video and computer graphics - e.g. a real scene and freely selectable images or spatially rendered objects - edited and customized by the end user within the context of the user's set-top box and TV receiver.
Abstract:
Rapid Manufacturing (RM) has recently become a buzzword, particularly in the field of Selective Laser Sintering (SLS). Over the more than 15 years of this technology's development, significant progress has been made in recent years, bringing part properties close to the requirements for end-use parts. RM should therefore be understood less in terms of larger batch sizes; rather, Rapid Manufacturing means that additively manufactured parts are used directly in the end product or end application. Selective Laser Melting (SLM), which produces metal parts directly from metallic powder materials in standard alloys, is predestined for RM thanks to its good material properties. The first application fields of the SLM process focused on the production of tooling inserts with conformal cooling channels; these tools must be regarded as RM, since they are used directly in the end application, the injection-moulding process. Current trends, however, point towards the manufacture of functional parts, e.g. for mechanical engineering. Although the production of complex functional parts still raises problems, e.g. with part structures that overhang with respect to the build direction, RM by means of SLM nevertheless shows considerable advantages. Besides the clear benefits of possible part customization, considerable cost advantages can also be achieved for smaller part sizes. However, the limits of what is currently feasible show in which areas the SLM process needs further development.
Topics such as productivity, the still-necessary support structures, and quality assurance must be addressed in the coming years if this process is to take the step towards becoming an established production method and thus find broader acceptance and application.
Abstract:
The paper discusses new business models of transmission of television programs in the context of definitions of broadcasting and retransmission. Typically the whole process of supplying content to the end user has two stages: a media service provider supplies a signal assigned to a given TV channel to the cable operators and satellite DTH platform operators (dedicated transmission), and cable operators and satellite DTH platform operators transmit this signal to end users. In each stage the signals are encoded and are not available for the general public without the intervention of cable/platform operators. The services relating to the supply and transmission of the content are operated by different business entities: each earns money separately and each uses the content protected by copyright. We should determine how to define the actions of the entity supplying the signal with the TV program directly to the cable/digital platform operator and the actions of the entity providing the end user with the signal. The author criticizes the approach presented in the Chellomedia and Norma rulings, arguing that they lead to a significant level of legal uncertainty, and poses the basic questions concerning the notion of “public” in copyright.
Abstract:
The competitive industrial context compels companies to speed up every new product design. In order to keep designing products that meet the needs of the end user, a human-centered concurrent product design methodology has been proposed. Its setting up is complicated by the difficulties of collaboration between the experts involved in the design process. In order to ease this collaboration, we propose the use of virtual reality as an intermediate design representation in the form of light, specialized immersive convergence support applications. In this paper, we present the As Soon As Possible (ASAP) methodology, which makes the development of these tools possible while ensuring their usefulness and usability. The relevance of this approach is validated by an industrial use case through the design of an ergonomic-style convergence support tool.
Abstract:
This paper proposes the Optimized Power save Algorithm for continuous Media Applications (OPAMA) to improve end-user device energy efficiency. OPAMA enhances the standard legacy Power Save Mode (PSM) of IEEE 802.11 by taking into consideration application specific requirements combined with data aggregation techniques. By establishing a balanced cost/benefit tradeoff between performance and energy consumption, OPAMA is able to improve energy efficiency, while keeping the end-user experience at a desired level. OPAMA was assessed in the OMNeT++ simulator using real traces of variable bitrate video streaming applications. The results showed the capability to enhance energy efficiency, achieving savings up to 44% when compared with the IEEE 802.11 legacy PSM.
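The cost/benefit tradeoff OPAMA strikes can be sketched in a few lines. The following is not the published algorithm, only an invented illustration of the underlying idea: buffer application frames while the radio sleeps, and wake the interface only when the aggregated batch is large enough or the application's latency budget would be violated. Thresholds and arrival patterns are hypothetical.

```python
# Hedged sketch (NOT the published OPAMA algorithm) of the idea it
# describes: aggregate pending data while the radio sleeps, waking only
# when the batch is big enough or the latency budget is exhausted.

def plan_wakeups(arrivals, batch_size, max_delay):
    """arrivals: list of (time, bytes) frames, in time order.
    Wake when buffered bytes >= batch_size, or when the oldest
    buffered frame has waited max_delay. Returns wake-up times."""
    wakeups, buffered, oldest = [], 0, None
    for t, size in arrivals:
        if oldest is None:
            oldest = t          # first frame of the current batch
        buffered += size
        if buffered >= batch_size or t - oldest >= max_delay:
            wakeups.append(t)   # flush the batch: radio wakes here
            buffered, oldest = 0, None
    return wakeups

# Example: 400-byte video frames arriving each second; with a 1000-byte
# batch threshold the radio wakes once per three frames instead of per
# frame, trading a bounded delay for fewer active-radio periods.
frames = [(i, 400) for i in range(6)]
schedule = plan_wakeups(frames, batch_size=1000, max_delay=10)
```

Fewer, denser transmission bursts let the interface spend longer stretches in the low-power state, which is the mechanism behind the reported savings; the `max_delay` bound is what keeps the end-user experience at the desired level.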
Abstract:
The widespread deployment of wireless mobile communications enables an almost permanent usage of portable devices, which imposes high demands on the battery of these devices. Indeed, battery lifetime is becoming one of the most critical factors in end-user satisfaction with wireless communications. In this work, the Optimized Power save Algorithm for continuous Media Applications (OPAMA) is proposed, aiming at enhancing energy efficiency on end-user devices. By combining application-specific requirements with data aggregation techniques, OPAMA improves on the performance of the standard IEEE 802.11 legacy Power Save Mode (PSM). The algorithm uses feedback on the end-user expected quality to establish a proper tradeoff between energy consumption and application performance. OPAMA was assessed in the OMNeT++ simulator, using real traces of variable bitrate video streaming applications, and in a real testbed employing a novel methodology intended to perform an accurate evaluation of the video Quality of Experience (QoE) perceived by end-users. The results revealed OPAMA's capability to enhance energy efficiency without degrading the end-user observed QoE, achieving savings of up to 44% when compared with the IEEE 802.11 legacy PSM.
Abstract:
PURPOSE Confidence intervals (CIs) are integral to the interpretation of the precision and clinical relevance of research findings. The aim of this study was to ascertain the frequency of reporting of CIs in leading prosthodontic and dental implantology journals and to explore possible factors associated with improved reporting. MATERIALS AND METHODS Thirty issues of nine journals in prosthodontics and implant dentistry were accessed, covering the years 2005 to 2012: The Journal of Prosthetic Dentistry, Journal of Oral Rehabilitation, The International Journal of Prosthodontics, The International Journal of Periodontics & Restorative Dentistry, Clinical Oral Implants Research, Clinical Implant Dentistry and Related Research, The International Journal of Oral & Maxillofacial Implants, Implant Dentistry, and Journal of Dentistry. Articles were screened and the reporting of CIs and P values recorded. Other information, including study design, region of authorship, involvement of methodologists, and ethical approval, was also obtained. Univariable and multivariable logistic regression was used to identify characteristics associated with reporting of CIs. RESULTS Interrater agreement for the data extraction performed was excellent (kappa = 0.88; 95% CI: 0.87 to 0.89). CI reporting was limited, with a mean reporting rate across journals of 14%. CI reporting was associated with journal type, study design, and involvement of a methodologist or statistician. CONCLUSIONS Reporting of CIs in implant dentistry and prosthodontic journals requires improvement. Improved reporting will aid appraisal of the clinical relevance of research findings by providing a range of values within which the effect size lies, thus giving the end user the opportunity to interpret the results in relation to clinical practice.
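For readers unfamiliar with the interval being discussed, a 95% CI for a proportion such as the 14% reporting rate above can be computed with the standard Wald formula. The sample size in this sketch is invented for illustration; only the formula itself is standard.

```python
# Hedged illustration: 95% Wald confidence interval for a proportion.
# The sample size n below is invented; the formula is the standard
# p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n) with z = 1.96 for 95%.
import math

def wald_ci(p_hat, n, z=1.96):
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error
    return (p_hat - z * se, p_hat + z * se)

# e.g. if 14% of a hypothetical 500 screened articles reported CIs:
low, high = wald_ci(0.14, 500)
```

The width of this interval shrinks with the square root of the sample size, which is exactly the precision information that a bare P value omits and that the study argues should be reported.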
Abstract:
The widespread use of wireless-enabled devices and the increasing capabilities of wireless technologies have promoted multimedia content access and sharing among users. However, the quality perceived by users still depends on multiple factors such as video characteristics, device capabilities, and link quality. While video characteristics include the video's temporal and spatial complexity as well as its coding complexity, one of the most important device characteristics is battery lifetime. There is a need to assess how these aspects interact and how they impact overall user satisfaction. This paper advances previous works by proposing and validating a flexible framework, named EViTEQ, to be applied in real testbeds to satisfy the requirements of performance assessment. EViTEQ is able to measure network interface energy consumption with high precision, while being completely technology independent and assessing application-level quality of experience. The results obtained in the testbed show the relevance of combined multi-criteria measurement approaches, leading to a superior evaluation of perceived end-user satisfaction.
Abstract:
This paper presents a new tool for large-area photo-mosaicking (LAPM tool). This tool was developed specifically for the purpose of underwater mosaicking, and it is aimed at providing end-user scientists with an easy and robust way to construct large photo-mosaics from any set of images. It is notably capable of constructing mosaics with an unlimited number of images on any modern computer (minimum 1.30 GHz, 2 GB RAM). The mosaicking process can rely on both feature matching and navigation data. This is complemented by an intuitive graphical user interface, which gives the user the ability to select feature matches between any pair of overlapping images. Finally, mosaic files are given geographic attributes that permit direct import into ArcGIS. So far, the LAPM tool has been successfully used to construct geo-referenced photo-mosaics with photo and video material from several scientific cruises. The largest photo-mosaic contained more than 5000 images for a total area of about 105,000 m². This is the first article to present and to provide a finished and functional program to construct large geo-referenced photo-mosaics of the seafloor using feature detection and matching techniques. It also presents concrete examples of photo-mosaics produced with the LAPM tool.
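The navigation-data side of the mosaicking process described above can be sketched very simply: chain the pairwise offsets between consecutive images into global mosaic coordinates, which then serve as the starting point for feature-based refinement. This is not the LAPM tool's actual code; the offsets below are invented for illustration.

```python
# Illustrative sketch (not the LAPM tool's implementation) of
# navigation-based mosaic placement: accumulate pairwise (dx, dy)
# offsets between consecutive images into global mosaic coordinates.

def place_images(offsets):
    """offsets: list of (dx, dy) displacements from each image to the
    next, e.g. derived from vehicle navigation data. Returns global
    (x, y) positions with the first image anchored at the origin."""
    positions = [(0.0, 0.0)]
    for dx, dy in offsets:
        x, y = positions[-1]
        positions.append((x + dx, y + dy))
    return positions

# Three images: the vehicle moves 10 m east, then 5 m north.
layout = place_images([(10.0, 0.0), (0.0, 5.0)])
```

Because each offset's error accumulates along the chain, tools of this kind typically refine such an initial layout with feature matches between overlapping images, as the abstract describes.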