962 results for ubiquitous multi-core framework
Abstract:
Fingerprinting is a well-known approach for identifying multimedia data without the original data being present, using instead what amounts to its essence or 'DNA'. Current approaches make insufficient use of the various types of knowledge that could be brought to bear in a fingerprinting framework that remains effective and efficient while accommodating both whole-artefact and element-level protection, at levels of abstraction suited to the various Zones of Interest (ZoI) in an image or cross-media artefact. The proposed framework aims to deliver selective composite fingerprinting aided by multi-modal information as well as a rich spectrum of collateral context knowledge, including both image-level collaterals and the market intelligence knowledge that is inevitably needed, such as profiling of customers' social network interests, which can be deployed as a crucial component of the fingerprinting collateral knowledge.
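To make the idea of selective composite fingerprinting concrete, here is a minimal Python sketch (purely illustrative, not the framework proposed above): each Zone of Interest is hashed separately and the digests are combined with a whole-image digest into one composite fingerprint. A real system would use perceptual hashes robust to re-encoding rather than SHA-256, and all names below are hypothetical.

    import hashlib
    from typing import Dict, List, Tuple

    # A zone of interest is a labelled crop of the image: (label, x, y, width, height).
    ZoI = Tuple[str, int, int, int, int]

    def zone_fingerprint(pixels: List[List[int]], zone: ZoI) -> str:
        """Digest of the grayscale pixels inside one zone of interest."""
        _, x, y, w, h = zone
        region = bytes(pixels[r][c] for r in range(y, y + h) for c in range(x, x + w))
        return hashlib.sha256(region).hexdigest()

    def composite_fingerprint(pixels: List[List[int]], zones: List[ZoI]) -> Dict[str, str]:
        """Selective composite fingerprint: one digest per zone plus a whole-image digest."""
        fp = {z[0]: zone_fingerprint(pixels, z) for z in zones}
        fp["whole"] = hashlib.sha256(bytes(v for row in pixels for v in row)).hexdigest()
        return fp

    # Toy usage: a 4x4 grayscale "image" with one 2x2 zone of interest.
    img = [[16 * i + j for j in range(4)] for i in range(4)]
    print(composite_fingerprint(img, [("face", 1, 1, 2, 2)]))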
Abstract:
Assessing the ways in which rural agrarian areas provide Cultural Ecosystem Services (CES) is proving difficult. This research has developed an innovative methodological approach, the Multi-Scale Indicator Framework (MSIF), for capturing the CES embedded in rural agrarian areas. The framework reconciles a literature review with a trans-disciplinary participatory workshop. Both of these sources reveal that societal preferences diverge across judgemental criteria, which in turn relate to different visual concepts that can be drawn from analysing the attributes, elements, features and characteristics of rural areas. We contend that it is now possible to list a set of candidate multi-scale indicators for stewardship, diversity and aesthetics. These results might also be of use for improving existing European indicator frameworks by including CES. This research carries major implications for policy at different levels of governance, as it makes it possible to target and monitor policy instruments in relation to physical rural settings so that cultural dimensions are adequately considered. Work remains to be done on region-specific values and thresholds for each criterion and its indicator set. In practical terms, by developing the conceptual design within a common framework as described in this paper, a considerable step can be taken towards including the cultural dimension in Europe-wide assessments.
Abstract:
Cover crops are sown to provide a number of ecosystem services, including nutrient management, mitigation of diffuse pollution, improved soil structure and organic matter content, weed suppression, nitrogen fixation and the provision of resources for biodiversity. Although the decision to sow a cover crop may be driven by a desire to achieve just one of these objectives, the diversity of cover crop species and mixtures available means that there is potential to combine a number of ecosystem services within the same crop and growing season. Designing multi-functional cover crops would help to reconcile the often conflicting agronomic and environmental agendas and contribute to the optimal use of land. We present a framework for integrating the multiple ecosystem services delivered by cover crops that aims to design a mixture of species with complementary growth habit and functionality. The optimal number and identity of species will depend on the services included in the analysis, the functional space represented by the available species pool, and the community dynamics of the crop in terms of dominance and co-existence. Experience from a project that applied the framework to fertility-building leys in organic systems demonstrated its potential and emphasised the importance of the initial choice of species to include in the analysis.
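As a purely illustrative sketch of the mixture-design idea (the species, service scores, and greedy selection rule below are hypothetical, not the framework itself), species selection can be framed as maximizing the combined coverage of the target services:

    # Hypothetical per-species scores (0-1) for a few ecosystem services.
    SPECIES_SERVICES = {
        "rye":      {"weed_suppression": 0.8, "biodiversity": 0.3},
        "clover":   {"n_fixation": 0.9, "biodiversity": 0.6, "weed_suppression": 0.4},
        "phacelia": {"biodiversity": 0.9, "weed_suppression": 0.5},
        "vetch":    {"n_fixation": 0.8, "weed_suppression": 0.6},
    }

    def design_mixture(targets, max_species=3):
        """Greedily add the species giving the largest total gain in service coverage."""
        chosen, coverage = [], {s: 0.0 for s in targets}
        pool = dict(SPECIES_SERVICES)

        def gain(name):
            scores = pool[name]
            return sum(min(1.0, coverage[s] + scores.get(s, 0.0)) - coverage[s]
                       for s in targets)

        while pool and len(chosen) < max_species:
            best = max(pool, key=gain)
            if gain(best) == 0:
                break  # no remaining species improves any target service
            for s in targets:
                coverage[s] = min(1.0, coverage[s] + pool[best].get(s, 0.0))
            chosen.append(best)
            del pool[best]
        return chosen, coverage

    mix, cov = design_mixture(["n_fixation", "weed_suppression", "biodiversity"])
    print(mix, cov)

A real application would also account for complementarity of growth habit and for the community dynamics (dominance, co-existence) noted above, rather than treating service scores as purely additive.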
Abstract:
The work described in this thesis aims to support the distributed design of integrated systems, with specific attention to the need for collaborative interaction among designers. Particular emphasis was given to issues only marginally considered in previous approaches, such as the abstraction of the distribution of design automation resources over the network, the possibility of both synchronous and asynchronous interaction among designers, and support for extensible design data models. Such issues demand a rather complex software infrastructure, as possible solutions must encompass a wide range of software modules: from user interfaces to middleware to databases. To build such a structure, several engineering techniques were employed and some original solutions were devised. The core of the proposed solution is based on the joint application of two homonymous technologies: CAD Frameworks and object-oriented frameworks. The former concept was coined in the late 1980s within the electronic design automation community and comprises a layered software environment which aims to support CAD tool developers, CAD administrators/integrators and designers. The latter, developed during the last decade by the software engineering community, is a software architecture model for building extensible and reusable object-oriented software subsystems. In this work, we propose an object-oriented framework which includes extensible sets of design data primitives and design tool building blocks. This object-oriented framework is included within a CAD Framework, where it plays important roles in typical CAD Framework services such as design data representation and management, versioning, user interfaces, design management and tool integration. The implemented CAD Framework, named Cave2, follows the classical layered architecture presented by Barnes, Harrison, Newton and Spickelmier, but the object-oriented framework foundations allowed a series of improvements not available in previous approaches:
- Object-oriented frameworks are extensible by design, and this also holds for the implemented sets of design data primitives and design tool building blocks. Both the design representation model and the software modules dealing with it can therefore be upgraded or adapted to a particular design methodology, and such extensions and adaptations still inherit the architectural and functional aspects implemented in the object-oriented framework foundation.
- The design semantics and the design visualization are both part of the object-oriented framework, but in clearly separated models. This allows different visualization strategies for a given design data set, giving collaborating parties the flexibility to choose individual visualization settings.
- The control of consistency between semantics and visualization, a particularly important issue in a design environment with multiple views of a single design, is also included in the foundations of the object-oriented framework. The mechanism is generic enough to be used by further extensions of the design data model, as it is based on the inversion of control between view and semantics: the view receives the user input and propagates the event to the semantic model, which evaluates whether a state change is possible and, if so, triggers the change of state of both semantics and view. Our approach took advantage of this inversion of control and included a layer between semantics and view to account for multi-view consistency.
- To optimize the consistency control mechanism between views and semantics, we propose an event-based approach that captures each discrete interaction of a designer with his or her design views. The information about each interaction is encapsulated in an event object, which may be propagated to the design semantics, and thus to other views, according to the consistency policy in use. Furthermore, the use of event pools allows late synchronization between view and semantics when no network connection is available between them.
- The use of proxy objects significantly raised the abstraction of the integration of design automation resources, as both remote and local tools and services are accessed through method calls on a local object. Connecting to remote tools and services through a look-up protocol also completely abstracts the network location of such resources, allowing resources to be added and removed at runtime.
- The implemented CAD Framework is entirely based on Java technology, relying on the Java Virtual Machine as the layer that grants independence between the CAD Framework and the operating system.
All these improvements contributed to a higher abstraction of the distribution of design automation resources and introduced a new paradigm for remote interaction between designers. The resulting CAD Framework supports fine-grained, event-based collaboration, so every single design update performed by a designer can be propagated to the rest of the design team regardless of their location in the distributed environment. This can increase group awareness and allow a richer transfer of experience, significantly improving the collaboration potential compared to previously proposed file-based or record-based approaches. Three case studies were conducted to validate the proposed approach, each focusing on a subset of the contributions of this thesis. The first uses the proxy-based resource distribution architecture to implement a prototyping platform based on reconfigurable hardware modules. The second extends the foundations of the implemented object-oriented framework to support interface-based design; these extensions (design representation primitives and tool blocks) are used to implement a design entry tool named IBlaDe, which allows the collaborative creation of functional and structural models of integrated systems. The third case study concerns the integration of multimedia metadata into the design data model, explored in the context of an online educational and training platform.
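The inversion of control between view and semantics described above can be pictured with a minimal sketch (the thesis implementation is in Java; this Python sketch and its class names are illustrative only): a view forwards each user interaction as an event object to the semantic model, which validates the requested state change and, if it is possible, updates itself and refreshes every registered view.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class DesignEvent:
        """Encapsulates one discrete designer interaction (the event-based approach)."""
        attribute: str
        value: object

    class SemanticModel:
        """Holds the design semantics and decides whether a state change is possible."""
        def __init__(self):
            self._state = {}
            self._views: List["View"] = []

        def attach(self, view: "View"):
            self._views.append(view)

        def handle(self, event: DesignEvent):
            # Inversion of control: views never mutate state directly; they submit
            # events, and the semantics validates before any change happens.
            if self._is_valid(event):
                self._state[event.attribute] = event.value
                for v in self._views:       # propagate to every view: multi-view consistency
                    v.refresh(event)

        def _is_valid(self, event: DesignEvent) -> bool:
            return event.value is not None  # placeholder validation rule

    class View:
        def __init__(self, name: str, model: SemanticModel):
            self.name, self.model = name, model
            model.attach(self)

        def user_input(self, attribute: str, value):
            self.model.handle(DesignEvent(attribute, value))  # forward, do not mutate

        def refresh(self, event: DesignEvent):
            print(f"{self.name}: {event.attribute} -> {event.value}")

    model = SemanticModel()
    schematic, layout = View("schematic", model), View("layout", model)
    schematic.user_input("gate_count", 4)   # both views are refreshed consistently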
Abstract:
An optimal control strategy for highly active antiretroviral therapy for the acquired immunodeficiency syndrome should be designed on the basis of a comprehensive analysis of drug chemotherapy behaviour in the host tissues, from the major viral replication sites to the viral sanctuary compartments. Such an approach is critical for efficiently exploring synergistic, competitive and prohibitive relationships among drugs and, hence, for minimizing therapy costs and side effects. In this paper, a novel mathematical model for HIV-1 drug chemotherapy dynamics in distinct host anatomic compartments is proposed and theoretically evaluated on fifteen conventional antiretroviral drugs. Rather than depending chiefly on the drug type, the simulated results suggest that a drug's concentration profile is strongly correlated with the host tissue under consideration. Furthermore, the accumulative drug dynamics are drastically affected by low patient compliance with pharmacotherapy, even when a single dose is missed.
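A toy sketch of compartment-specific accumulation dynamics (not the model proposed in the paper; the two-compartment structure and all rate constants are hypothetical) illustrates how a single missed dose depresses the level in a slowly exchanging sanctuary compartment:

    # Toy two-compartment kinetics integrated with forward Euler:
    #   dP/dt = -(k_el + k_ps) * P + k_sp * S   (plasma: elimination + exchange)
    #   dS/dt =   k_ps * P - k_sp * S           (sanctuary: slow exchange with plasma)
    k_el, k_ps, k_sp = 0.3, 0.05, 0.02          # hypothetical rate constants (1/h)
    dose, interval, dt, hours = 100.0, 24.0, 0.1, 24 * 7

    def simulate(missed=()):
        P = S = 0.0
        steps_per_dose = round(interval / dt)
        for i in range(int(hours / dt)):
            if i % steps_per_dose == 0 and i // steps_per_dose not in missed:
                P += dose                       # instantaneous dose into plasma
            dP = (-(k_el + k_ps) * P + k_sp * S) * dt
            dS = (k_ps * P - k_sp * S) * dt
            P, S = P + dP, S + dS
        return P, S

    _, s_full = simulate()
    _, s_skip = simulate(missed={3})            # patient skips the fourth dose
    print(f"sanctuary level after 7 days: {s_full:.1f} vs {s_skip:.1f} (one dose missed)")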
Abstract:
In multi-label classification, examples can be associated with multiple labels simultaneously. The task of learning from multi-label data can be addressed by methods that transform the multi-label classification problem into several single-label classification problems. The binary relevance approach is one such method: the multi-label learning task is decomposed into several independent binary classification problems, one for each label in the label set, and the final labels for each example are determined by aggregating the predictions of all binary classifiers. However, this approach fails to consider any dependency among the labels. Aiming to accurately predict label combinations, in this paper we propose a simple approach that enables the binary classifiers to discover existing label dependencies by themselves. An experimental study using decision trees, a kernel method and Naive Bayes as base-learning techniques shows the potential of the proposed approach to improve multi-label classification performance.
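One plausible realization of this idea (a sketch in the spirit of the approach, not necessarily the exact method of the paper): after ordinary binary relevance training, each label's classifier is retrained with the other labels' predictions appended to the feature vector, letting it pick up label dependencies on its own.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def fit_stacked_br(X, Y, base=DecisionTreeClassifier):
        # Stage 1: plain binary relevance, one independent classifier per label.
        stage1 = [base().fit(X, Y[:, j]) for j in range(Y.shape[1])]
        P = np.column_stack([clf.predict(X) for clf in stage1])
        # Stage 2: each label's classifier also sees the predictions for the other labels.
        stage2 = [base().fit(np.hstack([X, np.delete(P, j, axis=1)]), Y[:, j])
                  for j in range(Y.shape[1])]
        return stage1, stage2

    def predict_stacked_br(stage1, stage2, X):
        P = np.column_stack([clf.predict(X) for clf in stage1])
        return np.column_stack([clf.predict(np.hstack([X, np.delete(P, j, axis=1)]))
                                for j, clf in enumerate(stage2)])

    # Toy usage with three artificially correlated labels.
    X = np.random.RandomState(0).rand(100, 5)
    Y = (X[:, :3] > 0.5).astype(int)
    s1, s2 = fit_stacked_br(X, Y)
    print(predict_stacked_br(s1, s2, X[:5]))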
Abstract:
This work describes in depth the nature and development of applications built with PhoneGap, as well as web development and native development, along with a brief introduction to other interesting cross-platform development tools.
Abstract:
In recent years the concept of Ubiquitous Computing has taken hold: the ability to access the web and use applications for leisure or work at any time and in any place. This phenomenon is changing people's habits considerably, as witnessed by the strong growth of the mobile market: as of late 2014 there were 45 million smartphones and 12 million tablets in circulation in Italy. Giving up mobile thus seems almost impossible, especially for companies: the new way of communicating has made Mobile Marketing necessary, and applications are now among the most effective and direct tools for reaching customers. Applications are called native when they target a specific smartphone platform and can run only on that operating system. An app built for Android, for example, cannot run on Apple or Windows Phone devices unless it goes through a porting process. Lately, however, an ever larger number of apps per platform is required, and the devices currently on the market differ in CPUs, APIs (Application Programming Interfaces), operating systems, hardware, and so on. Hence the need to create applications that can run on several operating systems: platform-independent applications. New development environments have been defined to facilitate and support this kind of work, among them Sencha Touch and Apache Cordova. The end result of developing an app with these frameworks is precisely an artifact that can be executed on any device. Naturally, the result will not match that of a native app, which has free access to all the device's capabilities (contacts, messages, notifications, geolocation, camera, accelerometer, etc.), but such an app guarantees a lower development cost and enjoys considerable market demand. The goal of this thesis is to analyze this scenario through a case study from a company that needs to develop an application for several platforms. The first part of the thesis addresses mobile computing and the dualism between native programming and web apps, analyzing the characteristics of the two approaches and trying to understand which is better. The second part examines one of the most important frameworks for building multi-platform apps, Sencha Touch: its characteristics are analyzed, with particular attention to the MVC pattern, and it is compared with other frameworks. The third part presents the case study, a mobile Retail app based on Sencha Touch and Apache Cordova. The final part offers reflections and conclusions on platform-independent mobile development and on the advantages and disadvantages of using JavaScript to develop apps.
Abstract:
In recent years information technology has been at the centre of exponential growth. Among countless innovations, the agent-oriented programming paradigm has gained more and more ground: it enables the construction of complex software systems, which play a fundamentally important role in modern computing. These systems, called autonomous systems, show interesting properties for dynamic scenarios: they must be robust and resilient, able to adapt to their environment and react to changes occurring in it, behaving accordingly; this captures the pro-activity of the entities under consideration. This thesis explains these kinds of systems, introduces their characteristics, and shows their potential. These characteristics make it possible to delegate responsibility to the individual agents, yielding a self-organized system with better scalability and modularity and thus reduced computational demands. The first chapters introduce the world of autonomous systems, starting from the definitions of autonomy and software agents and concluding with multi-agent systems, so as to give the reader an adequate and thorough understanding. The following chapters cover the design phases of the entities under examination, their standardization efforts, and the models they can adopt, the best known of which is the BDI model. Two different methodologies for agent-oriented software engineering follow. The thesis then presents the state of the art of the known development environments, with a thorough introduction to each and a view of their contribution to commercial applications in industry, and ends with a chapter of conclusions and reflections on possible future directions.
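As a minimal didactic illustration of the BDI (Belief-Desire-Intention) model mentioned above (a Python sketch, not any of the surveyed agent platforms): the agent revises its beliefs from percepts, deliberates over its desires, and commits to intentions that drive its actions.

    class BDIAgent:
        """Toy BDI cycle: revise beliefs, deliberate over desires, execute intentions."""
        def __init__(self, desires):
            self.beliefs = {}
            self.desires = desires          # goal -> trigger condition over beliefs
            self.intentions = []

        def perceive(self, percepts):
            self.beliefs.update(percepts)   # naive belief revision

        def deliberate(self):
            # Adopt as intentions the desires whose trigger holds in the current beliefs.
            self.intentions = [g for g, trig in self.desires.items() if trig(self.beliefs)]

        def act(self):
            for goal in self.intentions:
                print("executing plan for:", goal)

        def step(self, percepts):
            self.perceive(percepts)
            self.deliberate()
            self.act()

    agent = BDIAgent({"recharge": lambda b: b.get("battery", 100) < 20})
    agent.step({"battery": 15})             # an environment change triggers pro-active behaviour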
Abstract:
An accurate and coherent chronological framework is essential for the interpretation of climatic and environmental records obtained from deep polar ice cores. Until now, one common ice core age scale had been developed based on an inverse dating method (Datice), combining glaciological modelling with absolute and stratigraphic markers between four ice cores covering the last 50 ka (thousands of years before present) (Lemieux-Dudon et al., 2010). In this paper, together with the companion paper of Veres et al. (2013), we present an extension of this work back to 800 ka for the NGRIP, TALDICE, EDML, Vostok and EDC ice cores using an improved version of the Datice tool. The AICC2012 (Antarctic Ice Core Chronology 2012) chronology includes numerous new gas and ice stratigraphic links as well as improved evaluation of background and associated variance scenarios. This paper concentrates on the long timescales between 120 and 800 ka. In this framework, new measurements of δ18Oatm over Marine Isotope Stage (MIS) 11–12 on EDC and a complete δ18Oatm record of the TALDICE ice core permit us to derive additional orbital gas age constraints. The coherency of the different orbitally deduced ages (from δ18Oatm, δO2/N2 and air content) was verified before implementation in AICC2012. The new chronology is now independent of other archives and shows only small differences, most of the time within the original uncertainty range calculated by Datice, when compared with the previous ice core reference age scale EDC3, with the Dome F chronology, or with speleothem records via a methane comparison. For instance, the largest deviation between AICC2012 and EDC3 (5.4 ka) is obtained around MIS 12. Despite significant modifications of the chronological constraints around MIS 5, which in AICC2012 are now independent of speleothem records, the date of Termination II is very close to the EDC3 one.
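Schematically (a toy numpy sketch, not the Datice algorithm itself, and with invented values throughout), an inverse dating step can be pictured as pulling a modelled background age scale toward absolute age markers in proportion to the relative variances, and interpolating the correction along the core:

    import numpy as np

    depth = np.linspace(0, 3000, 7)              # m, toy core
    background = depth * 0.25                    # ka, glaciological background ages
    sigma_bg = 0.05 * background + 0.5           # ka, background uncertainty

    # Absolute markers: (depth in m, age in ka, 1-sigma in ka) -- invented values.
    markers = [(1000.0, 260.0, 2.0), (2500.0, 640.0, 5.0)]

    md, mc = [], []
    for d, age, sig in markers:
        bg = np.interp(d, depth, background)
        s2 = np.interp(d, depth, sigma_bg) ** 2
        md.append(d)
        mc.append(s2 / (s2 + sig**2) * (age - bg))   # inverse-variance weighted shift
    chronology = background + np.interp(depth, md, mc)
    print(np.column_stack([depth, background, chronology]))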
Abstract:
Due to their outstanding resolution and well-constrained chronologies, Greenland ice-core records provide a master record of past climatic changes throughout the Last Interglacial–Glacial cycle in the North Atlantic region. As part of the INTIMATE (INTegration of Ice-core, MArine and TErrestrial records) project, protocols have been proposed to ensure consistent and robust correlation between different records of past climate. A key element of these protocols has been the formal definition and ordinal numbering of the sequence of Greenland Stadials (GS) and Greenland Interstadials (GI) within the most recent glacial period. The GS and GI periods are the Greenland expressions of the characteristic Dansgaard–Oeschger events that represent cold and warm phases of the North Atlantic region, respectively. We present here a more detailed and extended GS/GI template for the whole of the Last Glacial period. It is based on a synchronization of the NGRIP, GRIP, and GISP2 ice-core records that allows the parallel analysis of all three records on a common time scale. The boundaries of the GS and GI periods are defined based on a combination of stable-oxygen isotope ratios of the ice (δ18O, reflecting mainly local temperature) and calcium ion concentrations (reflecting mainly atmospheric dust loading) measured in the ice. The data not only resolve the well-known sequence of Dansgaard–Oeschger events that were first defined and numbered in the ice-core records more than two decades ago, but also better resolve a number of short-lived climatic oscillations, some defined here for the first time. Using this revised scheme, we propose a consistent approach for discriminating and naming all the significant abrupt climatic events of the Last Glacial period that are represented in the Greenland ice records. The final product constitutes an extended and better resolved Greenland stratotype sequence, against which other proxy records can be compared and correlated. It also provides a more secure basis for investigating the dynamics and fundamental causes of these climatic perturbations.
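As a schematic illustration of how such boundaries can be picked (a toy sketch on synthetic data, not the INTIMATE procedure, which combines δ18O with calcium ion concentrations): smooth the isotope series and mark GS/GI transitions where it crosses a midpoint threshold.

    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(0, 3000, 20)                   # toy age axis (yr)
    # Synthetic record: an interstadial (warmer, higher d18O) between 800 and 1800 yr.
    d18o = np.where((t > 800) & (t < 1800), -38.0, -42.0) + rng.normal(0, 0.4, t.size)

    smooth = np.convolve(d18o, np.ones(5) / 5, mode="same")
    warm = smooth > -40.0                        # midpoint between the two levels
    boundaries = t[1:][warm[1:] != warm[:-1]]    # threshold crossings = GS/GI boundaries
    print("candidate boundaries (yr):", boundaries)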