915 results for active data-centric


Relevance:

80.00%

Publisher:

Abstract:

Personal communication devices are increasingly equipped with sensors that are able to collect and locally store information from their environs. The mobility of users carrying such devices, and hence the mobility of sensor readings in space and time, opens new horizons for interesting applications. In particular, we envision a system in which the collective sensing, storage and communication resources, and mobility of these devices could be leveraged to query the state of (possibly remote) neighborhoods. Such queries would have spatio-temporal constraints which must be met for the query answers to be useful. Using a simplified mobility model, we analytically quantify the benefits from cooperation (in terms of the system's ability to satisfy spatio-temporal constraints), which we show to go beyond simple space-time tradeoffs. In managing the limited storage resources of such cooperative systems, the goal should be to minimize the number of unsatisfiable spatio-temporal constraints. We show that Data Centric Storage (DCS), or "directed placement", is a viable approach for achieving this goal, but only when the underlying network is well connected. Alternatively, we propose "amorphous placement", in which sensory samples are cached locally, and shuffling of cached samples is used to diffuse the sensory data throughout the whole network. We evaluate conditions under which directed versus amorphous placement strategies would be more efficient. These results lead us to propose a hybrid placement strategy, in which the spatio-temporal constraints associated with a sensory data type determine the most appropriate placement strategy for that data type. We perform an extensive simulation study to evaluate the performance of directed, amorphous, and hybrid placement protocols when applied to queries that are subject to timing constraints. Our results show that directed placement is better for queries with moderately tight deadlines, whereas amorphous placement is better for queries with looser deadlines, and that under most operational conditions, the hybrid technique gives the best compromise.
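A rough sketch in Python of the hybrid strategy described above (the threshold value and function name are illustrative assumptions, not details from the abstract): the placement mode for a data type is chosen from the tightness of its timing constraint.

def choose_placement(deadline_s, tight_threshold_s=60.0):
    # Moderately tight deadlines favor directed (data-centric) placement;
    # looser deadlines leave time for cache shuffling to diffuse the data,
    # so amorphous placement suffices.
    if deadline_s <= tight_threshold_s:
        return "directed"
    return "amorphous"

for deadline_s in (30.0, 600.0):
    print(deadline_s, "->", choose_placement(deadline_s))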

Relevance:

80.00%

Publisher:

Abstract:

Information systems are widespread and used by anyone with a computing device, as well as by corporations and governments. It is often the case that security leaks are introduced during the development of an application. The reasons for these security bugs are multiple, but among them is the fact that it is very hard to define and enforce relevant security policies in modern software. This is because modern applications often rely on container sharing and multi-tenancy where, for instance, data can be stored in the same physical space but is logically mapped into different security compartments or data structures. In turn, these security compartments, into which data is classified by security policies, can also be dynamic and depend on runtime data. In this thesis we introduce and develop the novel notion of dependent information flow types, and focus on the problem of ensuring data confidentiality in data-centric software. Dependent information flow types fit within the standard framework of dependent type theory, but, unlike usual dependent types, crucially allow the security level of a type, rather than just the structural data type itself, to depend on runtime values. Our dependent function and dependent sum information flow types provide a direct, natural and elegant way to express and enforce fine-grained security policies on programs: namely, programs that manipulate structured data types in which the security level of a structure field may depend on values dynamically stored in other fields. The main contribution of this work is an efficient analysis that allows programmers to verify, during the development phase, whether programs have information leaks, that is, whether programs protect the confidentiality of the information they manipulate. As such, we also implemented a prototype typechecker that can be found at http://ctp.di.fct.unl.pt/DIFTprototype/.
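Schematically (the notation below is an approximation of a dependent sum information flow type, not copied from the thesis), such a type lets the security label of one field mention the runtime value of another, for example a review whose confidentiality level is indexed by a user id:

\[
\Sigma\, \mathit{uid} : \mathbb{N}.\ \big(\mathit{review} : \mathsf{string}^{\mathsf{user}(\mathit{uid})}\big)
\]

Here the label user(uid) depends on the value stored in the uid field, so two records with the same structural type can belong to different security compartments.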

Relevance:

80.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

80.00%

Publisher:

Abstract:

This dissertation presents the newly developed system RelAndXML, which is specialized in the management and storage of hypertext-centric XML documents and their associated XSL stylesheets. The application domain is course material at universities. RelAndXML stores the XML-formatted exercise sheets as text modules and further parts in a special database. The storage of XML documents in databases has been an important topic in database research for several years. Existing approaches divide into those for data-centric and those for document-centric documents. This dissertation presents an approach for storing hypertext-centric XML documents that combines aspects of the data-centric and document-centric approaches. The approach allows the reuse of text modules and stores ordering information where it is important. Unlike some other approaches, RelAndXML can store not only elements but also attributes, comments, and processing instructions. Algorithms for the fragmentation and reconstruction of documents are provided. RelAndXML was implemented in Java using an object-relational database. The system has a graphical user interface that supports creating and modifying the XML and XSL documents, inserting new or already stored text modules, and generating HTML documents for publication.
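A minimal sketch in Python of the fragmentation step (the row layout is our assumption, not the actual RelAndXML schema): each node becomes a relational row recording its parent and sibling position, so that document order can be reconstructed later.

import xml.etree.ElementTree as ET

doc = ET.fromstring('<sheet id="1"><task>Prove X.</task><task>Solve Y.</task></sheet>')
rows, ids = [], iter(range(1_000_000))

def shred(elem, parent_id, pos):
    # One "element" row per node, keeping the sibling position `pos`,
    # plus one "attribute" row per attribute; then recurse into children.
    node_id = next(ids)
    rows.append(("element", node_id, parent_id, pos, elem.tag, elem.text))
    for key, value in elem.attrib.items():
        rows.append(("attribute", node_id, key, value))
    for child_pos, child in enumerate(elem):
        shred(child, node_id, child_pos)

shred(doc, parent_id=None, pos=0)
for row in rows:
    print(row)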

Relevance:

80.00%

Publisher:

Abstract:

This work aims to present the new business opportunities offered by the Web. The revolutionary change brought about by the pervasiveness of the Net and all the activities connected to it has confronted companies with a different way of relating both to their consumers, who are increasingly informed, aware and demanding, and to their competitors. The challenge to be met in order to remain competitive in the market is significant, and the change is developing rapidly: the aspects that distinguish this new digital paradigm are speed and mutability, but at the same time measurability, quantifiability and predictability. Thanks to the technological tools available and to the dynamics of the various web spaces (sites, social networks, blogs, forums), it is easier than in the past to trace the impact of initiatives, product launches, promotions and advertising, measuring the return on investment as well as the end user's perception. A data-centric approach to marketing, based on analyses that monitor the network, therefore allows a brand to make more targeted and considered investments on the basis of estimates and forecasts. Among the most significant digital marketing strategies discussed are social advertising, keyword advertising, digital PR, social media, email marketing and many others. Two case histories are also reported. The first is an excellent example of co-creation, in which the brand involved the public directly in the product development process, entrusting the fans of its official Facebook Page with the choice of the yogurt flavors to be put on sale. The second, an international lead-generation case, allowed the brand to measure the conversion of site visitors (after they filled in a pop-in form) into actual buyers, by linking the site's traffic data to sales data: an example of how online and offline communicate closely.

Relevance:

80.00%

Publisher:

Abstract:

Background: Statistical shape models are widely used in biomedical research. They are routinely implemented for automatic image segmentation or object identification in medical images. In these fields, however, the acquisition of the large training datasets required to develop these models is usually a time-consuming process. Even after this effort, the collections of datasets are often lost or mishandled, resulting in replication of work. Objective: To solve these problems, the Virtual Skeleton Database (VSD) is proposed as a centralized storage system where the data necessary to build statistical shape models can be stored and shared. Methods: The VSD provides an online repository system tailored to the needs of the medical research community. The processing of the most common image file types, a statistical shape model framework, and an ontology-based search provide the generic tools to store, exchange, and retrieve digital medical datasets. The hosted data are accessible to the community, and collaborative research catalyzes their productivity. Results: To illustrate the need for an online repository for medical research, three exemplary projects of the VSD are presented: (1) an international collaboration to achieve improvement in cochlear surgery and implant optimization, (2) a population-based analysis of femoral fracture risk between genders, and (3) an online application developed for the evaluation and comparison of the segmentation of brain tumors. Conclusions: The VSD is a novel system for scientific collaboration within the medical image community, with a data-centric concept and a semantically driven search option for anatomical structures. The repository has proven to be a useful tool for collaborative model building, as a resource for biomechanical population studies, and for enhancing segmentation algorithms.

Relevance:

80.00%

Publisher:

Abstract:

RESTful services have gained a lot of attention recently, even in the enterprise world, which is traditionally more web-service centric. Data-centric RESTful services, previously known mainly from web environments, have established themselves as a second paradigm complementing functional WSDL-based SOA. In the Internet of Things, and in particular when talking about sensor motes, the Constrained Application Protocol (CoAP) is currently in the focus of both research and industry. In the enterprise world, a protocol called OData (Open Data Protocol) is becoming the future RESTful data access standard. To integrate sensor motes seamlessly into enterprise networks, an embedded OData implementation on top of CoAP is desirable, as it does not require an intermediary gateway device. In this paper we introduce and evaluate an embedded OData implementation. We evaluate the OData protocol in terms of performance and energy consumption, considering different data encodings, and compare it to a pure CoAP implementation. We were able to demonstrate that the additional resources needed for an OData/JSON implementation are reasonable when aiming for enterprise interoperability, where OData is suggested to solve both the semantic and technical interoperability problems we have today when connecting systems.
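For illustration, a mote could serve an entity set like the following over CoAP; the service URL and entity layout here are assumptions, and only the @odata.context/value envelope follows the OData v4 JSON convention.

import json

payload = {
    "@odata.context": "coap://mote.example/odata/$metadata#Sensors",
    "value": [
        {"Id": 1, "Kind": "temperature", "Reading": 21.5},
        {"Id": 2, "Kind": "humidity", "Reading": 48.0},
    ],
}

# Compact encoding matters on constrained devices, where every payload
# byte costs transmission energy.
body = json.dumps(payload, separators=(",", ":")).encode("utf-8")
print(len(body), "bytes")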

Relevance:

80.00%

Publisher:

Abstract:

The stratigraphic architecture of deep-sea depositional systems is discussed in detail, and some examples from the Ischia and Stromboli volcanic islands (Southern Tyrrhenian Sea, Italy) are shown and discussed. The submarine slope and base-of-slope depositional systems represent a major component of marine and lacustrine basin fills, constituting primary targets for hydrocarbon exploration and development. The slope systems are characterized by seven seismic facies building blocks, including the turbiditic channel fills, the turbidite lobes, the sheet turbidites, the slide, slump and debris flow sheets, lobes and tongues, the fine-grained turbidite fills and sheets, the contourite drifts and, finally, the hemipelagic drapes and fills. Sparker profiles offshore Ischia are presented. New seismo-stratigraphic evidence on buried volcanic structures and overlying Quaternary deposits of the eastern offshore of Ischia Island is discussed to highlight the implications for marine geophysics and volcanology. Regional seismic sections in the Ischia offshore across buried volcanic structures and debris avalanche and debris flow deposits are presented and discussed. Deep-sea depositional systems around Ischia Island are well developed in correspondence with the Southern Ischia canyon system. The canyon system incises a narrow continental shelf from Punta Imperatore to Punta San Pancrazio, being limited southwestwards by the relict volcanic edifice of the Ischia bank. While the eastern boundary of the canyon system is controlled by extensional tectonics, being limited by a NE-SW trending (counter-Apenninic) normal fault, its western boundary is controlled by volcanism, due to the growth of the Ischia volcanic bank. Submarine gravitational instabilities also acted in relationship to the canyon system, allowing for the identification of large-scale creeping at the sea bottom and hummocky deposits previously interpreted as debris avalanche deposits. High-resolution seismic data (Subbottom Chirp) coupled with high-resolution Multibeam bathymetry, collected in the frame of the Stromboli geophysical experiment aimed at recording active seismic data and tomography of Stromboli Island, are also presented. A new detailed swath bathymetry of Stromboli Island is shown and discussed to reconstruct an up-to-date morpho-bathymetry and marine geology of the area, compared to the volcanological setting of the Aeolian volcanic complex. The Stromboli DEM gives information about the submerged structure of the volcano, particularly about the volcano-tectonic and gravitational processes involving the submarine flanks of the edifice. Several seismic units have been identified around the volcanic edifice and interpreted as the volcanic acoustic basement pertaining to the volcano and overlying slide chaotic bodies emplaced during its complex volcano-tectonic evolution. They are related to the eruptive activity of Stromboli, which is mainly polyphasic, and to regional geological processes involving the geology of the Aeolian Arc.

Relevance:

40.00%

Publisher:

Abstract:

This thesis opens up the design space for awareness research in CSCW and HCI. By challenging the prevalent understanding of roles in awareness processes and exploring different mechanisms for actively engaging users in the awareness process, this thesis provides a better understanding of the complexity of these processes and suggests practical solutions for designing and implementing systems that support active awareness. Mutual awareness, a prominent research topic in the fields of Computer-Supported Cooperative Work (CSCW) and Human-Computer Interaction (HCI), refers to a fundamental aspect of a person's work: their ability to gain a better understanding of a situation by perceiving and interpreting their co-workers' actions. Technologically mediated awareness, used to support co-workers across distributed settings, distinguishes between the roles of the actor, whose actions are often limited to being the target of an automated data-gathering process, and the receiver, who wants to be made aware of the actors' actions. This receiver-centric view of awareness, focusing on helping receivers to deal with complex sets of awareness information, stands in stark contrast to our understanding of awareness as a social process involving complex interactions between both actors and receivers. It fails to take into account an actor's intimate understanding of their own activities and the contribution that this subjective understanding could make in providing richer awareness information. In this thesis I challenge the prevalent receiver-centric notion of awareness, and explore the conceptual foundations, design, implementation and evaluation of an alternative active awareness approach by making the following five contributions. Firstly, I identify the limitations of existing awareness research and solicit further evidence to support the notion of active awareness. I analyse ethnographic workplace studies that demonstrate how actors engage in an intricate interplay involving the monitoring of their co-workers' progress and displaying aspects of their activities that may be of relevance to others. The examination of a large body of awareness research reveals that while disclosing information is a common practice in face-to-face collaborative settings, it has been neglected in implementations of technically mediated awareness. Based on these considerations, I introduce the notion of intentional disclosure to describe the action of users actively and deliberately contributing awareness information. I consider challenges and potential solutions for the design of active awareness. I compare a range of systems, each allowing users to share information about their activities at various levels of detail. I discuss one of the main challenges to active awareness: that disclosing information about activities requires some degree of effort. I discuss various representations of effort in collaborative work. These considerations reveal that there is a trade-off between the richness of awareness information and the effort required to provide this information. I propose a framework for active awareness, aimed at helping designers to understand the scope and limitations of different types of intentional disclosure. I draw on the identified richness/effort trade-off to develop two types of intentional disclosure, both of which aim to facilitate the disclosure of information while reducing the effort required to do so.
For both of these approaches, direct and indirect disclosure, I delineate how they differ from related approaches and define a set of design criteria intended to guide their implementation. I demonstrate how the framework of active awareness can be practically applied by building two proof-of-concept prototypes that implement direct and indirect disclosure respectively. AnyBiff, implementing direct disclosure, allows users to create, share and use shared representations of activities in order to express their current actions and intentions. SphereX, implementing indirect disclosure, represents shared areas of interest or working contexts, and links sets of activities to these representations. Lastly, I present the results of the qualitative evaluation of the two prototypes and analyse the extent to which they implemented their respective disclosure mechanisms and supported active awareness. Both systems were deployed and tested in real-world environments. The results for AnyBiff showed that users developed a wide range of activity representations, some unanticipated, and actively used the system to disclose information. The results further highlighted a number of design considerations relating to the relationship between awareness and communication, and the role of ambiguity. The evaluation of SphereX validated the feasibility of the indirect disclosure approach. However, the study highlighted the challenges of implementing cross-application awareness support and of translating the concept to users. The study resulted in design recommendations aimed at improving the implementation of future systems.

Relevance:

40.00%

Publisher:

Abstract:

The problem of scaling up data integration, such that new sources can be quickly utilized as they are discovered, remains elusive: global schemas for integrated data are difficult to develop and expand, and schema and record matching techniques are limited by the fact that data and metadata are often under-specified and must be disambiguated by data experts. One promising approach is to avoid using a global schema, and instead to develop keyword search-based data integration, where the system lazily discovers associations enabling it to join together matches to keywords, and returns ranked results. The user is expected to understand the data domain and provide feedback about answers' quality. The system generalizes such feedback to learn how to correctly integrate data. A major open challenge is that under this model, the user only sees and offers feedback on a few "top" results: this result set must be carefully selected to include answers of high relevance and answers that are highly informative when feedback is given on them. Existing systems merely focus on predicting relevance, by composing the scores of various schema and record matching algorithms. In this paper, we show how to predict the uncertainty associated with a query result's score, as well as how informative feedback is on a given result. We build upon these foundations to develop an active learning approach to keyword search-based data integration, and we validate the effectiveness of our solution over real data from several very different domains.
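A minimal sketch of the selection idea (the scoring blend below is our assumption, not the paper's algorithm): rank candidate answers by a mix of predicted relevance and predicted informativeness, so that the few results shown are both useful to the user and valuable to learn from.

def select_top(results, k=5, alpha=0.7):
    # results: list of (answer, relevance, informativeness) triples in [0, 1].
    # alpha trades off exploiting relevance against soliciting feedback
    # on uncertain, informative answers.
    scored = [(alpha * rel + (1 - alpha) * info, ans) for ans, rel, info in results]
    return [ans for _, ans in sorted(scored, reverse=True)[:k]]

candidates = [("join A-B", 0.9, 0.1), ("join A-C", 0.6, 0.8), ("join B-C", 0.4, 0.9)]
print(select_top(candidates, k=2))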

Relevance:

40.00%

Publisher:

Abstract:

Lasers play an important role in medical, sensoric and data storage devices. This thesis is focused on the design, technology development, fabrication and characterization of hybrid ultraviolet Vertical-Cavity Surface-Emitting Lasers (UV VCSEL) with an organic laser-active material and inorganic distributed Bragg reflectors (DBR). Multilayer structures with different layer thicknesses, refractive indices and absorption coefficients of the inorganic materials were studied using theoretical model calculations. During the simulations the structure parameters such as materials and thicknesses were varied. This procedure was repeated several times during the design optimization process, including feedback from technology and characterization. Two types of VCSEL devices were investigated. The first is an index-coupled structure consisting of bottom and top DBR dielectric mirrors. In the space between them is the cavity, which includes the active region and defines the spectral gain profile. In this configuration the maximum electrical field is concentrated in the cavity and can destroy the chemical structure of the active material. The second type of laser is a so-called complex-coupled VCSEL. In this structure the active material is placed not only in the cavity but also in parts of the DBR structure. The simulations show that such a distribution of the active material reduces the pumping power required to reach the lasing threshold. High efficiency is achieved by substituting the dielectric material with high refractive index for the periods closer to the cavity. The inorganic materials for the DBR mirrors have been deposited by Plasma-Enhanced Chemical Vapor Deposition (PECVD) and Dual Ion Beam Sputtering (DIBS) machines. Extensive optimizations of the technological processes have been performed. All the processes are carried out in clean rooms of Class 1 and Class 10000. The optical properties and the thicknesses of the layers are measured in situ by spectroscopic ellipsometry and spectroscopic reflectometry. The surface roughness is analyzed by atomic force microscopy (AFM), and images of the devices are taken with a scanning electron microscope (SEM). The silicon dioxide (SiO2) and silicon nitride (Si3N4) layers deposited by the PECVD machine show defects in the material structure and have higher absorption in the ultraviolet range compared to ion beam deposition (IBD). This results in low reflectivity of the DBR mirrors and also degrades the optical properties of the VCSEL devices. However, PECVD has the advantage that the stress in the layers can be tuned and compensated, in contrast to IBD at the moment. A sputtering machine, the Ionsys 1000 produced by Roth & Rau, is used for the deposition of silicon dioxide (SiO2), silicon nitride (Si3N4), aluminum oxide (Al2O3) and zirconium dioxide (ZrO2). The chamber is equipped with main (sputter) and assist ion sources. The dielectric materials were optimized by introducing additional oxygen and nitrogen into the chamber. DBR mirrors with different material combinations were deposited. The measured optical properties of the fabricated multilayer structures show excellent agreement with the results of theoretical model calculations. The layers deposited by sputtering show high compressive stress. As the active region, a novel organic material with spiro-linked molecules is used.
Two different materials were evaporated using a dye evaporation machine in the clean room of the department Makromolekulare Chemie und Molekulare Materialien (mmCmm). The Spiro-Octopus-1 organic material has its maximum emission at the wavelength λ_emission = 395 nm, and the Spiro-Pphenal has its maximum emission at the wavelength λ_emission = 418 nm. Both of them have a high refractive index and can be combined with low refractive index materials like silicon dioxide (SiO2). The sputtering method shows excellent optical quality of the deposited materials and high reflection of the multilayer structures. The bottom DBR mirrors for all VCSEL devices were deposited by the DIBS machine, whereas the top DBR mirrors were deposited either by PECVD or by a combination of PECVD and DIBS. The fabricated VCSEL structures were optically pumped by a nitrogen laser at the wavelength λ_pumping = 337 nm, and the emission was measured with a spectrometer. Radiation from the VCSEL structures at wavelengths of 392 nm and 420 nm was observed.
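For context, a textbook relation (not taken from the thesis) for the peak reflectivity of an N-pair quarter-wave DBR with incidence-medium index n_0, substrate index n_s, and layer indices n_L < n_H:

\[
R = \left( \frac{1 - \frac{n_s}{n_0}\left(\frac{n_L}{n_H}\right)^{2N}}{1 + \frac{n_s}{n_0}\left(\frac{n_L}{n_H}\right)^{2N}} \right)^{2}
\]

which makes explicit why a larger index contrast n_H/n_L, or more periods N, raises the mirror reflectivity.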

Relevance:

40.00%

Publisher:

Abstract:

This paper considers methods for testing for superiority or non-inferiority in active-control trials with binary data, when the relative treatment effect is expressed as an odds ratio. Three asymptotic tests for the log-odds ratio based on the unconditional binary likelihood are presented, namely the likelihood ratio, Wald and score tests. All three tests can be implemented straightforwardly in standard statistical software packages, as can the corresponding confidence intervals. Simulations indicate that the three alternatives are similar in terms of the Type I error, with values close to the nominal level. However, when the non-inferiority margin becomes large, the score test slightly exceeds the nominal level. In general, the highest power is obtained from the score test, although all three tests are similar and the observed differences in power are not of practical importance. Copyright (C) 2007 John Wiley & Sons, Ltd.
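As a hedged sketch (the standard Wald test for the log-odds ratio, in our notation; not necessarily the paper's exact implementation), the non-inferiority test can be computed from a 2x2 table as follows.

from math import log, sqrt
from statistics import NormalDist

def wald_noninferiority(a, b, c, d, or_margin=0.5):
    # a, b: treatment successes/failures; c, d: control successes/failures.
    log_or = log((a * d) / (b * c))           # estimated log-odds ratio
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # large-sample standard error
    z = (log_or - log(or_margin)) / se        # H0: OR <= or_margin
    p = 1 - NormalDist().cdf(z)               # one-sided p-value
    return z, p

print(wald_noninferiority(80, 20, 78, 22))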

Relevance:

40.00%

Publisher:

Abstract:

Since 1999, the National Commission for the Knowledge and Use of Biodiversity (CONABIO) in Mexico has been developing and managing the “Operational program for the detection of hot-spots using remote sensing techniques”. This program uses images from the MODerate resolution Imaging Spectroradiometer (MODIS) onboard the Terra and Aqua satellites and from the Advanced Very High Resolution Radiometer of the National Oceanic and Atmospheric Administration (NOAA-AVHRR), which are operationally received through the Direct Readout (DR) station at CONABIO. This allows the near-real-time monitoring of fire events in Mexico and Central America. In addition to the detection of active fires, the locations of hot spots are classified with respect to vegetation types, accessibility, and risk to Nature Protection Areas (NPA). Beyond the fast detection of fires, further analysis is necessary due to the considerable effects of forest fires on biodiversity and human life. This fire impact assessment is crucial to support the needs of resource managers and policy makers for adequate fire recovery and restoration actions. CONABIO attempts to meet these requirements by providing post-fire assessment products as part of the management system, in particular satellite-based burnt area mapping. This paper provides an overview of the main components of the operational system and presents an outlook on future activities and system improvements, especially the development of a burnt area product. A special focus is also placed on fire occurrence within the NPAs of Mexico.

Relevance:

40.00%

Publisher:

Abstract:

This paper presents an approximate closed-form sample size formula for determining non-inferiority in active-control trials with binary data. We use the odds ratio as the measure of the relative treatment effect, derive the sample size formula based on the score test and compare it with a second, well-known formula based on the Wald test. Both closed-form formulae are compared with simulations based on the likelihood ratio test. Within the range of parameter values investigated, the score test closed-form formula is reasonably accurate when non-inferiority margins are based on odds ratios of about 0.5 or above and when the magnitude of the odds ratio under the alternative hypothesis lies between about 1 and 2.5. The accuracy generally decreases as the odds ratio under the alternative hypothesis moves upwards from 1. As the non-inferiority margin odds ratio decreases from 0.5, the score test closed-form formula increasingly overestimates the sample size, irrespective of the magnitude of the odds ratio under the alternative hypothesis. The Wald test closed-form formula is also reasonably accurate in the cases where the score test closed-form formula works well. Outside these scenarios, the Wald test closed-form formula can either underestimate or overestimate the sample size, depending on the magnitude of the non-inferiority margin odds ratio and the odds ratio under the alternative hypothesis. Although neither approximation is accurate for all cases, both approaches lead to satisfactory sample size calculation for non-inferiority trials with binary data where the odds ratio is the parameter of interest.
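For comparison, a minimal sketch of the well-known Wald-test-based approximation on the log-odds-ratio scale (the notation is ours, and the paper's preferred score-test formula differs):

from math import log
from statistics import NormalDist

def n_per_group(p1, p2, or_margin, alpha=0.025, power=0.9):
    # p1, p2: anticipated success rates (treatment, control);
    # or_margin: non-inferiority margin expressed as an odds ratio.
    z = NormalDist().inv_cdf
    or_alt = (p1 / (1 - p1)) / (p2 / (1 - p2))       # odds ratio under H1
    var = 1 / (p1 * (1 - p1)) + 1 / (p2 * (1 - p2))  # variance factor
    effect = log(or_alt) - log(or_margin)
    return (z(1 - alpha) + z(power)) ** 2 * var / effect ** 2

print(round(n_per_group(0.75, 0.75, or_margin=0.5)))  # approx. 233 per group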

Relevance:

40.00%

Publisher:

Abstract:

HydroShare is an online, collaborative system being developed for the open sharing of hydrologic data and models. The goal of HydroShare is to enable scientists to easily discover and access hydrologic data and models, and to retrieve them to their desktop or perform analyses in a distributed computing environment that may include grid, cloud or high performance computing model instances as necessary. Scientists may also publish outcomes (data, results or models) into HydroShare, using the system as a collaboration platform for sharing data, models and analyses. HydroShare is expanding the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated, creating new capability to share models and model components, and taking advantage of emerging social media functionality to enhance information about, and collaboration around, hydrologic data and models. One of the fundamental concepts in HydroShare is that of a Resource. All content is represented using a Resource Data Model that separates system metadata from science metadata and has elements common to all resources as well as elements specific to the types of resources HydroShare will support. These include the different data types used in the hydrology community, as well as models and workflows that require metadata on execution functionality. The HydroShare web interface and social media functions are being developed using the Drupal content management system. A geospatial visualization and analysis component enables searching, visualizing, and analyzing geographic datasets. The integrated Rule-Oriented Data System (iRODS) is being used to manage federated data content and to perform rule-based background actions on data and model resources, including parsing to generate metadata catalog information and the execution of models and workflows. This presentation will introduce the HydroShare functionality developed to date, describe key elements of the Resource Data Model and outline the roadmap for future development.
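An illustrative sketch (the field names are ours, not HydroShare's actual schema) of the Resource concept: system metadata common to every resource, plus science metadata specific to the resource type.

from dataclasses import dataclass, field

@dataclass
class Resource:
    resource_id: str
    owner: str
    resource_type: str                                     # e.g. "timeseries", "model"
    system_metadata: dict = field(default_factory=dict)    # elements common to all resources
    science_metadata: dict = field(default_factory=dict)   # elements specific to the type

r = Resource(
    "abc123", "jdoe", "timeseries",
    system_metadata={"created": "2014-01-01", "license": "CC-BY"},
    science_metadata={"variable": "streamflow", "units": "m3/s"},
)
print(r.resource_type, r.science_metadata["variable"])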