Abstract:
Event-based systems are seen as good candidates for supporting distributed applications in dynamic and ubiquitous environments, because they support decoupled and asynchronous many-to-many information dissemination. Event systems are widely used because asynchronous messaging provides a flexible alternative to RPC (Remote Procedure Call). They are typically implemented using an overlay network of routers. A content-based router forwards event messages based on filters that are installed by subscribers and other routers. The filters are organized into a routing table in order to forward incoming events to the proper subscribers and neighbouring routers. This thesis addresses the optimization of content-based routing tables organized using the covering relation, and presents novel data structures and configurations for improving local and distributed operation. Data structures are needed for organizing filters into a routing table that supports efficient matching and runtime operation. We present novel results on dynamic filter merging and on the integration of filter merging with content-based routing tables. In addition, the thesis examines the cost of client mobility under different protocols and routing topologies. We also present a new matching technique called temporal subspace matching. The technique combines two new features. The first feature, temporal operation, supports notifications, or content profiles, that persist in time. The second feature, subspace matching, allows more expressive semantics, because notifications may contain intervals and be defined as subspaces of the content space. We also present an application of temporal subspace matching to metadata-based continuous collection and object tracking.
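To make the covering relation concrete, here is a minimal sketch in Python, assuming (purely for illustration) that filters are conjunctions of per-attribute closed intervals; the thesis's actual filter language and merging rules may be richer. A filter covers another when every notification matched by the second is also matched by the first, and a perfect merge replaces two filters with a single covering filter that matches exactly their union.

```python
# Minimal sketch of covering and perfect merging; the interval-based
# filter model here is an illustrative assumption.

class Filter:
    def __init__(self, constraints):
        # constraints: dict mapping attribute name -> (low, high) closed interval
        self.constraints = constraints

    def covers(self, other):
        """True if every notification matching `other` also matches self."""
        for attr, (lo, hi) in self.constraints.items():
            if attr not in other.constraints:
                return False  # other is less constrained on this attribute
            olo, ohi = other.constraints[attr]
            if olo < lo or ohi > hi:
                return False
        return True

def merge(f1, f2):
    """Perfect merge: succeeds when one filter covers the other, or when
    they differ in exactly one attribute whose intervals form one interval."""
    if f1.covers(f2):
        return f1
    if f2.covers(f1):
        return f2
    if set(f1.constraints) != set(f2.constraints):
        return None
    diff = [a for a in f1.constraints if f1.constraints[a] != f2.constraints[a]]
    if len(diff) != 1:
        return None  # merging would over-generalize
    a = diff[0]
    lo1, hi1 = f1.constraints[a]
    lo2, hi2 = f2.constraints[a]
    if max(lo1, lo2) > min(hi1, hi2):
        return None  # disjoint intervals: their union is not an interval
    merged = dict(f1.constraints)
    merged[a] = (min(lo1, lo2), max(hi1, hi2))
    return Filter(merged)
```

Replacing two filters in a routing table with their merge shrinks the table and reduces the number of subscriptions that must be forwarded to neighbouring routers.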
Abstract:
Ubiquitous computing is about making computers and computerized artefacts a pervasive part of our everyday lives, bringing more and more activities into the realm of information. This computationalization and informationalization of everyday activities increases not only our reach, efficiency and capabilities but also the amount and kinds of data gathered about us and our activities. In this thesis, I explore how information systems can be constructed so that they handle this personal data in a reasonable manner. The thesis provides two kinds of results: on the one hand, tools and methods for both the construction and the evaluation of ubiquitous and mobile systems; on the other, an evaluation of the privacy aspects of a ubiquitous social awareness system. The work emphasises real-world experiments as the most important way to study privacy. Additionally, the state of current information systems with regard to data protection is studied. The tools and methods in this thesis consist of three distinct contributions. An algorithm for locationing in cellular networks is proposed that does not require the location information to be revealed beyond the user's terminal. A prototyping platform for the creation of context-aware ubiquitous applications, called ContextPhone, is described and released as open source. Finally, a set of methodological findings for the use of smartphones in social scientific field research is reported. A central contribution of this thesis is the set of pragmatic tools that allow other researchers to carry out experiments. The evaluation of the ubiquitous social awareness application ContextContacts covers both the usage of the system in general and an analysis of its privacy implications. Drawing on several long-term field studies, the usage of the system is analyzed in light of how users make inferences about others based on real-time contextual cues mediated by the system. The analysis of privacy implications draws together the social psychological theory of self-presentation and research on privacy for ubiquitous computing, deriving a set of design guidelines for such systems. The main findings from these studies can be summarized as follows. The fact that ubiquitous computing systems gather more data about users can be used not only to study the use of such systems in an effort to create better systems, but also to study previously unstudied phenomena, such as the dynamic change of social networks. Systems that let people create new ways of presenting themselves to others can be fun for the users, but self-presentation requires several thoughtful design decisions that allow the manipulation of the image mediated by the system. Finally, the growing amount of computational resources available to users can be used to allow them to use the data themselves, rather than being merely passive subjects of data gathering.
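The abstract does not detail the locationing algorithm, but its stated privacy property (no location information revealed beyond the terminal) can be illustrated with a hypothetical sketch: the handset keeps a local database of base-station positions and estimates its own position from the cells it can hear, so nothing leaves the device. The cell IDs and coordinates below are made up.

```python
# Hedged sketch, not the thesis's actual algorithm: purely on-device
# positioning from observed cell IDs.

# hypothetical local database: cell id -> (lat, lon) of the base station
CELL_DB = {
    1021: (60.170, 24.941),
    1022: (60.172, 24.950),
    1043: (60.168, 24.930),
}

def local_position(observed_cells):
    """Centroid of the known base stations heard by the terminal.
    Runs entirely on the device; the network learns nothing new."""
    coords = [CELL_DB[c] for c in observed_cells if c in CELL_DB]
    if not coords:
        return None  # no known cells in range
    lat = sum(p[0] for p in coords) / len(coords)
    lon = sum(p[1] for p in coords) / len(coords)
    return (lat, lon)

print(local_position([1021, 1043]))  # (60.169, 24.9355)
```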
Abstract:
The publish/subscribe paradigm has lately received much attention. In publish/subscribe systems, a specialized event-based middleware delivers notifications of events created by producers (publishers) to consumers (subscribers) interested in that particular event. It is considered a good approach for implementing Internet-wide distributed systems, as it provides full decoupling of the communicating parties in time, space and synchronization. One flavor of the paradigm is content-based publish/subscribe, which allows subscribers to express their interests very accurately. In order to implement content-based publish/subscribe middleware in a way suitable for Internet scale, its underlying architecture must be organized as a peer-to-peer network of content-based routers that take care of forwarding the event notifications to all interested subscribers. A communication infrastructure that provides such a service is called a content-based network; it is an application-level overlay network. Unfortunately, the expressiveness of the content-based interaction scheme comes at a price: compiling and maintaining the content-based forwarding and routing tables is very expensive when the number of nodes in the network is large. The routing tables are usually partially ordered set (poset) based data structures. In this work, we present an algorithm that aims to improve scalability in content-based networks by offloading some of the routers' content routing cost to clients, thereby reducing the routers' workload. We also provide experimental results on the performance of the algorithm. Additionally, we give an introduction to the publish/subscribe paradigm and content-based networking, and discuss alternative ways of improving scalability in content-based networks. ACM Computing Classification System (CCS): C.2.4 [Computer-Communication Networks]: Distributed Systems - Distributed applications
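As a hedged illustration of a poset-based routing table, the sketch below keeps subscriptions partially ordered by the covering relation; only the uncovered "root" filters need to be forwarded to neighbouring routers. It assumes filter objects with a covers() method (for example the Filter class sketched earlier); the names are illustrative, not the thesis's actual API.

```python
# Illustrative poset-based routing table; full poset maintenance in a
# real content-based router is more involved than this sketch.

class PosetNode:
    def __init__(self, flt):
        self.filter = flt
        self.successors = []  # nodes for filters covered by this one

class RoutingTable:
    def __init__(self):
        self.roots = []  # filters not covered by any other installed filter

    def add(self, flt):
        """Insert a subscription filter; return True if it must be
        forwarded to neighbours (i.e. it became a new root)."""
        for node in self.roots:
            if node.filter.covers(flt):
                # already covered: file it under the first covering root
                node.successors.append(PosetNode(flt))
                return False  # no new forwarding needed
        node = PosetNode(flt)
        # demote any existing roots that the new filter covers
        covered = [r for r in self.roots if flt.covers(r.filter)]
        for r in covered:
            self.roots.remove(r)
            node.successors.append(r)
        self.roots.append(node)
        return True
```

This is also where offloading to clients can help: work spent ordering and covering-checking subscriptions need not all be done by the router itself.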
Abstract:
Printed Circuit Board (PCB) layout design is one of the most important and time-consuming phases of the equipment design process in all electronic industries. This paper is concerned with the development and implementation of a computer-aided PCB design package. A set of programs has been developed that operates on a description of the circuit, supplied by the user in the form of a data file, and subsequently designs the layout of a double-sided PCB. The algorithms used for the design of the PCB optimise the board area and the length of copper tracks used for the interconnections. The output of the package is the layout drawing of the PCB, drawn on a CALCOMP hard copy plotter and a Tektronix 4012 storage graphics display terminal. The routing density (the board area required for one component) achieved by this package is typically 0.8 sq. inch per IC. The package is implemented on a DEC 1090 system in Pascal and FORTRAN, and the SIGN(1) graphics package is used for display generation.
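The paper's routing algorithms are not spelled out in this abstract; as a stand-in, the following sketch shows a classic Lee-style breadth-first maze router of the kind early PCB packages used to find a shortest rectilinear track between two pads on one layer. The grid, pads and obstacles are hypothetical.

```python
# Illustrative Lee-style maze routing, not the paper's actual algorithm.
from collections import deque

def lee_route(grid, src, dst):
    """grid: 2D list, 0 = free cell, 1 = blocked; src/dst: (row, col).
    Returns the cells of a shortest rectilinear path, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {src: None}          # BFS wavefront bookkeeping
    frontier = deque([src])
    while frontier:
        cell = frontier.popleft()
        if cell == dst:
            path = []
            while cell is not None:  # backtrack from target to source
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                frontier.append((nr, nc))
    return None  # no route on this layer; try the other side or rip-up
```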
Abstract:
Two field experiments were established in central Queensland, at Capella and Gindie, to investigate the immediate and then residual benefit of deep-placed (20 cm) nutrients in this opportunity cropping system. The field sites had factorial combinations of P (40 kg P/ha), K (200 kg K/ha) and S (40 kg S/ha), and all plots received 100 kg N/ha. No further K or S fertilizers were added during the experiment, but some crops had starter P. The Capella site was sown to chickpea in 2012, wheat in 2013 and then chickpea in 2014. The Gindie site was sown to sorghum in 2011/12, chickpea in 2013 and sorghum in early 2015. There were responses to P alone in the first two crops at each site, and there were K responses in half of the six site-years. In year 1 (a good year) both sites showed a 20% grain yield response to deep P alone. In year 2 (much drier) the effects of deep P were still evident at both sites, and the effects of K were clearly evident at Gindie. There was a suggestion of an additive P+K effect at Capella and a 50% increase for P+K at Gindie. Year 3 was dry, and the chickpeas at Capella showed a larger response to P+K, but the sorghum at Gindie responded only to deep K. These results indicate that responses to deep-placed P and K are durable over an opportunity cropping system, and that meeting both requirements is important to achieve yield responses.
Abstract:
This thesis deals with trust management in a web services environment. A dynamic operating environment places demands on the trust management system, which is used not only for making local access control decisions but also to support larger-scale decision making in the administration of communities formed by multiple autonomous actors. The thesis presents the information model and the operational model of the trust management system developed in the Trust Based on Evidence project, from both the local and the community perspective. The model is clarified with an example set in a web services environment. To build up the concept of trust, trust models from different subfields and systems that make use of trust are also presented. In an open network environment, a service provider has to balance two partly opposing goals: on the one hand, the system should be as open as possible in order to attract users; on the other hand, excessive openness increases the risk of a security breach. Finding a compromise has become still harder as the number of reachable users grows and the services offered become more complex. The task requires handling special cases on the one hand, and generalizability across a large user base on the other. The automation of security administration has been advanced by, among other things, the separation of policy decisions from their enforcement and the delegation of watching for possible signs of intrusion to specialized programs (intrusion detection systems, IDS). As the user base of services grows and becomes more anonymous, however, discipline and monitoring become even more difficult, and there are not enough administrators to tie down to constant surveillance of users. Often the only thing a supervisor can do is revoke a troublemaker's access rights, in which case little can be done about, for example, milder 'stretching' of the rules. Trust management makes it possible to react gradually to violations and, conversely, to good behaviour. On this basis, the fine-tuning of user monitoring, access control and resource limitation can be made an understandable part of administration and, to a large extent, automated. Keywords: trust management, Web Services
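As a hedged illustration of the gradual reactions described above (and not the Trust Based on Evidence project's actual model), the sketch below keeps an evidence-based trust score per user and grades access decisions by it, instead of offering only outright revocation.

```python
# Illustrative evidence-based trust scoring; thresholds and update rule
# are assumptions for the sketch.

class TrustRecord:
    def __init__(self, initial=0.5):
        self.score = initial  # trust in [0, 1]

    def observe(self, positive, weight=0.05):
        """Update trust from one piece of evidence (good or bad behaviour)."""
        target = 1.0 if positive else 0.0
        self.score += weight * (target - self.score)

def decide(record, action_risk):
    """Graded access control: riskier actions demand more trust."""
    if record.score >= action_risk:
        return "allow"
    if record.score >= action_risk / 2:
        return "allow-with-monitoring"
    return "deny"

user = TrustRecord()
user.observe(positive=False)          # a mild rule 'stretch'
print(decide(user, action_risk=0.7))  # allow-with-monitoring
```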
Abstract:
Web-based technology is particularly well-suited to promoting active student involvement in the processes of learning. All students enrolled in a first-year educational psychology unit were required to complete ten weekly online quizzes, ten weekly student-generated questions and ten weekly student answers to those questions. Results of an online survey of participating students strongly support the viability and perceived benefits of such an instructional approach. Although students reported that the 30 assessments were useful and reasonable, the most common theme to emerge from the professional reflections of participating lecturers was that the marking of questions and answers was unmanageable.
Abstract:
Information and communication technology (ICT) has created opportunities for students' online interaction in higher education throughout the world. Limited research has been done in this area in Saudi Arabia. This study investigated university students' engagement and perceptions of online collaborative learning using Social Learning Tools (SLTs). In addition, it explored the quality of knowledge construction that occurred in this environment. A mixed methods case study approach was adopted, and the data was gathered from undergraduate students (n=43) who were enrolled in a 15-week course at a Saudi university. The results showed that while the students had positive perceptions towards SLTs and their engagement, data gathered from their work also showed little evidence of high levels of knowledge construction.
Abstract:
In this paper, we first describe a framework to model the sponsored search auction on the web as a mechanism design problem. Using this framework, we describe two well-known mechanisms for sponsored search auctions: Generalized Second Price (GSP) and Vickrey-Clarke-Groves (VCG). We then derive a new mechanism for the sponsored search auction which we call the optimal (OPT) mechanism. The OPT mechanism maximizes the search engine's expected revenue while achieving Bayesian incentive compatibility and individual rationality for the advertisers. We then undertake a detailed comparative study of the mechanisms GSP, VCG, and OPT. We compute and compare the expected revenue earned by the search engine under the three mechanisms when the advertisers are symmetric and some special conditions are satisfied. We also compare the three mechanisms in terms of incentive compatibility, individual rationality, and computational complexity. Note to Practitioners: The advertiser-supported web site is one of the successful business models in the emerging web landscape. When an Internet user enters a keyword (i.e., a search phrase) into a search engine, the user gets back a page with results containing the links most relevant to the query as well as sponsored links (also called paid advertisement links). When a sponsored link is clicked, the user is directed to the corresponding advertiser's web page. The advertiser pays the search engine in some appropriate manner for sending the user to its web page. For every search performed by any user on any keyword, the search engine faces the problem of matching a set of advertisers to the sponsored slots. In addition, the search engine also needs to decide on a price to be charged to each advertiser. Due to increasing demand for Internet advertising space, most search engines currently use auction mechanisms for this purpose. These are called sponsored search auctions. A significant percentage of the revenue of Internet giants such as Google, Yahoo!, and MSN comes from sponsored search auctions. In this paper, we study two auction mechanisms, GSP and VCG, which are quite popular in the sponsored search auction context, and pursue the objective of designing a mechanism that is superior to both. In particular, we propose a new mechanism which we call the OPT mechanism. This mechanism maximizes the search engine's expected revenue subject to achieving Bayesian incentive compatibility and individual rationality. Bayesian incentive compatibility guarantees that it is optimal for each advertiser to bid his/her true value provided that all other agents also bid their respective true values. Individual rationality ensures that the agents participate voluntarily in the auction, since they are assured of gaining a non-negative payoff by doing so.
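The payment rules of GSP and VCG can be made concrete with a toy slot auction under the standard separable click-through-rate model. The numbers below are made up, and the OPT mechanism, which additionally uses the advertisers' value distributions, is not reproduced here.

```python
# Toy GSP vs. VCG payment comparison for a two-slot sponsored search auction.

def gsp_payments(bids, ctrs):
    """GSP: the winner of slot i pays the next-highest bid per click."""
    bids = sorted(bids, reverse=True)
    return [bids[i + 1] * ctrs[i] if i + 1 < len(bids) else 0.0
            for i in range(len(ctrs))]

def vcg_payments(bids, ctrs):
    """VCG: each winner pays the externality it imposes on the others."""
    bids = sorted(bids, reverse=True)
    k = len(ctrs)
    pay = [0.0] * k
    # work upward from the last slot
    pay[k - 1] = bids[k] * ctrs[k - 1] if len(bids) > k else 0.0
    for i in range(k - 2, -1, -1):
        pay[i] = pay[i + 1] + bids[i + 1] * (ctrs[i] - ctrs[i + 1])
    return pay

bids = [5.0, 4.0, 2.0, 1.0]      # per-click bids, illustrative
ctrs = [0.3, 0.1]                # expected clicks per impression, by slot
print(gsp_payments(bids, ctrs))  # [1.2, 0.2]
print(vcg_payments(bids, ctrs))  # [1.0, 0.2]
```

In this example GSP extracts more revenue than VCG for the same bids, which is why truthfulness under GSP is not guaranteed: VCG charges only the externality, and that is precisely what makes truthful bidding a dominant strategy under VCG.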
Abstract:
Natural history collections are an invaluable resource, housing a wealth of knowledge with a long tradition of contributing to a wide range of fields such as taxonomy, quarantine, conservation and climate change. It is recognized, however, that such physical collections are often heavily underutilized as a result of the practical issues of accessibility [Smith and Blagoderov 2012]. The digitization of these collections is a step towards removing these access issues, but other hurdles must be addressed before we truly unlock the potential of this knowledge.
Abstract:
This study deals with algal species occurring commonly in the Baltic Sea: the haptophyte Prymnesium parvum, the dinoflagellates Dinophysis acuminata, D. norvegica and D. rotundata, and the cyanobacterium Nodularia spumigena. The hypotheses are connected to the toxicity of the species, the factors determining toxicity, the consequences of toxicity, and the transfer of toxins in the aquatic food web. Since the Baltic Sea is severely eutrophicated, the fast-growing haptophytes have the potential to cause toxic blooms. In our studies, the toxicity (as haemolytic activity) of the haptophyte P. parvum was highest under phosphorus-limited conditions, but the cells were also toxic under nitrogen limitation and under nutrient-balanced growth conditions. The cellular nutrient ratios were tightly related to the toxicity. The stoichiometric flexibility of the cellular phosphorus quota was higher than that of nitrogen, and nitrogen limitation led to decreased biomass. Negative allelopathic effects on another alga (Rhodomonas salina) could be observed already at low P. parvum cell densities, whereas immediate lysis of R. salina cells occurred at P. parvum cell densities corresponding to natural blooms. Release of dissolved organic carbon from the R. salina cells was measured within 30 minutes, and an increase in bacterial number and biomass was measured within 23 h. Because of the allelopathic effect, formation of a P. parvum bloom may accelerate after a critical cell density is reached and the competing species are eliminated. A P. parvum bloom indirectly stimulates bacterial growth and alters the functioning of the planktonic food web by increasing the carbon transfer through the microbial loop. Our results were the first reports of DSP toxins in Dinophysis cells in the Gulf of Finland and of PTX-2 in the Baltic Sea. Cellular toxin contents in Dinophysis spp. ranged from 0.2 to 149 pg DTX-1 per cell and from 1.6 to 19.9 pg PTX-2 per cell in the Gulf of Finland. D. norvegica was found mainly around the thermocline (max. 200 cells per litre), whereas D. acuminata was found in the whole mixed layer (max. 7 280 cells per litre). Toxins in the sediment trap corresponded to 1% of the DTX-1 and 0.01% of the PTX-2 in the DSP pool of the suspended matter. This indicates that the majority of the DSP toxins do not enter the benthic community but are either decomposed in the water column or transferred to higher trophic levels in the planktonic food chain. We found that nodularin, produced by Nodularia spumigena, was transferred to the copepod Eurytemora affinis through three pathways: by grazing on filaments of small Nodularia, directly from the dissolved pool, and through the microbial food web by copepods grazing on ciliates, dinoflagellates and heterotrophic nanoflagellates. The estimated proportion of the microbial food web in nodularin transfer was 22-45% and 71-76% in our two experiments, respectively. This highlights the potential role of the microbial food web in the transfer of toxins in the planktonic food web.