975 results for Collaborative content
Abstract:
Background: ‘Birth Satisfaction’ is a term that encompasses a woman’s evaluation of her birth experience. The term includes factors such as her appraisal of the quality of care she received, a personal assessment of how she coped, and her reconstructions of what happened on that particular day. Her accounts may be accurate or skewed, yet they correspond with her reality of how events unfolded. Objective: To evaluate the properties of an instrument designed to measure birth satisfaction in a Greek population of postnatal women. Study design: We assessed the factor structure, internal consistency, divergent validity and known-groups discriminant validity of the 30-item Greek Birth Satisfaction Scale – Long Form (30-item G-BSS-LF) and its revised version, the 10-item Greek BSS-Revised (10-item G-BSS-R), using survey data collected in Athens. Participants: A convenience sample of healthy Greek postnatal women (n = 162) aged 22–46 years who had delivered between 34 and 42 weeks’ gestation. Results: The 30-item G-BSS-LF performed poorly in terms of factor structure. The short-form 10-item G-BSS-R replicated the measurement structure of its English equivalent as a multidimensional instrument. The 10-item G-BSS-R comprises three subscales which measure distinct but correlated domains: (1) quality of care provision (4 items), (2) women’s personal attributes (2 items), and (3) stress experienced during labour (4 items). Key conclusions: The 10-item G-BSS-R is a valid and reliable multidimensional psychometric instrument for measuring birth satisfaction in Greek postnatal women.
Abstract:
The article considers the arguments that have been made in defence of social media screening, as well as issues that arise and may effectively erode the reliability and utility of such data for employers. First, the authors consider the legal frameworks and guidelines that exist in the UK and the USA, and the ethical concerns that arise when employers access and use social networking content for employment purposes. Second, several arguments in favour of the use of social networking content are made, each of which is examined from a number of angles, including concerns about impression management, bias and discrimination, data protection and security. Ultimately, the current state of knowledge does not provide a definite answer as to whether information from social networks is helpful in recruitment and selection.
Abstract:
With increasing international mobility, higher education must cater to the varying linguistic and cultural needs of students. Successful delivery of courses through English as the vehicular language is essential to encourage international enrollment. However, this cannot be achieved without preparing university professors for the many intricacies that delivering their subjects in English may pose. This paper aims to share preliminary data concerning Content and Language Integrated Learning (CLIL) at Laureate Network Universities worldwide, as few studies have been conducted at the tertiary level; to reflect upon data regarding student and teacher satisfaction with CLIL at the Universidad Europea de Madrid (UEM); and to propose improvements in English-taught subjects.
Abstract:
Pyatt, F.B., Pyatt, A.J., Walker, C., Sheen, T., Grattan, J.P., The heavy metal content of skeletons from an ancient metalliferous polluted area in southern Jordan with particular reference to bioaccumulation and human health, Ecotoxicology & Environmental Safety 60, 13th August 2003, 295-300.
Abstract:
Ratcliffe, M., Thomas, L., Ellis, W., Thomasson, B. Capturing Collaborative Designs to Assist the Pedagogical Process. ACM SIGCSE Bulletin, Volume 35, Issue 3 (September 2003).
Abstract:
M.H. Lee, Q. Meng and F. Chao, 'A Content-Neutral Approach for Sensory-Motor Learning in Developmental Robotics', EpiRob'06: Sixth International Conference on Epigenetic Robotics, Paris, 55-62, 2006.
Abstract:
Skøt, L., Humphreys, J., Humphreys, M. O., Thorogood, D., Gallagher, J. A., Sanderson, R., Armstead, I. P., Thomas, I. D. (2007). Association of candidate genes with flowering time and water-soluble carbohydrate content in Lolium perenne (L.). Genetics, 177 (1), 535-547. Sponsorship: BBSRC RAE2008
Abstract:
ImageRover is a search-by-image-content navigation tool for the World Wide Web. To gather images expediently, the image collection subsystem utilizes a distributed fleet of WWW robots running on different computers. The image robots gather information about the images they find, computing the appropriate image decompositions and indices, and store this extracted information in vector form for searches based on image content. At search time, users can iteratively guide the search through the selection of relevant examples. Search performance is made efficient through the use of an approximate, optimized k-d tree algorithm. The system employs a novel relevance feedback algorithm that selects the distance metrics appropriate for a particular query.
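As a rough illustration of the two mechanisms this abstract mentions, the sketch below (not the ImageRover implementation) runs an approximate k-d tree query over stand-in feature vectors and then re-weights feature dimensions from user-selected relevant examples; the feature dimensionality, eps value, and weighting rule are all illustrative assumptions.

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
features = rng.random((10_000, 64))          # stand-in for extracted image feature vectors
tree = cKDTree(features)

def search(query, k=10, eps=0.5):
    # eps > 0 allows approximate nearest-neighbour answers in exchange for speed
    _, idx = tree.query(query, k=k, eps=eps)
    return idx

def feedback_weights(relevant):
    # emphasise dimensions on which the user-marked relevant images agree
    w = 1.0 / (relevant.var(axis=0) + 1e-6)
    return w / w.sum()

def weighted_search(query, weights, k=10, eps=0.5):
    # weighted Euclidean metric, realised by rescaling both index and query
    scale = np.sqrt(weights)
    _, idx = cKDTree(features * scale).query(query * scale, k=k, eps=eps)
    return idx

hits = search(rng.random(64))
weights = feedback_weights(features[hits[:3]])    # user marks the first three hits as relevant
refined = weighted_search(rng.random(64), weights)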
Abstract:
Content providers often consider the costs of security to be greater than the losses they might incur without it; many view "casual piracy" as their main concern. Our goal is to provide a low-cost defense against such attacks while maintaining rigorous security guarantees. Our defense is integrated with and leverages fast forward error correcting codes, such as Tornado codes, which are widely used to facilitate reliable delivery of rich content. We tune one such family of codes - while preserving their original desirable properties - to guarantee that none of the original content can be recovered whenever a key subset of encoded packets is missing. Ultimately we encrypt only these key codewords (only 4% of all transmissions), making the security overhead negligible.
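As a toy sketch of the selective-encryption idea, the snippet below stands in a naive chunker for a real Tornado/FEC encoder and encrypts only a small "key" fraction of the packets; the chunking, the choice of which packets count as the key subset, and the use of Fernet as the cipher are all illustrative assumptions, not the paper's construction.

from cryptography.fernet import Fernet

def erasure_encode(data, n_packets):
    # stub: a real system would emit Tornado/FEC codewords, not raw file chunks
    size = -(-len(data) // n_packets)
    return [data[i * size:(i + 1) * size] for i in range(n_packets)]

def protect(packets, key_fraction, key):
    # encrypt only the "key" subset of codewords; the rest travel in the clear
    f = Fernet(key)
    n_key = max(1, int(len(packets) * key_fraction))
    return [f.encrypt(p) if i < n_key else p for i, p in enumerate(packets)]

key = Fernet.generate_key()
packets = erasure_encode(b"rich content" * 10_000, n_packets=100)
wire = protect(packets, key_fraction=0.04, key=key)   # only ~4% of packets carry ciphertext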
Abstract:
Overlay networks have emerged as a powerful and highly flexible method for delivering content. We study how to optimize throughput of large, multipoint transfers across richly connected overlay networks, focusing on the question of what to put in each transmitted packet. We first make the case for transmitting encoded content in this scenario, arguing for the digital fountain approach which enables end-hosts to efficiently restitute the original content of size n from a subset of any n symbols from a large universe of encoded symbols. Such an approach affords reliability and a substantial degree of application-level flexibility, as it seamlessly tolerates packet loss, connection migration, and parallel transfers. However, since the sets of symbols acquired by peers are likely to overlap substantially, care must be taken to enable them to collaborate effectively. We provide a collection of useful algorithmic tools for efficient estimation, summarization, and approximate reconciliation of sets of symbols between pairs of collaborating peers, all of which keep messaging complexity and computation to a minimum. Through simulations and experiments on a prototype implementation, we demonstrate the performance benefits of our informed content delivery mechanisms and how they complement existing overlay network architectures.
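One plausible instance of the reconciliation tools this abstract alludes to is sketched below, under simplifying assumptions: peer A summarises the IDs of the encoded symbols it holds in a Bloom filter, and peer B forwards only symbols the filter does not claim A already has. The filter parameters and symbol IDs are illustrative; the summaries and recoding schemes in the paper itself are not reproduced here.

import hashlib

class BloomFilter:
    def __init__(self, n_bits=1 << 16, n_hashes=4):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, item):
        for i in range(self.n_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

# Peer A summarises its symbols; peer B forwards only those A probably lacks.
symbols_a = set(range(0, 8000))
symbols_b = set(range(6000, 14000))
summary = BloomFilter()
for s in symbols_a:
    summary.add(s)
to_send = [s for s in symbols_b if s not in summary]   # false positives withhold a few useful symbols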
Abstract:
Some WWW image engines allow the user to form a query in terms of text keywords. To build the image index, keywords are extracted heuristically from HTML documents containing each image, and/or from the image URL and file headers. Unfortunately, text-based image engines have merely retro-fitted standard SQL database query methods, and it is difficult to include image cues within such a framework. On the other hand, visual statistics (e.g., color histograms) are often insufficient for helping users find desired images in a vast WWW index. By truly unifying textual and visual statistics, one would expect to get better results than either used separately. In this paper, we propose an approach that allows the combination of visual statistics with textual statistics in the vector space representation commonly used in query by image content systems. Text statistics are captured in vector form using latent semantic indexing (LSI). The LSI index for an HTML document is then associated with each of the images contained therein. Visual statistics (e.g., color, orientedness) are also computed for each image. The LSI and visual statistic vectors are then combined into a single index vector that can be used for content-based search of the resulting image database. By using an integrated approach, we are able to take advantage of possible statistical couplings between the topic of the document (latent semantic content) and the contents of images (visual statistics). This allows improved performance in conducting content-based search. This approach has been implemented in a WWW image search engine prototype.
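The sketch below is a rough rendering of the fusion step described here: an LSI-style vector computed from each image's surrounding text is concatenated with a colour histogram for the image, and the combined vector is normalised into a single index vector. Using sklearn's TruncatedSVD as the LSI step, the example documents, histogram size, and equal modality weights are all assumptions for illustration, not the prototype's parameters.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = ["sunset over the ocean", "campus map of the university", "portrait of a smiling student"]
tfidf = TfidfVectorizer().fit_transform(docs)              # text surrounding each image
lsi = TruncatedSVD(n_components=2).fit_transform(tfidf)    # latent semantic vectors

def colour_histogram(pixels, bins=8):
    hist, _ = np.histogram(pixels, bins=bins, range=(0, 255), density=True)
    return hist

def combined_index(lsi_vec, hist, w_text=0.5, w_visual=0.5):
    # normalise each modality, weight, concatenate, and renormalise the fused vector
    t = w_text * lsi_vec / (np.linalg.norm(lsi_vec) + 1e-9)
    v = w_visual * hist / (np.linalg.norm(hist) + 1e-9)
    fused = np.concatenate([t, v])
    return fused / (np.linalg.norm(fused) + 1e-9)

image = np.random.randint(0, 256, size=(32, 32))           # stand-in for real image pixels
index_vector = combined_index(lsi[0], colour_histogram(image))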
Abstract:
Dynamic service aggregation techniques can exploit skewed access popularity patterns to reduce the costs of building interactive VoD systems. These schemes seek to cluster and merge users into single streams by bridging the temporal skew between them, thus improving server and network utilization. Rate adaptation and secondary content insertion are two such schemes. In this paper, we present and evaluate an optimal scheduling algorithm for inserting secondary content in this scenario. The algorithm runs in polynomial time, and is optimal with respect to the total bandwidth usage over the merging interval. We present constraints on content insertion which make the overall QoS of the delivered stream acceptable, and show how our algorithm can satisfy these constraints. We report simulation results which quantify the excellent gains due to content insertion. We discuss dynamic scenarios with user arrivals and interactions, and show that content insertion reduces the channel bandwidth requirement to almost half. We also discuss differentiated service techniques, such as N-VoD and premium no-advertisement service, and show how our algorithm can support these as well.
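As a toy illustration of the merging mechanism (not the paper's optimal scheduling algorithm), the snippet below computes how many secondary-content slots the leading stream needs so that a trailing stream can absorb the temporal skew, spacing them under a simple QoS-style constraint; every duration here is a made-up number.

import math

def slots_needed(skew_s, slot_len_s):
    # each slot of secondary content pauses the leading stream's primary playback
    return math.ceil(skew_s / slot_len_s)

def schedule(skew_s=120.0, slot_len_s=30.0, min_gap_s=300.0):
    # space the slots at least min_gap_s of primary content apart (a QoS-style constraint)
    n = slots_needed(skew_s, slot_len_s)
    return [i * min_gap_s for i in range(n)]

print(schedule())   # 4 slots, at primary-content offsets 0, 300, 600 and 900 seconds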
Abstract:
We propose a new technique for efficiently delivering popular content from information repositories with bounded file caches. Our strategy relies on the use of fast erasure codes (a.k.a. forward error correcting codes) to generate encodings of popular files, of which only a small sliding window is cached at any time instant, even to satisfy an unbounded number of asynchronous requests for the file. Our approach capitalizes on concurrency to maximize sharing of state across different request threads while minimizing cache memory utilization. Additional reduction in resource requirements arises from providing for a lightweight version of the network stack. In this paper, we describe the design and implementation of our Cyclone server as a Linux kernel subsystem.
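Under simplifying assumptions, the sketch below mimics the caching strategy described here: only a small sliding window of encoded packets is held in memory, every concurrent request is served from that shared window, and a client simply needs enough distinct packets to decode. The stub encoder, window size, and function names are illustrative; the actual Cyclone server is a Linux kernel subsystem built on fast erasure codes.

from collections import deque
from itertools import count

def encode_stream(file_id):
    # stub: a real server would emit erasure-coded packets, decodable from any large-enough subset
    for seq in count():
        yield (seq, f"{file_id}:pkt{seq}".encode())

WINDOW = 32                              # packets cached for the file at any instant
window = deque(maxlen=WINDOW)            # the only per-file cache state, shared by all requests
stream = encode_stream("popular.iso")

def advance():
    # slide the window forward by one freshly encoded packet
    window.append(next(stream))

def serve(client_have):
    # hand the client any cached packet it does not already hold
    for seq, payload in window:
        if seq not in client_have:
            client_have.add(seq)
            return payload
    return None

client = set()
for _ in range(5):
    advance()
packet = serve(client)   # any request thread can be served from the same shared window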
Abstract:
In many networked applications, independent caching agents cooperate by servicing each other's miss streams, without revealing the operational details of the caching mechanisms they employ. Inference of such details could be instrumental for many other processes. For example, it could be used for optimized forwarding (or routing) of one's own miss stream (or content) to available proxy caches, or for making cache-aware resource management decisions. In this paper, we introduce the Cache Inference Problem (CIP) as that of inferring the characteristics of a caching agent, given the miss stream of that agent. While the CIP is unsolvable in its most general form, there are special cases of practical importance in which it is solvable, including when the request stream follows an Independent Reference Model (IRM) with a generalized power-law (GPL) demand distribution. For such cases, we design two basic "litmus" tests that are able to detect LFU and LRU replacement policies, the effective size of the cache and of the object universe, and the skewness of the GPL demand for objects. Using extensive experiments under synthetic as well as real traces, we show that our methods infer such characteristics accurately and quite efficiently, and that they remain robust even when the IRM/GPL assumptions do not hold, and even when the underlying replacement policies are not "pure" LFU or LRU. We exemplify the value of our inference framework by considering example applications.
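The brute-force sketch below illustrates the inference idea in a much cruder way than the paper's litmus tests: it replays a synthetic GPL/Zipf request stream through candidate LRU and LFU cache simulators and checks whose miss stream best matches the one observed from the black-box agent. It assumes the request stream is also observable, which the paper does not require; cache size, object universe, and the matching score are illustrative choices.

import random
from collections import OrderedDict, Counter

def zipf_stream(n_objects=500, length=20_000, alpha=1.0, seed=1):
    rng = random.Random(seed)
    weights = [1.0 / (i ** alpha) for i in range(1, n_objects + 1)]
    return rng.choices(range(n_objects), weights=weights, k=length)

def miss_stream_lru(requests, capacity):
    cache, misses = OrderedDict(), []
    for r in requests:
        if r in cache:
            cache.move_to_end(r)
        else:
            misses.append(r)
            cache[r] = True
            if len(cache) > capacity:
                cache.popitem(last=False)   # evict the least recently used object
    return misses

def miss_stream_lfu(requests, capacity):
    cache, freq, misses = set(), Counter(), []
    for r in requests:
        freq[r] += 1
        if r not in cache:
            misses.append(r)
            if len(cache) >= capacity:
                cache.remove(min(cache, key=lambda o: freq[o]))   # evict the least frequently used
            cache.add(r)
    return misses

requests = zipf_stream()
observed = miss_stream_lru(requests, capacity=50)   # pretend this came from the black-box agent
candidates = {"LRU": miss_stream_lru, "LFU": miss_stream_lfu}
scores = {name: sum(a == b for a, b in zip(observed, f(requests, 50))) / len(observed)
          for name, f in candidates.items()}
print(max(scores, key=scores.get))                  # best-matching candidate policy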
Abstract:
This research interrogates the status of citizenship education in Irish secondary schools. The following questions are examined: How does school culture impact on citizenship education? What value is accorded to the subjects Civic, Social and Political Education (CSPE) and Social, Personal and Health Education (SPHE)? To what extent are the subjects of both the cognitive and non-cognitive curricula affirmed? The importance of these factors in supporting the social, ethical, personal, political and emotional development of students is explored. The concept of citizenship is dynamic and constantly evolving in response to societal change. Society is increasingly concerned with issues such as globalisation, cosmopolitanism, the threat of global risk, environmental sustainability, socio-economic inequality, and the recognition/misrecognition of new identities and group rights. The pedagogical philosophy of Paulo Freire, which seeks to educate for the conscientisation and humanisation of the student, is central to this research. Using a mixed methods approach, data on the insights of students, parents, teachers and school Principals were collected. In relation to Irish secondary school education, the study reached three main conclusions. (1) The educational stakeholders rate the subjects of the non-cognitive curriculum poorly. (2) The subjects Civic, Social and Political Education (CSPE) and Social, Personal and Health Education (SPHE) command a low status in the secondary school setting. (3) The day-to-day school climate is influenced by an educational philosophy that is instrumentalist in character. Elements of school culture such as the ethic of care, the informal curriculum, education for life after school, and the affirmation of teachers are not sufficiently prioritised in supporting education for citizenship. The research concludes that the approach to education for citizenship needs to be more robust within the overall curriculum, culture and ethos of the Irish education system.