Abstract:
This thesis opens up the design space for awareness research in CSCW and HCI. By challenging the prevalent understanding of roles in awareness processes and exploring different mechanisms for actively engaging users in the awareness process, this thesis provides a better understanding of the complexity of these processes and suggests practical solutions for designing and implementing systems that support active awareness. Mutual awareness, a prominent research topic in the fields of Computer-Supported Cooperative Work (CSCW) and Human-Computer Interaction (HCI), refers to a fundamental aspect of a person's work: their ability to gain a better understanding of a situation by perceiving and interpreting their co-workers' actions. Technologically mediated awareness, used to support co-workers across distributed settings, distinguishes between the roles of the actor, whose actions are often limited to being the target of an automated data-gathering process, and the receiver, who wants to be made aware of the actors' actions. This receiver-centric view of awareness, which focuses on helping receivers deal with complex sets of awareness information, stands in stark contrast to our understanding of awareness as a social process involving complex interactions between actors and receivers. It fails to take into account an actor's intimate understanding of their own activities and the contribution that this subjective understanding could make in providing richer awareness information. In this thesis I challenge the prevalent receiver-centric notion of awareness and explore the conceptual foundations, design, implementation and evaluation of an alternative, active awareness approach by making the following five contributions. Firstly, I identify the limitations of existing awareness research and solicit further evidence to support the notion of active awareness. I analyse ethnographic workplace studies that demonstrate how actors engage in an intricate interplay involving the monitoring of their co-workers' progress and the display of aspects of their own activities that may be of relevance to others. The examination of a large body of awareness research reveals that while disclosing information is a common practice in face-to-face collaborative settings, it has been neglected in implementations of technically mediated awareness. Based on these considerations, I introduce the notion of intentional disclosure to describe the action of users actively and deliberately contributing awareness information. Secondly, I consider challenges and potential solutions for the design of active awareness. I compare a range of systems, each allowing users to share information about their activities at various levels of detail. I discuss one of the main challenges for active awareness: that disclosing information about activities requires some degree of effort. I discuss various representations of effort in collaborative work. These considerations reveal a trade-off between the richness of awareness information and the effort required to provide it. Thirdly, I propose a framework for active awareness, intended to help designers understand the scope and limitations of different types of intentional disclosure. I draw on the identified richness/effort trade-off to develop two types of intentional disclosure, both of which aim to facilitate the disclosure of information while reducing the effort required to do so.
For both of these approaches, direct and indirect disclosure, I delineate how they differ from related approaches and define a set of design criteria intended to guide their implementation. Fourthly, I demonstrate how the framework of active awareness can be applied in practice by building two proof-of-concept prototypes that implement direct and indirect disclosure respectively. AnyBiff, implementing direct disclosure, allows users to create, share and use shared representations of activities in order to express their current actions and intentions. SphereX, implementing indirect disclosure, represents shared areas of interest or working contexts, and links sets of activities to these representations. Lastly, I present the results of the qualitative evaluation of the two prototypes and analyse the extent to which each implemented its respective disclosure mechanism and supported active awareness. Both systems were deployed and tested in real-world environments. The results for AnyBiff showed that users developed a wide range of activity representations, some unanticipated, and actively used the system to disclose information. The results further highlighted a number of design considerations relating to the relationship between awareness and communication, and the role of ambiguity. The evaluation of SphereX validated the feasibility of the indirect disclosure approach. However, the study highlighted the challenges of implementing cross-application awareness support and of conveying the concept to users. The study resulted in design recommendations aimed at improving the implementation of future systems.
Abstract:
Early work on Private Information Retrieval (PIR) focused on minimizing the necessary communication overhead. These protocols achieved this goal, but at the expense of query response time. To mitigate this weakness, protocols with secure coprocessors were introduced. They achieve optimal communication complexity and better online processing complexity. Unfortunately, all secure coprocessor-based PIR protocols require heavy periodic preprocessing. In this paper, we propose a new protocol that is free from periodic preprocessing while offering optimal communication complexity and almost optimal online processing complexity. The proposed protocol is proven to be secure.
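For intuition about the communication/computation trade-off discussed above, here is a minimal sketch of the classic two-server XOR-based PIR scheme, not the single-server protocol proposed in the paper: each query individually is uniformly random, so a single server learns nothing, but every answer requires scanning the whole database, which is exactly the online cost that coprocessor-based protocols try to reduce. All names and parameters are illustrative.

```python
import secrets

def pir_query(db_size: int, index: int):
    """Client: build two random-looking queries whose XOR isolates `index`."""
    q0 = [secrets.randbelow(2) for _ in range(db_size)]
    q1 = q0.copy()
    q1[index] ^= 1  # the two queries differ only at the target position
    return q0, q1

def pir_answer(db: list, query: list) -> int:
    """Server: XOR of all records selected by the query bits.
    Each server touches the whole database -- the linear online cost."""
    acc = 0
    for bit, record in zip(query, db):
        if bit:
            acc ^= record
    return acc

# Toy database of 1-bit records held by two non-colluding servers.
db = [1, 0, 1, 1, 0, 0, 1, 0]
q0, q1 = pir_query(len(db), index=3)
record = pir_answer(db, q0) ^ pir_answer(db, q1)
assert record == db[3]
```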
Abstract:
In the field of information retrieval (IR), researchers and practitioners are often faced with a demand for valid approaches to evaluate the performance of retrieval systems. The Cranfield experiment paradigm has been dominant for the in-vitro evaluation of IR systems. As an alternative to this paradigm, laboratory-based user studies have been widely used to evaluate interactive information retrieval (IIR) systems and, at the same time, to investigate users' information searching behaviours. Major drawbacks of laboratory-based user studies for evaluating IIR systems include the high monetary and temporal costs involved in setting up and running those experiments, the lack of heterogeneity amongst the user population and the limited scale of the experiments, which usually involve a relatively restricted set of users. In this paper, we propose an alternative experimental methodology to laboratory-based user studies. Our novel experimental methodology uses a crowdsourcing platform as a means of engaging study participants. Through crowdsourcing, our experimental methodology can capture user interactions and searching behaviours at a lower cost, with more data, and within a shorter period than traditional laboratory-based user studies, and therefore can be used to assess the performance of IIR systems. In this article, we show the characteristic differences of our approach with respect to traditional IIR experimental and evaluation procedures. We also perform a use case study comparing crowdsourcing-based evaluation with laboratory-based evaluation of IIR systems, which can serve as a tutorial for setting up crowdsourcing-based IIR evaluations.
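As a rough illustration of the kind of analysis such crowdsourced interaction data enables, the sketch below computes one common IIR behavioural measure, time to first click, from hypothetical logged events. The log schema and field names are assumptions for illustration, not the paper's.

```python
from statistics import mean

# Hypothetical interaction records of the kind a crowdsourcing task
# could capture (field names are illustrative only).
logs = [
    {"user": "w1", "session": "s1", "event": "query", "t": 0.0},
    {"user": "w1", "session": "s1", "event": "click", "t": 5.2},
    {"user": "w2", "session": "s2", "event": "query", "t": 0.0},
    {"user": "w2", "session": "s2", "event": "query", "t": 11.0},
    {"user": "w2", "session": "s2", "event": "click", "t": 14.8},
]

def time_to_first_click(events):
    """Seconds between the first query and the first click of a session."""
    first_query = min(e["t"] for e in events if e["event"] == "query")
    clicks = [e["t"] for e in events if e["event"] == "click"]
    return min(clicks) - first_query if clicks else None

sessions = {}
for e in logs:
    sessions.setdefault(e["session"], []).append(e)

ttfc = [time_to_first_click(ev) for ev in sessions.values()]
print("mean time to first click:", mean(t for t in ttfc if t is not None))
```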
Abstract:
Information technology (IT) plays a critical role as an enabler of activities that improve the performance of business processes. This enabling role of IT resources means that continuous investment in IT is a strategic necessity. It is well established that organizations' IT-related capabilities leverage the enabling potential of IT resources. Today's turbulent and challenging business environment requires organizations to get more from their existing and newly acquired IT resources. To achieve this, organizations need to discover ways, or establish environments, that nourish their existing IT-related capabilities and develop new ones. We suggest one such environment: a dynamic IT-learning environment that could contribute to nourishing existing IT-related capabilities and developing new IT-related capabilities. This environment is a product of the coordination of four organizational factors relating to the ways in which IT-related knowledge is applied to business processes, the accompanying reward structures, and the ways in which IT-related learning and knowledge are shared within the organization. Using 216 field survey responses, this paper shows that two IT-related capabilities, top management commitment to IT initiatives and shared organizational knowledge between IT and business unit managers, have a stronger positive influence on business process performance in the presence of this dynamic IT-learning environment. The study also shows that a marginal IT-related capability, technical IT skills, has a positive and significant influence on business process performance in the presence of this environment. These outcomes imply that organizations' internal environments could contribute to the management of their IT-related capabilities.
Abstract:
Two sources of uncertainty in the X-ray computed tomography imaging of polymer gel dosimeters are investigated in this paper. The first is a change in post-irradiation density, which is proportional to the computed tomography signal and is associated with a volume change. The second is reconstruction noise. A simple technique that increases the residual signal-to-noise ratio by almost two orders of magnitude is examined.
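The abstract does not name the noise-reduction technique, but averaging repeated reconstructions is one standard way to gain roughly two orders of magnitude in SNR, since the noise of an average of n independent scans falls as 1/sqrt(n). The sketch below, with illustrative signal and noise values, demonstrates that scaling; it is not necessarily the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal = 1.0      # residual CT signal, arbitrary units (illustrative)
sigma = 5.0            # per-reconstruction noise standard deviation (illustrative)

def snr_after_averaging(n_scans: int, trials: int = 500) -> float:
    """Empirical SNR of the pixelwise mean of n_scans noisy reconstructions."""
    scans = true_signal + sigma * rng.standard_normal((trials, n_scans))
    return true_signal / scans.mean(axis=1).std()

for n in (1, 100, 10_000):
    print(n, round(snr_after_averaging(n), 1))
# SNR grows as sqrt(n): averaging ~10^4 reconstructions yields ~100x,
# i.e. almost two orders of magnitude, over a single scan.
```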
Abstract:
We consider the following problem: users of an organization wish to outsource the storage of sensitive data to a large database server. Since the server storing the data is assumed to be untrusted, the stored data have to be encrypted. We further suppose that the manager of the organization has the right to access all data, but that a member of the organization cannot access any data alone; the member must collaborate with other members to search for the desired data. In this paper, we investigate the notion of threshold privacy-preserving keyword search (TPPKS) and define its security requirements. We construct a TPPKS scheme and prove its security under the intractability assumptions of the discrete logarithm, decisional Diffie-Hellman and computational Diffie-Hellman problems.
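The threshold property described here, that no single member can search alone, is commonly realised by secret-sharing a search key among the members. The sketch below illustrates that property with plain Shamir secret sharing; it is not the TPPKS construction itself, which rests on the discrete logarithm and Diffie-Hellman assumptions.

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; all arithmetic is in GF(P)

def share(secret: int, t: int, n: int):
    """Split `secret` into n shares so that any t of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 recovers the secret from t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = secrets.randbelow(P)            # e.g. a keyword-search trapdoor key
shares = share(key, t=3, n=5)
assert reconstruct(shares[:3]) == key  # any 3 members can search together
assert reconstruct(shares[:2]) != key  # 2 alone recover nothing useful
```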
Abstract:
Moving cell fronts are an essential feature of wound healing, development and disease. The rate at which a cell front moves is driven, in part, by the cell motility, quantified in terms of the cell diffusivity $D$, and by the cell proliferation rate $\lambda$. Scratch assays are a commonly reported procedure used to investigate the motion of cell fronts: an initial cell monolayer is scratched and the motion of the front is monitored over a short period of time, often less than 24 hours. The simplest way of quantifying a scratch assay is to monitor the progression of the leading edge. Leading edge data are very convenient since, unlike other methods, collecting them is nondestructive and does not require labeling, tracking or counting individual cells amongst the population. In this work we study short-time leading edge data in a scratch assay using a discrete mathematical model and automated image analysis, with the aim of investigating whether such data allow us to reliably identify $D$ and $\lambda$. Using a naïve calibration approach, in which we simply scan the relevant region of the $(D, \lambda)$ parameter space, we show that there are many choices of $D$ and $\lambda$ for which our model produces indistinguishable short-time leading edge data. Therefore, without due care, it is impossible to estimate $D$ and $\lambda$ from this kind of data. To address this, we present a modified approach that accounts for the fact that cell motility occurs over a much shorter time scale than proliferation. Using this information, we divide the duration of the experiment into two periods: we estimate $D$ using data from the first period, and we estimate $\lambda$ using data from the second period. We confirm the accuracy of our approach using in silico data and a new set of in vitro data, showing that our method recovers estimates of $D$ and $\lambda$ that are consistent with previously reported values, with the advantage that our approach is fast, inexpensive, nondestructive and avoids the need for cell labeling and cell counting.
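The identifiability problem described above can be reproduced with a continuum stand-in for the paper's discrete model: in a 1D Fisher-KPP model, the short-time position of the leading edge is dominated by diffusion, so rather different proliferation rates yield similar front positions. The sketch below uses illustrative parameter values, not those of the paper.

```python
import numpy as np

def leading_edge(D, lam, hours, L=2000.0, nx=400, threshold=0.05):
    """Final scratch-front position in a 1D Fisher-KPP model,
    u_t = D u_xx + lam u (1 - u), used here as a continuum stand-in
    for the paper's discrete model (all values are illustrative)."""
    dx = L / nx
    dt = 0.2 * dx * dx / D                 # explicit-Euler stability margin
    x = np.linspace(0.0, L, nx)
    u = np.where(x < L / 2, 1.0, 0.0)      # confluent monolayer left of the scratch
    for _ in range(int(hours / dt)):
        lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
        lap[0] = lap[-1] = 0.0             # hold the boundary values fixed
        u += dt * (D * lap + lam * u * (1.0 - u))
    return x[np.argmax(u < threshold)]     # first position below the threshold

# Very different proliferation rates, yet similar 12 h edge positions
# (D in um^2/h, lam in 1/h; hypothetical but biologically plausible values):
for lam in (0.01, 0.05):
    print(lam, round(leading_edge(D=500.0, lam=lam, hours=12.0), 1))
```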
Abstract:
We address the problem of finite horizon optimal control of discrete-time linear systems with input constraints and uncertainty. The uncertainty for the problem analysed is related to incomplete state information (output feedback) and stochastic disturbances. We analyse the complexities associated with finding optimal solutions. We also consider two suboptimal strategies that could be employed for larger optimization horizons.
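For reference, the unconstrained, full-state-information version of this problem has a classical closed-form solution via the backward Riccati recursion; input constraints and incomplete state information are what remove this tractability. Below is a minimal sketch of that baseline with an illustrative system, not one taken from the paper.

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, Qf, N):
    """Backward Riccati recursion for the unconstrained finite-horizon
    LQR problem -- the full-state-information baseline on top of which
    constrained/output-feedback formulations add complexity."""
    P = Qf
    gains = []
    for _ in range(N):
        BtPA = B.T @ P @ A
        K = np.linalg.solve(R + B.T @ P @ B, BtPA)  # K = (R+B'PB)^{-1} B'PA
        P = Q + A.T @ P @ A - BtPA.T @ K            # Riccati update
        gains.append(K)
    return list(reversed(gains))  # K_0 ... K_{N-1}, with u_k = -K_k x_k

# Illustrative double-integrator-like system and weights:
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2); R = np.array([[1.0]]); Qf = 10 * np.eye(2)
for K in finite_horizon_lqr(A, B, Q, R, Qf, N=3):
    print(K)
```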
Abstract:
In this paper, we explore how BIM functionalities, together with novel management concepts and methods, have been utilized in thirteen hospital projects in the United States and the United Kingdom. Secondary data collection and analysis were used as the method. Initial findings indicate that the utilization of BIM enables a holistic view of project delivery and helps to integrate project parties into a collaborative process. The initiative to implement BIM must come from the top down to enable early involvement of all key stakeholders. It appears that it is people's resistance to adapting to new ways of working and thinking, rather than the immaturity of the technology, that hinders the utilization of BIM.
Abstract:
Lean construction and building information modeling (BIM) are quite different initiatives, but both are having profound impacts on the construction industry. A rigorous analysis of the myriad specific interactions between them indicates that a synergy exists which, if properly understood in theoretical terms, can be exploited to improve construction processes beyond the degree to which they might be improved by the application of either of these paradigms independently. Using a matrix that juxtaposes BIM functionalities with prescriptive lean construction principles, 56 interactions have been identified, all but four of which represent constructive interaction. Although evidence for the majority of these has been found, the matrix is not considered complete but rather a framework for research to explore the degree of validity of the interactions. Construction executives, managers, designers, and developers of information technology systems for construction can also benefit from the framework as an aid to recognizing the potential synergies when planning their lean and BIM adoption strategies.
Abstract:
This paper investigates the mutual relations of three current drivers of construction: lean construction, building information modelling and sustainability. These drivers stem from changes in their respective domains that occur infrequently and are only incidentally simultaneous. It is contended that the drivers are mutually supportive and thus synergistic. They are aligned in the sense that all require, promote or enable collaboration. It is argued that these three drivers should be implemented in a unified manner for rapid and robust improvements in construction industry performance and in the quality of the constructed facilities and their benefits for stakeholders and wider society.
Abstract:
Building with Building Information Modelling (BIM) changes design and production processes. But can BIM also be used to support process changes designed according to lean production and lean construction principles? To begin to answer this question, we provide a conceptual analysis of the interaction of lean construction and BIM for improving construction. This was investigated by compiling a detailed listing of lean construction principles and of BIM functionalities that are relevant from this perspective, drawn from a detailed literature survey. A research framework for analysing the interaction between lean and BIM was then compiled. The goal of the framework is both to guide and to stimulate research; as such, the approach adopted up to this point is constructive. Ongoing research has identified 55 such interactions, the majority of which show positive synergy between the two.
Abstract:
BACKGROUND Asthma severity and control can be measured both subjectively and objectively. Sputum analysis for evaluation of the percentage of sputum eosinophils directly measures airway inflammation and is one method of objectively monitoring asthma. Interventions for asthma therapy have traditionally been based on symptoms and spirometry. OBJECTIVES To evaluate the efficacy of tailoring asthma interventions based on sputum analysis, in comparison to clinical symptoms (with or without spirometry/peak flow), for asthma-related outcomes in children and adults. SEARCH STRATEGY We searched the Cochrane Airways Group Specialised Register of Trials, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE and reference lists of articles. The last search was on 31 October 2006. SELECTION CRITERIA All randomised controlled comparisons of adjustment of asthma therapy based on sputum eosinophils compared to traditional methods (primarily clinical symptoms and spirometry/peak flow). DATA COLLECTION AND ANALYSIS Results of searches were reviewed against pre-determined criteria for inclusion. Three sets of reviewers selected relevant studies. Two review authors independently assessed trial quality and extracted data. Authors were contacted for further information, but no responses were received. Data were analysed on a "treatment received" basis and sensitivity analyses were performed. MAIN RESULTS Three adult studies were included; these studies were clinically and methodologically heterogeneous (use of medications, cut-off for percentage of sputum eosinophils and definition of asthma exacerbation). There were no eligible paediatric studies. Of 246 participants randomised, 221 completed the trials. In the meta-analysis, a significant reduction in the number of participants who had one or more asthma exacerbations occurred when treatment was based on sputum eosinophils in comparison to clinical symptoms; the pooled odds ratio (OR) was 0.49 (95% CI 0.28 to 0.87) and the number needed to treat to benefit (NNTB) was 6 (95% CI 4 to 32). There were also differences between groups in the rate of exacerbations (any exacerbation per year) and in the severity of exacerbations, defined by the requirement for oral corticosteroids, but the reduction in hospitalisations was not statistically significant. Data for clinical symptoms, quality of life and spirometry were not significantly different between groups. The mean dose of inhaled corticosteroids per day was similar in both groups and no adverse events were reported. However, sputum induction was not always possible. AUTHORS' CONCLUSIONS Tailoring asthma interventions based on sputum eosinophils is beneficial in reducing the frequency of asthma exacerbations in adults with asthma. This review supports the use of sputum eosinophils to tailor asthma therapy for adults with frequent exacerbations and severe asthma. Further studies need to be undertaken to strengthen these results, and no conclusion can be drawn for children with asthma.
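Note that an NNTB such as the 6 reported above is not determined by the pooled OR alone: converting an odds ratio into a number needed to treat requires assuming a baseline (control) event rate. The snippet below applies the standard conversion with a hypothetical control event rate of 45%, chosen purely for illustration, which reproduces an NNT of about 6.

```python
def nnt_from_or(odds_ratio: float, cer: float) -> float:
    """Number needed to treat implied by an odds ratio at a given
    control event rate (CER), via the standard conversion."""
    eer = odds_ratio * cer / (1 - cer + odds_ratio * cer)  # experimental event rate
    return 1.0 / (cer - eer)                               # 1 / absolute risk reduction

# Pooled OR from the review; the CER is a hypothetical value -- the NNT
# always depends on the baseline risk, which the abstract does not state.
print(round(nnt_from_or(0.49, cer=0.45), 1))  # ~6, matching the reported NNTB
```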
Abstract:
Background The measurement of severity and control of asthma in both children and adults can be based on subjective or objective measures. It has been advocated that fractional exhaled nitric oxide (FeNO) can be used to monitor airway inflammation, as it correlates with some markers of asthma. Interventions for asthma therapy have traditionally been based on symptoms and/or spirometry. Objectives To evaluate the efficacy of tailoring asthma interventions based on exhaled nitric oxide, in comparison to clinical symptoms (with or without spirometry/peak flow), for asthma-related outcomes in children and adults. Search methods We searched the Cochrane Airways Group Specialised Register of Trials, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE and reference lists of articles. The last search was completed in February 2009. Selection criteria All randomised controlled comparisons of adjustment of asthma therapy based on exhaled nitric oxide compared to traditional methods (primarily clinical symptoms and spirometry/peak flow). Data collection and analysis Results of searches were reviewed against pre-determined criteria for inclusion. Relevant studies were independently selected in duplicate. Two authors independently assessed trial quality and extracted data. Authors were contacted for further information, with a response from one. Main results Two studies have been added for this update, which now includes six studies (two in adults and four in children/adolescents); these studies differed in a variety of ways, including the definition of asthma exacerbations, FeNO cut-off levels, the way in which FeNO was used to adjust therapy, and the duration of the study. Of 1053 participants randomised, 1010 completed the trials. In the meta-analysis, there was no significant difference between groups for the primary outcome of asthma exacerbations or for other outcomes (clinical symptoms, FeNO level and spirometry). In a post-hoc analysis, a significant reduction in the mean final daily dose of inhaled corticosteroids per adult was found in the group where treatment was based on FeNO in comparison to clinical symptoms (mean difference -450 mcg; 95% CI -677 to -223 mcg budesonide equivalent/day). However, the total amount of inhaled corticosteroids used in one of the adult studies was 11% greater in the FeNO arm. In contrast, in the paediatric studies, there was a significant increase in inhaled corticosteroid dose in the FeNO strategy arm (mean difference 140 mcg; 95% CI 29 to 251 mcg budesonide equivalent/day). Authors' conclusions Tailoring the dose of inhaled corticosteroids based on exhaled nitric oxide, in comparison to clinical symptoms, was carried out in different ways in the six studies and showed only modest benefit at best, with potentially higher doses of inhaled corticosteroids in children. The use of exhaled nitric oxide to tailor the dose of inhaled corticosteroids cannot be routinely recommended for clinical practice at this stage, and its role remains uncertain.