900 results for: recommender system, user profiling, personalization, implicit feedback



Relevance: 40.00%

Publisher:

Abstract:

This paper reports on a study of service users' views of Irish child protection services. Qualitative interviews were conducted with 67 service users, including young people aged between 13 and 23. The findings showed that, despite refocusing and public service management reforms, service users still experience involvement with the services as intimidating and stressful, and while they acknowledged opportunities to participate in the child protection process, they found the experience very difficult. Their definition of ‘needs’ was somewhat at odds with that suggested in official documentation, and they viewed the execution of a child protection plan more as a coercive requirement to comply with ‘tasks’ set by workers than as a conjoint effort to enhance their children's welfare. As in previous studies, the data showed how the development of good relationships between workers and service users could compensate for the harsher aspects of involvement with child protection. In addition, this study demonstrated a high level of discernment on the part of service users, highlighting their expectation of quality standards in respect of courtesy, respect, accountability, transparency and practitioner expertise.

Relevance: 40.00%

Publisher:

Abstract:

Background: Popular approaches in human tissue-based biomarker discovery include tissue microarrays (TMAs) and DNA microarrays (DMAs) for protein and gene expression profiling, respectively. The data generated by these analytic platforms, together with the associated image, clinical and pathological data, currently reside on widely different information platforms, making searching and cross-platform analysis difficult. Consequently, there is a strong need for a single coherent database capable of correlating all available data types.

Method: This study presents TMAX, a database system designed to facilitate biomarker discovery tasks. TMAX organises a variety of biomarker discovery-related data into a single database. Both TMA and DMA experimental data are integrated in TMAX and connected through common DNA/protein biomarkers. Patient clinical data (including tissue pathological data), computer-assisted tissue images and the associated analytic data are also included in TMAX, enabling truly high-throughput processing of ultra-large digital slides for both TMAs and whole-slide tissue images. A comprehensive web front-end was built, with embedded XML parser software and predefined SQL queries, to enable rapid data exchange in the form of standard XML files.

Results & Conclusion: TMAX represents one of the first attempts to integrate TMA data with public gene expression experiment data. Experiments suggest that TMAX is robust in managing large quantities of data from different sources (clinical, TMA, DMA and image analysis). Its web front-end is user-friendly, easy to use and, most importantly, allows the rapid and easy exchange of biomarker discovery-related data. In conclusion, TMAX is a robust biomarker discovery data repository and research tool, which opens up opportunities for biomarker discovery and further integromics research.
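A minimal sketch of the kind of predefined SQL query and XML export the abstract describes, assuming a purely illustrative SQLite schema; all table, column and biomarker names here are invented for the example and are not the real TMAX schema:

```python
import sqlite3
import xml.etree.ElementTree as ET

# In-memory stand-in for the real database; the schema is purely illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE biomarker   (id INTEGER PRIMARY KEY, symbol TEXT);
CREATE TABLE tma_result  (biomarker_id INTEGER, protein_score REAL);
CREATE TABLE dma_result  (biomarker_id INTEGER, expression_level REAL);
INSERT INTO biomarker  VALUES (1, 'HER2');
INSERT INTO tma_result VALUES (1, 2.7);
INSERT INTO dma_result VALUES (1, 8.4);
""")

# Predefined query joining TMA protein scores with DMA expression values
# through the shared biomarker, as the Method section describes.
QUERY = """
SELECT b.symbol, t.protein_score, d.expression_level
FROM biomarker b
JOIN tma_result t ON t.biomarker_id = b.id
JOIN dma_result d ON d.biomarker_id = b.id
WHERE b.symbol = ?;
"""

# Serialise the cross-platform result as a standard XML file for exchange.
root = ET.Element("biomarker_report")
for symbol, score, expression in conn.execute(QUERY, ("HER2",)):
    rec = ET.SubElement(root, "record", attrib={"symbol": symbol})
    ET.SubElement(rec, "tma_protein_score").text = str(score)
    ET.SubElement(rec, "dma_expression").text = str(expression)
print(ET.tostring(root, encoding="unicode"))
```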

Relevance: 40.00%

Publisher:

Abstract:

Keywords: Internet Traffic, Internet Applications, Internet Attacks, Traffic Profiling, Multi-Scale Analysis

Nowadays, the Internet can be seen as an ever-changing platform where new and different types of services and applications are constantly emerging. In fact, many of the existing dominant applications, such as social networks, have appeared recently and been rapidly adopted by the user community. All these new applications required the implementation of novel communication protocols that present different network requirements, according to the service they deploy. All this diversity and novelty has led to an increasing need to accurately profile Internet users, by mapping their traffic to the originating application, in order to improve many network management tasks such as resource optimization, network performance, service personalization and security. However, accurately mapping traffic to its originating application is a difficult task due to the inherent complexity of existing network protocols and to several restrictions that prevent the analysis of the contents of the generated traffic. In fact, many technologies, such as traffic encryption, are widely deployed to assure and protect the confidentiality and integrity of communications over the Internet. On the other hand, many legal constraints also forbid the analysis of clients' traffic in order to protect their confidentiality and privacy. Consequently, novel traffic discrimination methodologies are necessary for accurate traffic classification and user profiling.

This thesis proposes several identification methodologies for accurate Internet traffic profiling while coping with the different restrictions mentioned above and with the existing encryption techniques. By analyzing the several frequency components present in the captured traffic and inferring the presence of the different network and user-related events, the proposed approaches are able to create a profile for each of the analyzed Internet applications. The use of several probabilistic models allows the accurate association of the analyzed traffic with the corresponding application. Several enhancements are also proposed to allow the identification of hidden illicit patterns and the real-time classification of captured traffic. In addition, a new network management paradigm for wired and wireless networks is proposed. The analysis of layer-2 traffic metrics and of the different frequency components present in the captured traffic allows efficient user profiling in terms of the web applications used. Finally, some usage scenarios for these methodologies are presented and discussed.
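A minimal sketch of the frequency-based profiling idea the abstract outlines, assuming per-interval packet counts as input; the spectral features, application names and Gaussian profile values are illustrative assumptions, not the thesis's actual models:

```python
import numpy as np

def spectrum(packet_counts: np.ndarray) -> np.ndarray:
    """Normalised magnitudes of the first 8 frequency bins of a
    packet-count time series (series must have >= 14 samples)."""
    mags = np.abs(np.fft.rfft(packet_counts - packet_counts.mean()))
    return mags[:8] / (mags.sum() + 1e-12)

# Offline-built application profiles: assumed mean/std of each spectral bin.
PROFILES = {
    "streaming": (np.full(8, 0.20), np.full(8, 0.05)),
    "p2p":       (np.full(8, 0.10), np.full(8, 0.08)),
}

def classify(packet_counts: np.ndarray) -> str:
    """Associate a flow with the application whose independent-Gaussian
    profile gives the highest log-likelihood for its spectral features."""
    feats = spectrum(packet_counts)
    def log_lik(mu, sd):
        return -0.5 * np.sum(((feats - mu) / sd) ** 2
                             + np.log(2 * np.pi * sd ** 2))
    return max(PROFILES, key=lambda app: log_lik(*PROFILES[app]))

counts = np.tile([40.0, 38.0, 41.0, 39.0], 8)   # example steady flow
print(classify(counts))                          # best-matching assumed profile
```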

Relevance: 40.00%

Publisher:

Abstract:

In this paper we discuss our research in developing a general and systematic method for anomaly detection. The key ideas are to represent normal program behaviour using system call frequencies and to incorporate probabilistic classification techniques to detect anomalies and intrusions. Using experiments on the sendmail system call data, we demonstrate that we can construct concise and accurate classifiers to detect anomalies. We provide an overview of the approach that we have implemented.
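A minimal sketch of the described approach, assuming a simple multinomial model over system-call frequencies; the call names, smoothing, unseen-call penalty and threshold are illustrative choices rather than the paper's exact classifier:

```python
from collections import Counter
import math

def train(normal_traces: list[list[str]]) -> dict[str, float]:
    """Estimate add-one-smoothed call probabilities from normal traces."""
    counts = Counter(call for trace in normal_traces for call in trace)
    total = sum(counts.values()) + len(counts)
    return {call: (n + 1) / total for call, n in counts.items()}

def is_anomalous(trace: list[str], model: dict[str, float],
                 threshold: float = -5.0) -> bool:
    """Flag a trace whose average log-probability per call is too low."""
    floor = min(model.values()) / 100          # penalty for unseen calls
    ll = sum(math.log(model.get(call, floor)) for call in trace)
    return ll / max(len(trace), 1) < threshold

model = train([["open", "read", "write", "close"]] * 10)
print(is_anomalous(["open", "read", "close"], model))        # False: normal
print(is_anomalous(["execve", "socket", "connect"], model))  # True: anomalous
```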

Relevance: 40.00%

Publisher:

Abstract:

The JModels suite consists of a number of models of aspects of the Earth system. They can all be run from the JModels website. They are written in the Java language for maximum portability and are capable of running on most computing platforms, including Windows, MacOS and Unix/Linux. The models are controlled via graphical user interfaces (GUIs), so no knowledge of computer programming is required to run them. The models currently available from the JModels website are:

- Ocean phosphorus cycle
- Ocean nitrogen and phosphorus cycles
- Ocean silicon and phosphorus cycles
- Ocean and atmosphere carbon cycle
- Energy radiation balance model (under development)

The main purpose of the models is to investigate how the material and energy cycles of the Earth system are regulated and controlled by different feedbacks. While the central focus is on these feedbacks and Earth-system stabilisation, the models can also be used in other ways. These resources were developed at the National Oceanography Centre, Southampton, in a project led by Toby Tyrrell and Andrew Yool focusing on how the Earth system works.
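The JModels applications themselves are Java programs driven by GUIs; as a rough illustration of the kind of feedback-regulated balance they explore, here is a zero-dimensional energy-balance sketch in Python (the structure and all parameter values are assumptions for illustration, not taken from the JModels code):

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.30      # planetary albedo (assumed)
EMISSIVITY = 0.61  # effective emissivity, mimicking the greenhouse (assumed)
HEAT_CAP = 4.0e8   # effective heat capacity, J m^-2 K^-1 (assumed)

def step(T: float, dt: float) -> float:
    """One explicit Euler step: absorbed solar input S0*(1-albedo)/4
    minus outgoing emission eps*sigma*T^4 drives the temperature."""
    absorbed = S0 * (1.0 - ALBEDO) / 4.0
    emitted = EMISSIVITY * SIGMA * T ** 4
    return T + dt * (absorbed - emitted) / HEAT_CAP

T = 255.0                      # initial temperature, K
for _ in range(10000):         # ~27 years of daily steps
    T = step(T, dt=86400.0)
print(f"equilibrium temperature = {T:.1f} K")   # ~288 K with these values
```

The negative feedback is visible directly: if T overshoots, emission exceeds absorption and the system cools back toward equilibrium.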

Relevance: 40.00%

Publisher:

Abstract:

Increasingly, distributed systems are being used to host all manner of applications. While these platforms provide a relatively cheap and effective means of executing applications, so far there has been little work on developing tools and utilities that help application developers understand problems with the supporting software or the executing applications. To fully understand why an application executing on a distributed system is not behaving as expected, it is important that not only the application but also the underlying middleware and the operating system are analysed; otherwise issues could be missed, and overall performance profiling and fault diagnosis would certainly be harder to understand. We believe that one approach to profiling and analysing distributed systems and their associated applications is via the plethora of log files generated at runtime. In this paper we report on a system (Slogger) that utilises various emerging Semantic Web technologies to gather the heterogeneous log files generated by the various layers in a distributed system and unify them in a common data store. Once unified, the log data can be queried and visualised in order to highlight potential problems or issues that may be occurring in the supporting software or the application itself.
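A minimal sketch of the Slogger idea, assuming rdflib as the Semantic Web toolkit and an invented log vocabulary: heterogeneous log lines from different layers are lifted into RDF triples in one graph, which can then be queried across layers with SPARQL:

```python
from rdflib import Graph, Literal, Namespace

LOG = Namespace("http://example.org/log#")   # illustrative vocabulary

def ingest(graph: Graph, source: str, lines: list[str]) -> None:
    """Turn 'SEVERITY message' lines from one layer into RDF triples."""
    for i, line in enumerate(lines):
        severity, _, message = line.partition(" ")
        entry = LOG[f"{source}/{i}"]
        graph.add((entry, LOG.layer, Literal(source)))
        graph.add((entry, LOG.severity, Literal(severity)))
        graph.add((entry, LOG.message, Literal(message)))

g = Graph()
ingest(g, "middleware", ["INFO node joined", "ERROR heartbeat lost"])
ingest(g, "application", ["ERROR task 17 failed"])

# One SPARQL query now spans the unified logs of every layer.
for layer, msg in g.query("""
    PREFIX log: <http://example.org/log#>
    SELECT ?layer ?msg WHERE {
        ?e log:severity "ERROR" ; log:layer ?layer ; log:message ?msg .
    }"""):
    print(layer, "->", msg)
```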

Relevance: 40.00%

Publisher:

Abstract:

Where users are interacting in a distributed virtual environment, the actions of each user must be observed by peers with sufficient consistency and within a limited delay so as not to be detrimental to the interaction. The consistency control issue may be split into three parts: update control; the consistent enactment and evolution of events; and causal consistency. The delay in the presentation of events, termed latency, is primarily dependent on the network propagation delay and on the consistency control algorithms. The latency induced by the consistency control algorithm, in particular by causal ordering, is proportional to the number of participants. This paper describes how the effect of network delays may be reduced and introduces a scalable solution that provides sufficient consistency control while minimising its effect on latency. The principles described have been developed at Reading over the past five years. Similar principles are now emerging in the simulation community through the HLA standard. This paper attempts to validate the suggested principles within the schema of distributed simulation and virtual environments, and to compare and contrast them with those described by the HLA definition documents.
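For reference, a minimal sketch of textbook causal ordering with vector clocks, whose per-message state grows with the number of participants; the paper's own scalable algorithm is not reproduced here:

```python
def can_deliver(msg_clock: list[int], sender: int, local: list[int]) -> bool:
    """Causal delivery condition: the message is the next expected one from
    its sender, and we have already seen everything the sender had seen."""
    return (msg_clock[sender] == local[sender] + 1 and
            all(msg_clock[k] <= local[k]
                for k in range(len(local)) if k != sender))

def deliver(msg_clock: list[int], sender: int, local: list[int]) -> None:
    local[sender] = msg_clock[sender]   # advance our view of the sender

local = [0, 0, 0]                       # 3 participants, nothing seen yet
m1 = [1, 0, 0]                          # first event from peer 0
m2 = [1, 1, 0]                          # peer 1's event, sent after seeing m1
print(can_deliver(m2, 1, local))        # False: m1 has not been delivered yet
if can_deliver(m1, 0, local):
    deliver(m1, 0, local)
print(can_deliver(m2, 1, local))        # True: causal order is now satisfied
```

Because each message carries one clock entry per participant, the overhead of this baseline grows linearly with the number of users, which is the scalability pressure the paper addresses.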

Relevance: 40.00%

Publisher:

Abstract:

The magnitude and direction of the coupled feedbacks between the biotic and abiotic components of the terrestrial carbon cycle are a major source of uncertainty in coupled climate–carbon-cycle models [1, 2, 3]. Materially closed, energetically open biological systems continuously and simultaneously allow the two-way feedback loop between the biotic and abiotic components to take place [4, 5, 6, 7], but so far have not been used to their full potential in ecological research, owing to the challenge of achieving sustainable model systems [6, 7]. We show that using materially closed soil–vegetation–atmosphere systems with pro rata carbon amounts for the main terrestrial carbon pools enables the establishment of conditions that balance plant carbon assimilation and autotrophic and heterotrophic respiration fluxes over periods suitable to investigate short-term biotic carbon feedbacks. Using this approach, we tested an alternative way of assessing the impact of increased CO2 and temperature on biotic carbon feedbacks. The results show that without nutrient and water limitations, the short-term biotic responses could potentially buffer a temperature increase of 2.3 °C without significant positive feedbacks to atmospheric CO2. We argue that such closed-system research represents an important test-bed platform for model validation and parameterization of plant and soil biotic responses to environmental changes.
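A minimal sketch of the closed-system flux balance described above: three pools (atmosphere, vegetation, soil) exchange carbon with no transfer across the system boundary, so total carbon is conserved while each pool responds to the balance of fluxes; pool sizes and rate constants are illustrative assumptions only:

```python
def simulate(steps: int = 365, dt: float = 1.0):
    """Closed three-pool carbon system: assimilation, respiration and
    litterfall move carbon between pools, but the total never changes."""
    atm, veg, soil = 8.0, 5.0, 10.0       # pro rata pools, g C (assumed)
    for _ in range(steps):
        gpp = 0.02 * atm                   # assimilation draws on atmosphere
        ra = 0.008 * veg                   # autotrophic respiration
        litter = 0.004 * veg               # litterfall transfers to soil
        rh = 0.005 * soil                  # heterotrophic respiration
        atm += dt * (ra + rh - gpp)
        veg += dt * (gpp - ra - litter)
        soil += dt * (litter - rh)
    return atm, veg, soil

atm, veg, soil = simulate()
print(f"pools: {atm:.2f}, {veg:.2f}, {soil:.2f}; total {atm + veg + soil:.2f}")
```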