948 results for portale, monitoring, web usage mining


Relevance:

30.00%

Publisher:

Abstract:

Affiliation: Département de biochimie, Faculté de médecine, Université de Montréal

Relevance:

30.00%

Publisher:

Abstract:

A growing number of employees now have access to the Internet and to e-mail in the workplace. They are sometimes tempted to use these tools for purposes other than work, which is a potential source of conflict. Indeed, on the pretext of protecting their property and equipment, verifying that employees are fulfilling their obligations, and preventing liability risks, companies increasingly, and sometimes surreptitiously, monitor how the resources they provide are used. Employees, for their part, claim the right to keep their personal online activities private, even when those activities take place during working hours and on the employer's equipment. Can they reasonably expect their rights to be protected, given that the right to privacy is traditionally weakened in the workplace and that companies have access to technologies offering ever more intrusive possibilities for spying? How can a workable balance be struck between the employer's power of direction and control and the rights of employees? Courts are confronted with this issue more and more often, and it regularly leads them to reinterpret the established guidelines on employer surveillance in light of the specific characteristics of information technology. This climate of conflict has also changed employer practices, insofar as a growing number of employers adopt technical and legal tools that allow them to protect themselves against these risks while reserving for themselves a very broad right of intrusion into employees' private lives.

Relevance:

30.00%

Publisher:

Abstract:

For years, choosing the right career by monitoring the trends and prospects of different career paths has been a concern for young people all over the world. In this paper we provide a scientific, data-mining-based method for predicting the job absorption rate and the waiting time needed to reach 100% placement for different engineering courses in India. This will greatly help students in India decide on the right discipline for a bright future. Information about graduated students was obtained from the NTMIS (National Technical Manpower Information System) nodal centre in Kochi, India, located at Cochin University of Science and Technology.
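The abstract does not spell out the prediction model, so the following is only a minimal sketch of how such a waiting-time prediction could look, assuming a simple linear regression over hypothetical cohort placement figures; the column layout and numbers are invented and are not NTMIS data.

```python
# Minimal sketch (not the authors' method): estimating the waiting time to reach
# 100% placement for a course from historical cohort data via linear regression.
# The features and figures below are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical records: for each past cohort of a course,
# (fraction placed after 6 months, fraction placed after 12 months)
# and the number of months it actually took to reach full placement.
X = np.array([
    [0.55, 0.80],
    [0.40, 0.70],
    [0.70, 0.95],
    [0.35, 0.60],
])
months_to_full_placement = np.array([20.0, 26.0, 14.0, 30.0])

model = LinearRegression().fit(X, months_to_full_placement)

# Predict the waiting time for a new cohort with 50% placed at 6 months
# and 75% placed at 12 months.
print(model.predict(np.array([[0.50, 0.75]])))
```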

Relevance:

30.00%

Publisher:

Abstract:

Social bookmark tools are rapidly emerging on the Web. In such systems users set up lightweight conceptual structures called folksonomies. These systems currently provide relatively little structure. In this paper we discuss how association rule mining can be adopted to analyze and structure folksonomies, and how the results can be used for ontology learning and for supporting emergent semantics. We demonstrate our approach on a large-scale dataset stemming from an online system.
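As a minimal illustration of the kind of rules involved (not the paper's actual implementation or dataset), the sketch below mines tag-to-tag association rules from a handful of invented folksonomy posts.

```python
# Minimal sketch: mining simple tag-to-tag association rules (tag A -> tag B)
# from folksonomy posts. Each post is the set of tags one user assigned to one
# resource; the data, support and confidence thresholds below are made up.
from itertools import combinations
from collections import Counter

posts = [
    {"python", "programming", "tutorial"},
    {"python", "web", "django"},
    {"semanticweb", "ontology", "rdf"},
    {"python", "programming", "web"},
    {"ontology", "rdf", "owl"},
]

min_support = 0.4      # fraction of posts containing the tag pair
min_confidence = 0.6   # P(B in post | A in post)

tag_counts = Counter(tag for post in posts for tag in post)
pair_counts = Counter(
    pair for post in posts for pair in combinations(sorted(post), 2)
)

n = len(posts)
for (a, b), count in pair_counts.items():
    support = count / n
    if support < min_support:
        continue
    # Emit rules in both directions when they are confident enough.
    for head, body in ((a, b), (b, a)):
        confidence = count / tag_counts[head]
        if confidence >= min_confidence:
            print(f"{head} -> {body}  support={support:.2f} confidence={confidence:.2f}")
```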

Relevance:

30.00%

Publisher:

Abstract:

Web services from different partners can be combined into applications that realize a more complex business goal. Such applications, built as Web service compositions, define how interactions between Web services take place in order to implement the business logic. Web service compositions not only have to provide the desired functionality but also have to comply with certain Quality of Service (QoS) levels. Maximizing the users' satisfaction, also reflected as Quality of Experience (QoE), is a primary goal to be achieved in a Service-Oriented Architecture (SOA). Unfortunately, in a dynamic environment like SOA, unforeseen situations might arise, such as services becoming unavailable or not responding within the desired time frame. In such situations, appropriate actions need to be triggered in order to avoid the violation of QoS and QoE constraints. In this thesis, solutions are developed to manage Web services and Web service compositions with regard to QoS and QoE requirements. The Business Process Rules Language (BPRules) was developed to manage Web service compositions when undesired QoS or QoE values are detected. BPRules provides a rich set of management actions that may be triggered for controlling the service composition and for improving its quality behavior. Regarding the quality properties, BPRules distinguishes between the QoS values promised by the service providers, the QoE values assigned by end users, the monitored QoS as measured by our BPR framework, and the predicted QoS and QoE values. BPRules facilitates the specification of user groups characterized by different context properties and allows triggering a personalized, context-aware service selection tailored to the specified user groups. In a service market where a multitude of services with the same functionality but different quality values are available, the right services need to be selected for realizing the service composition. We developed new and efficient heuristic algorithms for choosing high-quality services for the composition. BPRules offers the possibility to integrate multiple service selection algorithms. The selection algorithms are also applicable to non-linear objective functions and constraints. The BPR framework includes new approaches for context-aware service selection and quality property prediction. We consider the location of users and services as a context dimension for the prediction of response time and throughput. The BPR framework combines all new features and contributions into a comprehensive management solution. Furthermore, it facilitates flexible monitoring of QoS properties without having to modify the description of the service composition. We show how the different modules of the BPR framework work together in order to execute the management rules. We evaluate our selection algorithms against a genetic algorithm from related research and show that they outperform it. The evaluation also reveals how context data can be used for a personalized prediction of response time and throughput.
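The thesis's own selection heuristics are not detailed in this abstract; purely as an illustration of QoS-aware service selection, here is a small greedy sketch that picks one candidate per task and repairs the choice until an end-to-end response-time constraint is met. All service names and quality values are invented.

```python
# Greedy sketch of QoS-constrained service selection (not the BPR algorithms):
# pick one candidate per task so that total price stays low while the summed
# response time stays under a global constraint.
# Each task maps to candidates of the form (name, price, response_time_ms).
candidates = {
    "payment":  [("payA", 5.0, 300), ("payB", 2.0, 900)],
    "shipping": [("shipA", 4.0, 400), ("shipB", 1.0, 1200)],
    "invoice":  [("invA", 3.0, 200), ("invB", 1.5, 500)],
}
max_total_response_ms = 2000

# Start from the cheapest candidate for every task ...
selection = {task: min(opts, key=lambda s: s[1]) for task, opts in candidates.items()}

def total_response(sel):
    return sum(s[2] for s in sel.values())

# ... then, while the response-time constraint is violated, swap in the alternative
# that saves the most response time per unit of extra price.
while total_response(selection) > max_total_response_ms:
    best_swap, best_ratio = None, 0.0
    for task, opts in candidates.items():
        current = selection[task]
        for alt in opts:
            saved = current[2] - alt[2]
            extra_cost = alt[1] - current[1]
            if saved > 0:
                ratio = saved / max(extra_cost, 1e-9)
                if ratio > best_ratio:
                    best_swap, best_ratio = (task, alt), ratio
    if best_swap is None:
        break  # constraint cannot be met with the available candidates
    selection[best_swap[0]] = best_swap[1]

print(selection, total_response(selection))
```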

Relevance:

30.00%

Publisher:

Abstract:

Co-training is a semi-supervised learning method that is designed to take advantage of the redundancy that is present when the object to be identified has multiple descriptions. Co-training is known to work well when the multiple descriptions are conditionally independent given the class of the object. The presence of multiple descriptions of objects in the form of text, images, audio and video in multimedia applications appears to provide redundancy in a form that may be suitable for co-training. In this paper, we investigate the suitability of utilizing text and image data from the Web for co-training. We perform measurements to find indications of conditional independence in the texts and images obtained from the Web. Our measurements suggest that conditional independence is likely to be present in the data. Our experiments within a relevance feedback framework, designed to test whether a method that exploits the conditional independence outperforms methods that do not, also indicate that better performance can indeed be obtained by designing algorithms that exploit this form of redundancy when it is present.
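To make the co-training loop concrete, the sketch below trains one classifier per view and lets each round's most confident pseudo-label feed the other view's training pool. The two "views" are synthetic stand-ins for text and image features, not the paper's Web data.

```python
# Minimal co-training sketch under the usual two-view assumption (not the paper's
# exact setup): one classifier per view; each round, a view's most confident
# unlabeled prediction is added, with its pseudo-label, to the other view's pool.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

def make_view(y, d=5, noise=1.0):
    # Each view: class-dependent mean plus independent noise (conditional independence).
    return y[:, None] * 2.0 + rng.normal(scale=noise, size=(len(y), d))

y_all = rng.integers(0, 2, 220)
text_view, image_view = make_view(y_all), make_view(y_all)

labeled = list(range(20))          # indices with known labels
unlabeled = list(range(20, 220))   # indices treated as unlabeled

clf_text, clf_image = GaussianNB(), GaussianNB()
train_idx = {0: list(labeled), 1: list(labeled)}          # per-view training pools
pseudo_labels = {i: int(y_all[i]) for i in labeled}

for _ in range(5):  # a few co-training rounds
    for view_id, (clf, X) in enumerate([(clf_text, text_view), (clf_image, image_view)]):
        idx = train_idx[view_id]
        clf.fit(X[idx], np.array([pseudo_labels[i] for i in idx]))
    for view_id, (clf, X) in enumerate([(clf_text, text_view), (clf_image, image_view)]):
        if not unlabeled:
            break
        proba = clf.predict_proba(X[unlabeled])
        best = int(np.argmax(proba.max(axis=1)))          # most confident unlabeled item
        i = unlabeled.pop(best)
        pseudo_labels[i] = int(np.argmax(proba[best]))
        train_idx[1 - view_id].append(i)                  # teach the *other* view
```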

Relevance:

30.00%

Publisher:

Abstract:

Search engines such as Google have been characterized as "databases of intentions". This class will focus on different aspects of intentionality on the web, including goal mining, goal modeling and goal-oriented search.
Readings: M. Strohmaier, M. Lux, M. Granitzer, P. Scheir, S. Liaskos, E. Yu, "How Do Users Express Goals on the Web? An Exploration of Intentional Structures in Web Search", We Know'07 International Workshop on Collaborative Knowledge Management for Web Information Systems, in conjunction with WISE'07, Nancy, France, 2007. [Web link]
Readings: U. Lee, Z. Liu and J. Cho, "Automatic Identification of User Goals in Web Search", WWW '05: Proceedings of the 14th International World Wide Web Conference, pp. 391-400, 2005. [Web link]

Relevance:

30.00%

Publisher:

Abstract:

Speaker(s): Jon Hare
Organiser:
Time: 25/06/2014 11:00-11:50
Location: B32/3077
Abstract: The aggregation of items from social media streams, such as Flickr photos and Twitter tweets, into meaningful groups can help users contextualise and effectively consume the torrents of information on the social web. This task is challenging due to the scale of the streams and the inherently multimodal nature of the information being contextualised. In this talk I'll describe some of our recent work on trend and event detection in multimedia data streams. We focus on scalable streaming algorithms that can be applied to multimedia data streams from the web and the social web. The talk will cover two particular aspects of our work: mining Twitter for trending images by detecting near duplicates, and detecting social events in multimedia data with streaming clustering algorithms. I'll describe our techniques in detail and explore open questions and areas of potential future work in both these tasks.
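As a small, generic illustration of near-duplicate image detection (not the speaker's actual pipeline), the sketch below compares two images with an average hash; it assumes the Pillow library and two hypothetical local image files.

```python
# Minimal sketch of near-duplicate image detection via average hashing.
# This is a generic illustration, not the pipeline described in the talk.
from PIL import Image

def average_hash(path, hash_size=8):
    # Downscale to hash_size x hash_size grayscale and threshold at the mean.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

# Two tweets' images are likely near-duplicates if their hashes are very close.
# The file names below are hypothetical.
h1 = average_hash("tweet_image_1.jpg")
h2 = average_hash("tweet_image_2.jpg")
print("near-duplicate" if hamming(h1, h2) <= 5 else "different")
```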

Relevance:

30.00%

Publisher:

Abstract:

The proposed project is the creation of an electronic platform that brings together the different suppliers involved in the supply chain of the San Antonio logistics community, opening up the possibility of participation for large and small companies alike and, moreover, promoting the creation of such companies by the region's citizens. In this way, the existing asymmetric gaps between supply and demand are removed, allowing medium and large companies to access offers and transactions with medium and small supplier companies. The general objective of the project is to consolidate the procurement processes implemented by companies by organizing and standardizing them through the use of the portal proposed in this project as technological support. Two basic concepts are analyzed theoretically within the project. The first is the logistics-port cluster; such clusters are recognized as important instruments for industrial development, innovation, competitiveness and growth, with the ports of Valencia and Long Beach in Los Angeles taken as examples. The second concept is e-procurement, which follows the basic steps of a traditional supply chain; what generates a real change within the processes, however, is that quotation and supplier follow-up are carried out through an electronic platform, based on the evaluations performed by the companies demanding the products or services offered by the supplier companies (Renko, 2011). Likewise, several e-procurement projects developed worldwide are taken as a comparative basis and as support for this project, such as the following:
HYDRA: a web-based system oriented "in the middle", which gives it a hybrid architecture with both a layered design and a comprehensive structure for developing business integration, collaboration and monitoring in supply chain management (Renko, 2011).
IPT: BidNet has provided bid information services to thousands of suppliers and buyers of goods in the government sector for more than 25 years (Bidnet, 2013).
E-BUYPLACE: E-buyplace.com is the first Supplier Relationship Management specialist to have developed an original and unique SRM delivered 100% over the Internet (e-buyplace, 2013).
RosettaNet: the RosettaNet initiative encourages the optimization of supply chain processes by establishing, implementing and promoting open standards in the e-Business market (AQS, Advance Quality Solutions, 2002).

Relevance:

30.00%

Publisher:

Abstract:

Abstract: A frequent assumption in Social Media is that its open nature leads to a representative view of the world. In this talk we consider bias occurring in the Social Web. We will look at a case study of LiquidFeedback, a direct-democracy platform of the German Pirate Party, as well as models of (non-)discriminating systems. As a conclusion of this talk we stipulate the need for Social Media systems to bias their operation according to social norms and to publish the bias they introduce.
Speaker Biography: Prof Steffen Staab. Steffen studied computer science and computational linguistics in Erlangen (Germany), Philadelphia (USA) and Freiburg (Germany). Afterwards he worked as a researcher at the University of Stuttgart/Fraunhofer and the University of Karlsruhe, before becoming a professor in Koblenz (Germany). Since March 2015 he has also held a chair for Web and Computer Science at the University of Southampton, sharing his time between here and Koblenz. In his research career he has managed to avoid almost all the good advice that he now gives to his team members. Such advice includes focusing on research (vs. a company) or concentrating on only one or two research areas (vs. considering ontologies, semantic web, social web, data engineering, text mining, peer-to-peer, multimedia, HCI, services, software modelling and programming, and more). Though, actually, improving how we understand and use text and data is a good common denominator for a lot of Steffen's professional activities.

Relevance:

30.00%

Publisher:

Abstract:

Abstract based on that of the publication.

Relevance:

30.00%

Publisher:

Abstract:

Many producers of geographic information are now disseminating their data using open web service protocols, notably those published by the Open Geospatial Consortium. There are many challenges inherent in running robust and reliable services at reasonable cost. Cloud computing provides a new kind of scalable infrastructure that could address many of these challenges. In this study we implement a Web Map Service for raster imagery within the Google App Engine environment. We discuss the challenges of developing GIS applications within this framework and the performance characteristics of the implementation. Results show that the application scales well to multiple simultaneous users and that performance will be adequate for many applications, although concerns remain over issues such as latency spikes. We discuss the feasibility of implementing services within the free usage quotas of Google App Engine and the possibility of extending the approaches in this paper to other GIS applications.
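The paper's service ran on Google App Engine; purely as a generic illustration of what the core of a WMS GetMap endpoint involves (not the paper's implementation), here is a minimal sketch using Flask and Pillow, with a placeholder renderer standing in for a real raster store.

```python
# Generic sketch of a WMS GetMap endpoint: parse the standard OGC parameters and
# return a rendered image. The "rendering" here is just a blank PNG; a real
# service would read the requested BBOX from its imagery store.
import io
from flask import Flask, request, send_file
from PIL import Image

app = Flask(__name__)

@app.route("/wms")
def get_map():
    if request.args.get("REQUEST", "").lower() != "getmap":
        return "Only GetMap is sketched here", 400
    width = int(request.args.get("WIDTH", 256))
    height = int(request.args.get("HEIGHT", 256))
    # BBOX=minx,miny,maxx,maxy in the coordinate system given by CRS/SRS.
    bbox = [float(v) for v in request.args.get("BBOX", "0,0,1,1").split(",")]

    image = Image.new("RGB", (width, height), "white")  # placeholder rendering of bbox
    buf = io.BytesIO()
    image.save(buf, format="PNG")
    buf.seek(0)
    return send_file(buf, mimetype="image/png")

# Example request:
# /wms?SERVICE=WMS&REQUEST=GetMap&LAYERS=imagery&CRS=EPSG:4326
#     &BBOX=-10,50,2,60&WIDTH=256&HEIGHT=256&FORMAT=image/png
```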

Relevance:

30.00%

Publisher:

Abstract:

Digital video photography, computer image analysis and physical measurements have been used to monitor sedimentation rates, coral cover, genus richness, rugosity and estimated recruitment dates of massive corals at three different sites in the Wakatobi Marine National Park, Indonesia, and on the reefs around Discovery Bay, Jamaica. Semi-structured interviews with key stakeholders in the Wakatobi Marine National Park indicated that coral mining is extensively practised and is responsible for the absence of large non-branching corals on the Sampela reef. Blast fishing is also practised in the Wakatobi Marine National Park, and the authors, together with students, showed that blast fishing resulted in coral bleaching, rather than mortality, of two Porites lutea colonies. In addition, we showed that monitoring bleaching in Porites colonies induced by blast fishing could be a useful way of monitoring blast fishing practices in susceptible areas of the Indo-Pacific. The techniques used in this study are appropriate for use by volunteers with sufficient training, and provide excellent projects for undergraduate dissertation students.

Relevance:

30.00%

Publisher:

Abstract:

A wireless sensor network (WSN) is a group of sensors linked by a wireless medium to perform distributed sensing tasks. WSNs have attracted wide interest from academia and industry alike due to their diversity of applications, including home automation, smart environments, and emergency services in various buildings. The primary goal of a WSN is to collect the data sensed by its sensors. These data are characteristically noisy and exhibit temporal and spatial correlation. As this paper will demonstrate, extracting useful information from such data requires a variety of analysis techniques. Data mining is a process in which a wide spectrum of data analysis methods is used; it is applied in this paper to analyse data collected from WSNs monitoring an indoor environment in a building. A case study is given to demonstrate how data mining can be used to optimise the use of office space in a building.
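As one possible illustration of this kind of analysis (not the paper's case study), the sketch below clusters rooms by their hourly occupancy profiles, as might be derived from motion-sensor counts, to flag under-used space; the sensor readings are synthetic stand-ins for real WSN data.

```python
# Minimal sketch: k-means clustering of per-room hourly occupancy profiles to
# identify under-used office space. All readings below are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
hours = 10  # working hours per day

# Rows = rooms, columns = average occupancy count per hour of the working day.
busy_rooms = rng.poisson(lam=6, size=(8, hours))
quiet_rooms = rng.poisson(lam=1, size=(4, hours))
occupancy = np.vstack([busy_rooms, quiet_rooms]).astype(float)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(occupancy)

# The cluster with the lower mean occupancy flags candidate rooms for consolidation.
means = [occupancy[kmeans.labels_ == k].mean() for k in range(2)]
underused = np.where(kmeans.labels_ == int(np.argmin(means)))[0]
print("Under-used rooms (row indices):", underused)
```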