927 results for User Profiling
Abstract:
Handling information overload online, from the user's point of view, is a big challenge, especially when the number of websites is growing rapidly due to growth in e-commerce and other related activities. Personalization based on user needs is the key to solving the problem of information overload. Personalization methods help in identifying relevant information, which may be liked by a user. User profiles and object profiles are the important elements of a personalization system. When creating user and object profiles, most of the existing methods adopt two-dimensional similarity methods based on vector or matrix models in order to find inter-user and inter-object similarity. Moreover, for recommending similar objects to users, personalization systems use users-users, items-items and users-items similarity measures. In most cases, similarity measures such as Euclidean, Manhattan, cosine and many others based on vector or matrix methods are used to find the similarities. Web logs are high-dimensional datasets, consisting of multiple users and multiple searches, with many attributes for each. Two-dimensional data analysis methods may often overlook latent relationships that exist between users and items. In contrast to other studies, this thesis utilises tensors, which are high-dimensional data models, to build user and object profiles and to find the inter-relationships between users-users and users-items. To create an improved personalized Web system, this thesis proposes to build three types of profiles: individual user, group user and object profiles, utilising decomposition factors of tensor data models. A hybrid recommendation approach utilising group profiles (forming the basis of a collaborative filtering method) and object profiles (forming the basis of a content-based method) in conjunction with individual user profiles (forming the basis of a model-based approach) is proposed for making effective recommendations. A tensor-based clustering method is proposed that utilises the outcomes of popular tensor decomposition techniques such as PARAFAC, Tucker and HOSVD to group similar instances. An individual user profile, showing the user's highest interest, is represented by the top dimension values extracted from the component matrix obtained after tensor decomposition. A group profile, showing similar users and their highest interest, is built by clustering similar users based on tensor-decomposed values. A group profile is represented by the top association rules (containing various unique object combinations) that are derived from the searches made by the users of the cluster. An object profile is created to represent similar objects clustered on the basis of the similarity of their features. Depending on the category of a user (known, anonymous or frequent visitor to the website), any of the profiles or their combinations is used for making personalized recommendations. A ranking algorithm is also proposed that utilises the personalized information to order and rank the recommendations. The proposed methodology is evaluated on data collected from a real-life car website. Empirical analysis confirms the effectiveness of recommendations made by the proposed approach over other collaborative filtering and content-based recommendation approaches based on two-dimensional data analysis methods.
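To make the profiling idea concrete, the following minimal sketch (not the thesis's implementation) shows how user profiles could be derived from a Tucker/HOSVD-style decomposition of a users x items x attributes usage tensor and then clustered into groups. The libraries (tensorly, scikit-learn), tensor shape, ranks and data are illustrative assumptions.

```python
# A minimal sketch of building user profiles from a Tucker decomposition of a
# (users x items x attributes) usage tensor. All shapes and ranks are assumed.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker
from sklearn.cluster import KMeans

# Hypothetical web-log tensor: users x items x search attributes (counts).
usage = np.random.poisson(lam=0.3, size=(50, 40, 10)).astype(float)
tensor = tl.tensor(usage)

# Tucker decomposition into a core tensor and one factor matrix per mode.
core, (user_f, item_f, attr_f) = tucker(tensor, rank=[5, 5, 3])

# Individual user profile: the latent dimensions with the highest loadings.
def individual_profile(u, top_k=2):
    return np.argsort(-np.abs(user_f[u]))[:top_k]

# Group profiles: cluster users on their decomposed factor rows.
groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(user_f)

print("user 0 top latent dims:", individual_profile(0))
print("user 0 assigned to group:", groups[0])
```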
Abstract:
This paper reports profiling information for speeding offenders and is part of a larger project that assessed the deterrent effects of increased speeding penalties in Queensland, Australia, using a total of 84,456 speeding offences. The speeding offenders were classified into three groups based on the extent and severity of an index offence: once-only low-range offenders; repeat high-range offenders; and other offenders. The three groups were then compared in terms of personal characteristics, traffic offences, crash history and criminal history. Results revealed a number of significant differences between repeat high-range offenders and those in the other two offender groups. Repeat high-range speeding offenders were more likely than drivers in the other two offender groups to be male, to be younger, to hold a provisional and a motorcycle licence, to have committed a range of previous traffic offences, to have a significantly greater likelihood of crash involvement, and to have been involved in multiple-vehicle crashes. Additionally, when a subset of offenders' criminal histories was examined, results revealed that repeat high-range speeding offenders were also more likely to have committed a previous criminal offence compared to once-only low-range and other offenders, and that 55.2% of the repeat high-range offenders had a criminal history. They were also significantly more likely to have committed drug offences and offences against order than the once-only low-range speeding offenders, and significantly more likely to have committed regulation offences than those in the other offenders group. Overall, the results indicate that speeding offenders are not a homogeneous group and that, therefore, more tailored and innovative sanctions should be considered and evaluated for high-range recidivist speeders because they are a high-risk road user group.
Abstract:
The current state of the prefabricated housing market in Australia is systematically profiled, guided by a theoretical systems model. Particular focus is given to two original data collections. The first identifies manufacturers and builders using prefabrication innovations, and the second compares the context for prefabricated housing in Australia with that of key international jurisdictions. The results indicate a small but growing market for prefabricated housing in Australia, often building upon expertise developed through non-residential building applications. The international comparison highlighted the complexity of the interactions between macro policy decisions and historical influences and the uptake of prefabricated housing. The data suggest that factors such as the small scale of the Australian market and a lack of investment in research, development and training have not encouraged prefabrication. A lack of clear regulatory policy surrounding prefabricated housing is common both in Australia and internationally, with local effects regarding home warranties and housing finance highlighted. Future research should target the continuing lack of consideration of prefabrication from within the housing construction industry, and build upon the research reported in this paper to further quantify the potential end-user market and the continuing development of the industry.
Abstract:
In recommender systems based on multidimensional data, additional metadata provides algorithms with more information for better understanding the interaction between users and items. However, most profiling approaches in neighbourhood-based recommendation for multidimensional data merely split or project the dimensional data and do not consider the latent interaction between the dimensions of the data. In this paper, we propose a novel user/item profiling approach for Collaborative Filtering (CF) item recommendation on multidimensional data. We further present an incremental profiling method for updating the profiles. For item recommendation, we seek to delve into different types of relations in the data to understand the interaction between users and items more fully, and propose three multidimensional CF recommendation approaches for top-N item recommendations based on the proposed user/item profiles. The proposed multidimensional CF approaches are capable of incorporating not only localized relations of user-user and/or item-item neighbourhoods but also latent interaction between all dimensions of the data. Experimental results show significant improvements in terms of recommendation accuracy.
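As a rough illustration of the neighbourhood-based top-N scoring that such approaches build on (not the paper's specific algorithm), the sketch below ranks items for a user from profile-vector similarities; all data, shapes and parameters are assumed.

```python
# Hypothetical sketch: top-N item recommendation from latent user profiles
# via a user-user neighbourhood. Profiles and interactions are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 100, 200, 8
user_profiles = rng.normal(size=(n_users, k))        # e.g. from a factorisation
interactions = (rng.random((n_users, n_items)) > 0.97).astype(float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def top_n(user, n=10, n_neighbours=15):
    # Rank other users by profile similarity, then score items by how often the
    # nearest neighbours interacted with them, excluding items already seen.
    sims = np.array([cosine(user_profiles[user], user_profiles[v]) for v in range(n_users)])
    sims[user] = -np.inf
    neighbours = np.argsort(-sims)[:n_neighbours]
    scores = sims[neighbours] @ interactions[neighbours]
    scores[interactions[user] > 0] = -np.inf
    return np.argsort(-scores)[:n]

print(top_n(0))
```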
Abstract:
Service-oriented architecture is gaining momentum. However, in order to be successful, a proper and up-to-date description of services is required. Such a description may be provided by service profiling mechanisms, such as the one presented in this article. A service profile can be defined as an up-to-date description of a subset of non-functional properties of a service. It allows for service comparison on the basis of non-functional parameters and for choosing the service which is most suited to the needs of a user. In this article the notion of a service profile, along with a service profiling mechanism, is presented, as well as the architecture of a profiling system. © 2006 IEEE.
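A toy illustration of the idea, assuming nothing about the article's actual schema: a service profile as a small record of non-functional properties, compared via a user-weighted score. All field names and weights are assumptions.

```python
# Illustrative only: service profiles of non-functional properties and
# selection of the service best matching a user's weighted preferences.
from dataclasses import dataclass

@dataclass
class ServiceProfile:
    name: str
    avg_response_ms: float   # lower is better
    availability: float      # 0..1, higher is better
    cost_per_call: float     # lower is better

def score(p: ServiceProfile, weights: dict) -> float:
    # Combine the non-functional parameters into a single comparable score.
    return (weights["availability"] * p.availability
            - weights["latency"] * p.avg_response_ms / 1000.0
            - weights["cost"] * p.cost_per_call)

profiles = [
    ServiceProfile("svc-a", avg_response_ms=120, availability=0.999, cost_per_call=0.002),
    ServiceProfile("svc-b", avg_response_ms=45, availability=0.990, cost_per_call=0.010),
]
prefs = {"availability": 1.0, "latency": 0.5, "cost": 2.0}
best = max(profiles, key=lambda p: score(p, prefs))
print("best match:", best.name)
```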
Abstract:
Energy efficiency is an essential requirement for all contemporary computing systems. We thus need tools to measure the energy consumption of computing systems and to understand how workloads affect it. Significant recent research effort has targeted direct power measurements on production computing systems using on-board sensors or external instruments. These direct methods have in turn guided studies of software techniques to reduce energy consumption via workload allocation and scaling. Unfortunately, direct energy measurements are hampered by the low power sampling frequency of power sensors. The coarse granularity of power sensing limits our understanding of how power is allocated in systems and our ability to optimize energy efficiency via workload allocation.
We present ALEA, a tool to measure power and energy consumption at the granularity of basic blocks, using a probabilistic approach. ALEA provides fine-grained energy profiling via statistical sampling, which overcomes the limitations of power sensing instruments. Compared to state-of-the-art energy measurement tools, ALEA provides finer granularity without sacrificing accuracy. ALEA achieves low overhead energy measurements with mean error rates between 1.4% and 3.5% in 14 sequential and parallel benchmarks tested on both Intel and ARM platforms. The sampling method caps execution time overhead at approximately 1%. ALEA is thus suitable for online energy monitoring and optimization. Finally, ALEA is a user-space tool with a portable, machine-independent sampling method. We demonstrate two use cases of ALEA, where we reduce the energy consumption of a k-means computational kernel by 37% and an ocean modelling code by 33%, compared to high-performance execution baselines, by varying the power optimization strategy between basic blocks.
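The following simplified sketch illustrates the general sampling idea behind block-level energy attribution, not ALEA itself: periodically sample which code region is executing together with an instantaneous power reading and accumulate energy per region. The sampler, power source and sampling rate are simulated assumptions.

```python
# Hypothetical illustration of statistical sampling for energy attribution:
# energy is attributed to each code region in proportion to its sampled share
# of execution time, multiplied by the power observed at each sample.
import random
from collections import defaultdict

def sample_region():            # stand-in for reading the PC / basic block
    return random.choices(["block_A", "block_B", "block_C"], weights=[6, 3, 1])[0]

def sample_power_watts():       # stand-in for an on-board power sensor
    return random.uniform(10.0, 14.0)

period_s = 0.001                # assumed 1 kHz sampling period
energy_j = defaultdict(float)
for _ in range(10_000):
    region = sample_region()
    energy_j[region] += sample_power_watts() * period_s   # E = P * dt per sample

total = sum(energy_j.values())
for region, e in sorted(energy_j.items()):
    print(f"{region}: {e:.2f} J ({100 * e / total:.1f}%)")
```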
Abstract:
Keywords: Internet Traffic, Internet Applications, Internet Attacks, Traffic Profiling, Multi-Scale Analysis. Abstract: Nowadays, the Internet can be seen as an ever-changing platform where new and different types of services and applications are constantly emerging. In fact, many of the existing dominant applications, such as social networks, have appeared recently, being rapidly adopted by the user community. All these new applications required the implementation of novel communication protocols that present different network requirements, according to the service they deploy. All this diversity and novelty has led to an increasing need to accurately profile Internet users, by mapping their traffic to the originating application, in order to improve many network management tasks such as resource optimization, network performance, service personalization and security. However, accurately mapping traffic to its originating application is a difficult task due to the inherent complexity of existing network protocols and to several restrictions that prevent the analysis of the contents of the generated traffic. In fact, many technologies, such as traffic encryption, are widely deployed to assure and protect the confidentiality and integrity of communications over the Internet. On the other hand, many legal constraints also forbid the analysis of the clients' traffic in order to protect their confidentiality and privacy. Consequently, novel traffic discrimination methodologies are necessary for accurate traffic classification and user profiling. This thesis proposes several identification methodologies for accurate Internet traffic profiling while coping with the different mentioned restrictions and with the existing encryption techniques. By analyzing the several frequency components present in the captured traffic and inferring the presence of the different network and user related events, the proposed approaches are able to create a profile for each one of the analyzed Internet applications. The use of several probabilistic models will allow the accurate association of the analyzed traffic to the corresponding application. Several enhancements will also be proposed in order to allow the identification of hidden illicit patterns and the real-time classification of captured traffic. In addition, a new network management paradigm for wired and wireless networks will be proposed. The analysis of layer 2 traffic metrics and the different frequency components that are present in the captured traffic allows an efficient user profiling in terms of the used web application. Finally, some usage scenarios for these methodologies will be presented and discussed.
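As a rough sketch of the frequency-analysis idea described above (not the thesis's classifier), the example below turns a per-interval packet-count series into a normalised magnitude spectrum and matches it against stored application profiles by spectral distance; all series and profiles are synthetic assumptions.

```python
# Hypothetical frequency-domain traffic profiling: compare the spectrum of an
# observed packet-count series against stored per-application spectral profiles.
import numpy as np

def spectrum(counts):
    # Magnitude spectrum of the mean-removed packet-count series, normalised so
    # that flows with different volumes remain comparable.
    mag = np.abs(np.fft.rfft(counts - np.mean(counts)))
    return mag / (np.linalg.norm(mag) + 1e-9)

rng = np.random.default_rng(1)
t = np.arange(256)
profiles = {
    "streaming": spectrum(50 + 10 * np.sin(2 * np.pi * t / 8) + rng.normal(0, 1, 256)),
    "web":       spectrum(rng.poisson(5, 256).astype(float)),
}

observed = spectrum(50 + 10 * np.sin(2 * np.pi * t / 8) + rng.normal(0, 2, 256))
best = min(profiles, key=lambda app: np.linalg.norm(profiles[app] - observed))
print("observed traffic most resembles:", best)
```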
Abstract:
Context: Learning can be regarded as knowledge construction in which prior knowledge and experience serve as the basis for learners to expand their knowledge base. Such a process of knowledge construction has to take place continuously in order to enhance the learners' competence in a competitive working environment. As information consumers, individual users demand personalised information provision which meets their own specific purposes, goals, and expectations. Objectives: The current methods in requirements engineering are capable of modelling the common user's behaviour in the domain of knowledge construction. The users' requirements can be represented as a case in the defined structure, which can be reasoned over to enable requirements analysis. Such analysis needs to be enhanced so that personalised information provision can be tackled and modelled. However, there is a lack of suitable modelling methods to achieve this end. This paper presents a new ontological method for capturing individual users' requirements and transforming the requirements into personalised information provision specifications. Hence the right information can be provided to the right user for the right purpose. Method: An experiment was conducted based on the qualitative method. A medium-sized group of users participated to validate the method and its techniques, i.e. articulates, maps, configures, and learning content. The results were used as feedback for improvement. Result: The research work has produced an ontology model with a set of techniques which support the functions for profiling users' requirements, reasoning about requirements patterns, generating workflows from norms, and formulating information provision specifications. Conclusion: The current requirements engineering approaches provide the methodical capability for developing solutions. Our research outcome, i.e. the ontology model with the techniques, can further enhance requirements engineering approaches for modelling the individual user's needs and discovering the user's requirements.
Abstract:
Media content distribution on-demand becomes more complex when performed on a mass scale involving various channels with distinct and dynamic network characteristics, and deploying a variety of terminal devices offering a wide range of capabilities. It is practically impossible to create and prepackage various static versions of the same content to match all the varying demand parameters of clients for various contexts. In this paper we present a profiling management approach for dynamically personalised media content delivery on-demand, integrated with the AXMEDIS Framework. The client profiles comprise the representation of User, Device, Network and Context of content delivery based on MPEG-21 DIA. Although the most challenging proving ground for this personalised content delivery has been the mobile testbed, i.e. distribution to mobile handsets, the framework described here can be deployed for distribution, by the AXMEDIS PnP module, through other channels, e.g. satellite and Internet, to a range of client terminals, e.g. desktops, kiosks, IPTV and other terminals, whose baseline terminal capabilities can be made available by the manufacturers as is normal practice.
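Purely as an illustration of the profile structure described (not the AXMEDIS implementation), the sketch below groups User, Device, Network and Context descriptors into a client profile and makes a trivial rendition choice from it; all field names and thresholds are assumptions.

```python
# Toy client profile in the spirit of an MPEG-21 DIA usage environment
# description, plus a simple delivery-time adaptation decision.
from dataclasses import dataclass

@dataclass
class ClientProfile:
    user_lang: str          # User preference
    screen_width: int       # Device capability (pixels)
    bandwidth_kbps: int     # Network characteristic
    context: str            # e.g. "mobile", "desktop", "kiosk"

def choose_rendition(p: ClientProfile) -> str:
    # Pick a rendition that fits the device and network at delivery time.
    if p.bandwidth_kbps < 500 or p.screen_width < 480:
        return f"low-res ({p.user_lang})"
    if p.bandwidth_kbps < 3000:
        return f"sd ({p.user_lang})"
    return f"hd ({p.user_lang})"

print(choose_rendition(ClientProfile("en", 360, 400, "mobile")))
print(choose_rendition(ClientProfile("pt", 1920, 8000, "desktop")))
```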