360 results for gravitational search algorithm


Relevance: 20.00%

Abstract:

The work described in this technical report is part of an ongoing project at QUT to build practical tools for the manipulation, analysis and visualisation of recordings of the natural environment. This report describes the algorithm we use to cluster the spectra in a spectrogram. The report begins with a brief description of the signal processing that prepares the spectrograms.

Relevance: 20.00%

Abstract:

In this paper, we propose a semi-supervised approach to anomaly detection in Online Social Networks. The social network is modeled as a graph and its features are extracted to detect anomalies. A clustering algorithm is then used to group users based on these features, and fuzzy logic is applied to assign a degree of anomalous behavior to the users in these clusters. Empirical analysis shows the effectiveness of this method.
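The pipeline described (graph features, clustering, fuzzy degrees) can be sketched in pure Python. This is an illustrative stand-in only: the paper's actual features, clustering algorithm and fuzzy rules are not specified here, so node degree, a tiny 1-D two-means grouping and a linear membership function are assumed.

```python
# Hedged sketch: degree as the single feature, 1-D two-means grouping,
# and a linear fuzzy membership are stand-ins for the paper's choices.

def degree_features(edges):
    """Per-user feature: node degree in the social graph."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return deg

def two_means_1d(values, iters=20):
    """Tiny 1-D 2-means: split users into 'typical' and 'outlying' groups."""
    c = [min(values), max(values)]
    for _ in range(iters):
        groups = ([], [])
        for x in values:
            groups[abs(x - c[1]) < abs(x - c[0])].append(x)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return c

def fuzzy_anomaly_degree(x, centers):
    """Membership in the 'anomalous' cluster, clipped to [0, 1]."""
    lo, hi = sorted(centers)
    if hi == lo:
        return 0.0
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

edges = [("a", "b"), ("a", "c"), ("b", "c"), ("d", "a"),
         ("e", "a"), ("f", "a"), ("g", "a")]  # "a" is a hub
deg = degree_features(edges)
centers = two_means_1d(list(deg.values()))
scores = {u: fuzzy_anomaly_degree(d, centers) for u, d in deg.items()}
```

The fuzzy score grades membership smoothly instead of a hard normal/anomalous split, which is the point of the fuzzy-logic step in the abstract.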

Relevance: 20.00%

Abstract:

In the real world there are many problems in networks of networks (NoNs) that can be abstracted to a so-called minimum interconnection cut problem, which is fundamentally different from the classical minimum cut problems of graph theory. It is therefore desirable to have an efficient and effective algorithm for the minimum interconnection cut problem. In this paper we formulate the problem in graph-theoretic terms, transform it into a multi-objective and multi-constraint combinatorial optimization problem, and propose a hybrid genetic algorithm (HGA) for it. The HGA is a penalty-based genetic algorithm (GA) that incorporates an effective heuristic procedure to locally optimize the individuals in the GA's population. The HGA has been implemented and evaluated in experiments, and the results show that it is effective and efficient.
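The two ingredients the abstract names, a penalty-based fitness and a local-optimization heuristic, can be sketched on an invented toy problem (minimise the number of selected edges subject to selecting at least two). This is not the paper's formulation; the cost, constraint and hill-climbing step below are illustrative assumptions.

```python
# Hedged sketch of a penalty-based GA's building blocks, on a toy problem.

def penalised_fitness(cost, violations, weight=1000.0):
    """Penalty method: infeasible solutions stay in the population but score badly."""
    return cost + weight * violations

def evaluate(bits):
    cost = sum(bits)                    # toy stand-in for "cut cost"
    violations = max(0, 2 - sum(bits))  # toy constraint: select at least two
    return penalised_fitness(cost, violations)

def local_improve(bits):
    """The 'heuristic procedure': best single bit-flip, applied once."""
    best, best_f = bits, evaluate(bits)
    for i in range(len(bits)):
        cand = bits[:i] + [1 - bits[i]] + bits[i + 1:]
        f = evaluate(cand)
        if f < best_f:
            best, best_f = cand, f
    return best

improved = local_improve([1, 1, 1, 1])  # drops one redundant selection
```

In a full HGA this local step would run on each offspring before selection, so the GA explores globally while the heuristic polishes individuals locally.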

Relevance: 20.00%

Abstract:

In this article, we investigate experimentally whether people search optimally and how price promotions influence search behaviour. We implement a sequential search task with exogenous price dispersion in a baseline treatment and introduce discounts in two experimental treatments. We find that search behaviour is roughly consistent with optimal search but also observe some discount biases. If subjects do not know in advance where discounts are offered, the purchase probability is increased by 19 percentage points in shops with discounts, even after controlling for the benefit of the discount and for risk preferences. If consumers know in advance where discounts are given, then the bias is only weakly significant and much smaller (7 percentage points).
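The optimal-search benchmark such experiments compare against is a reservation-price rule. As a worked illustration (the uniform price distribution and the search cost used below are assumptions, not the paper's design), prices drawn from U(0, 1) with per-search cost c give a reservation price r solving c = E[max(r - p, 0)] = r^2 / 2, i.e. r = sqrt(2c): keep searching while the best observed price exceeds r.

```python
# Illustrative benchmark under assumed U(0, 1) prices and search cost c.
import math

def reservation_price(search_cost):
    """Solves c = r^2 / 2, the stopping threshold for prices uniform on [0, 1]."""
    return math.sqrt(2 * search_cost)

def should_keep_searching(best_price_so_far, search_cost):
    """Optimal sequential rule: search on only if the best price exceeds r."""
    return best_price_so_far > reservation_price(search_cost)

r = reservation_price(0.02)  # with c = 0.02, r = 0.2
```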

Relevance: 20.00%

Abstract:

Participation in extreme sports continues to grow, yet there is still little understanding of participant motivations in such sports. The purpose of this paper is to report on one aspect of motivation in extreme sports: the search for freedom. The study utilized a hermeneutic phenomenological methodology. Fifteen international extreme sport participants, involved in sports such as BASE jumping, big wave surfing, extreme mountaineering, extreme skiing, rope-free climbing and waterfall kayaking, were interviewed about their experience of participating in an extreme sport. Results reveal six elements of freedom: freedom from constraints, freedom as movement, freedom as letting go of the need for control, freedom as the release of fear, freedom as being at one, and finally freedom as choice and responsibility. The findings reveal that motivations in extreme sport do not simply mirror traditional images of risk taking and adrenaline, and that they also include an exploration of the ways in which humans seek fundamental human values.

Relevance: 20.00%

Abstract:

Migraine is a complex familial condition that imparts a significant burden on society. There is evidence for a role of genetic factors in migraine, and elucidating the genetic basis of this disabling condition remains the focus of much research. In this review we discuss the results of genetic studies to date, from the discovery of the role of neural ion channel gene mutations in familial hemiplegic migraine (FHM) to linkage analyses and candidate gene studies in the more common forms of migraine. The success achieved in FHM in identifying genetic defects associated with the disorder remains elusive in common migraine, where causative genes have not yet been identified. We therefore suggest additional approaches for analysing the genetic basis of this disorder. The continuing search for migraine genes may aid a greater understanding of the mechanisms that underlie the disorder and potentially lead to significant diagnostic and therapeutic applications.

Relevance: 20.00%

Abstract:

Entity-oriented retrieval aims to return a list of relevant entities rather than documents, to provide exact answers to user queries. The nature of entity-oriented retrieval requires identifying the semantic intent of user queries, i.e., understanding the semantic role of query terms and determining the semantic categories which indicate the class of target entities. Existing methods are not able to exploit the semantic intent by capturing the semantic relationship between terms in a query and in a document that contains entity-related information. To improve the understanding of the semantic intent of user queries, we propose a concept-based retrieval method that not only automatically identifies the semantic intent of user queries, i.e., the Intent Type and Intent Modifier, but also introduces concepts represented by Wikipedia articles into user queries. We evaluate our proposed method on entity profile documents annotated with concepts from Wikipedia's category and list structure. Empirical analysis reveals that the proposed method outperforms several state-of-the-art approaches.

Relevance: 20.00%

Abstract:

Cloud computing is an emerging computing paradigm in which IT resources are provided over the Internet as a service to users. One such service offered through the Cloud is Software as a Service, or SaaS. SaaS can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. SaaS is receiving substantial attention today from both software providers and users, and analyst firms predict a positive future market for it. This raises new challenges for providers managing SaaS, especially in large-scale data centres like the Cloud. One of these challenges is managing Cloud resources for SaaS in a way that maintains SaaS performance while optimising resource use. Extensive research on the resource optimisation of Cloud services has not yet addressed the challenges of managing resources for composite SaaS. This research addresses this gap by focusing on three new problems of composite SaaS: placement, clustering and scalability. The overall aim is to develop efficient and scalable mechanisms that facilitate the delivery of high-performance composite SaaS to users while optimising the resources used. All three problems are characterised as highly constrained, large-scale and complex combinatorial optimisation problems; therefore, evolutionary algorithms are adopted as the main technique for solving them. The first research problem concerns how a composite SaaS is placed onto Cloud servers to optimise its performance while satisfying the SaaS resource and response time constraints. Existing research on this problem often ignores the dependencies between components and considers placement of a homogeneous type of component only. A precise formulation of the composite SaaS placement problem is presented.
A classical genetic algorithm and two versions of cooperative co-evolutionary algorithms are designed to manage the placement of heterogeneous types of SaaS components together with their dependencies, requirements and constraints. Experimental results demonstrate the efficiency and scalability of these new algorithms. In the second problem, SaaS components are assumed to be already running on Cloud virtual machines (VMs). However, due to the dynamic environment of a Cloud, the current placement may need to be modified. Existing techniques have focused mostly on the infrastructure level rather than the application level. This research addresses the problem at the application level by clustering suitable components onto VMs to optimise the resources used and to maintain SaaS performance. Two versions of grouping genetic algorithms (GGAs) are designed to cater for the structural grouping of a composite SaaS. The first GGA uses a repair-based method, while the second uses a penalty-based method to handle the problem constraints. The experimental results confirm that the GGAs consistently produce a better reconfiguration placement plan than a common heuristic for clustering problems. The third research problem deals with the replication or deletion of SaaS instances to cope with the SaaS workload. Determining a scaling plan that minimises the resources used while maintaining SaaS performance is a critical task. Additionally, the problem involves constraints and interdependencies between components, making solutions even more difficult to find. A hybrid genetic algorithm (HGA) was developed to solve this problem by exploring the problem search space through its genetic operators and fitness function to determine the SaaS scaling plan. The HGA also uses the problem's domain knowledge to ensure that solutions meet the problem's constraints and achieve its objectives.
The experimental results demonstrate that the HGA consistently outperforms a heuristic algorithm by achieving a low-cost scaling and placement plan. This research has identified three significant new problems for composite SaaS in the Cloud. Various types of evolutionary algorithms have been developed to address these problems, contributing to the evolutionary computation field. The algorithms provide solutions for efficient resource management of composite SaaS in the Cloud that result in a low total cost of ownership for users while guaranteeing SaaS performance.
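The two constraint-handling styles named for the grouping GAs, repair-based and penalty-based, can be contrasted in a small sketch. The VM capacity, component loads and the greedy repair policy below are illustrative assumptions, not the thesis's actual operators.

```python
# Hedged sketch: repair-based vs penalty-based constraint handling for
# a component-to-VM grouping. Loads and capacities are invented.

def vm_loads(assignment, load):
    """Total load placed on each VM."""
    used = {}
    for comp, vm in assignment.items():
        used[vm] = used.get(vm, 0) + load[comp]
    return used

def repair(assignment, load, capacity):
    """Repair-based handling: move components off over-capacity VMs."""
    used = vm_loads(assignment, load)
    next_vm = max(assignment.values()) + 1
    for comp, vm in list(assignment.items()):
        if used[vm] > capacity:
            for other in range(next_vm + 1):
                if other != vm and used.get(other, 0) + load[comp] <= capacity:
                    used[vm] -= load[comp]
                    used[other] = used.get(other, 0) + load[comp]
                    assignment[comp] = other
                    break
    return assignment

def penalised_cost(assignment, load, capacity, weight=100.0):
    """Penalty-based handling: count VMs used, penalise any overflow."""
    used = vm_loads(assignment, load)
    overflow = sum(max(0, u - capacity) for u in used.values())
    return len(used) + weight * overflow

loads = {"web": 6, "db": 6, "cache": 3}
placement = repair({"web": 0, "db": 0, "cache": 0}, loads, capacity=10)
```

Repair always yields a feasible individual; the penalty variant keeps infeasible individuals but ranks them behind feasible ones, which is the trade-off the two GGA versions explore.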

Relevance: 20.00%

Abstract:

In this paper, a polynomial time algorithm is presented for solving the Eden problem for graph cellular automata. The algorithm is based on our neighborhood elimination operation which removes local neighborhood configurations which cannot be used in a pre-image of a given configuration. This paper presents a detailed derivation of our algorithm from first principles, and a detailed complexity and accuracy analysis is also given. In the case of time complexity, it is shown that the average case time complexity of the algorithm is \Theta(n^2), and the best and worst cases are \Omega(n) and O(n^3) respectively. This represents a vast improvement in the upper bound over current methods, without compromising average case performance.

Relevance: 20.00%

Abstract:

Expert searchers engage with information as information brokers, researchers, reference librarians, information architects, faculty who teach advanced search, and in a variety of other information-intensive professions. Their experiences are characterized by a profound understanding of information concepts and skills, and by an agile ability to apply this knowledge to interacting with, and having an impact on, the information environment. This study explored the learning experiences of searchers to understand the acquisition of search expertise. The research question was: What can be learned about becoming an expert searcher from the learning experiences of proficient novice searchers and highly experienced searchers? The key objectives were: (1) to explore the existence of threshold concepts in search expertise; and (2) to improve our understanding of how search expertise is acquired and how novice searchers, intent on becoming experts, can learn to search in more expert-like ways. The participant sample drew from two population groups: (1) highly experienced searchers with a minimum of 20 years of relevant professional experience, including LIS faculty who teach advanced search, information brokers, and search engine developers (11 subjects); and (2) MLIS students who had completed coursework in information retrieval and online searching and demonstrated exceptional ability (9 subjects). Using these two groups allowed a nuanced understanding of the experience of learning to search in expert-like ways, with data from those who search at a very high level as well as those who may be actively developing expertise. The study used semi-structured interviews, search tasks with think-aloud narratives, and talk-after protocols. Searches were screen-captured with simultaneous audio-recording of the think-aloud narrative. Data were coded and analyzed both manually and using NVivo9. Grounded theory allowed categories and themes to emerge from the data.
Categories represented conceptual knowledge and attributes of expert searchers. In accord with grounded theory method, once theoretical saturation was achieved, during the final stage of analysis the data were viewed through the lenses of existing theoretical frameworks. For this study, threshold concept theory (Meyer & Land, 2003) was used to explore which concepts might be threshold concepts. Threshold concepts have been used to explore transformative learning portals in subjects ranging from economics to mathematics. A threshold concept has five defining characteristics: transformative (causing a shift in perception), irreversible (unlikely to be forgotten), integrative (unifying separate concepts), troublesome (initially counter-intuitive), and possibly bounded. The themes that emerged provided evidence of four concepts which had the characteristics of threshold concepts. The first three were: information environment (the total information environment is perceived and understood); information structures (content, index structures, and retrieval algorithms are understood); and information vocabularies (fluency in search behaviors related to language, including natural language, controlled vocabulary, and finesse using proximity, truncation, and other language-based tools). The fourth threshold concept was concept fusion, the integration of the other three threshold concepts, further defined by three properties: visioning (anticipating next moves), being light on one's 'search feet' (the dancing property), and a profound ontological shift (identity as searcher). In addition to the threshold concepts, findings were reported that were not concept-based, including praxes and traits of expert searchers. A model of search expertise is proposed with the four threshold concepts at its core; it also integrates the traits and praxes elicited from the study, attributes likewise long recognized in LIS research as present in professional searchers.
The research provides a deeper understanding of the transformative learning experiences involved in the acquisition of search expertise. It adds to our understanding of search expertise in the context of today's information environment and has implications for teaching advanced search, for research more broadly within library and information science, and for methodologies used to explore threshold concepts.

Relevance: 20.00%

Abstract:

With the rapid growth of information on the Web, the study of information searching has attracted increased interest. Information behaviour (IB) researchers and information systems (IS) developers are continuously exploring user-Web search interactions in order to understand users and to provide assistance with their information searching. In attempting to develop models of IB, several studies have identified various factors that govern users' information searching and information retrieval (IR), such as age, gender, prior knowledge and task complexity. However, how users' contextual factors, such as cognitive styles, affect Web search interactions has not been clearly explained by current models of Web searching and IR. This study explores the influence of users' cognitive styles on their Web search behaviour. The main goal of the study is to enhance Web search models with a better understanding of how these cognitive styles affect Web searching. Modelling Web search behaviour with a greater understanding of users' cognitive styles can help information science researchers and IS designers to bridge the semantic gap between the user and the IS. To achieve the aims of the study, a user study with 50 participants was conducted. The study adopted a mixed-method approach incorporating several data collection strategies to gather a range of qualitative and quantitative data. The study utilised pre-search and post-search questionnaires to collect the participants' demographic information and their level of satisfaction with the search interactions. Riding's (1991) Cognitive Style Analysis (CSA) test was used to assess the participants' cognitive styles. Participants completed three predesigned search tasks, and the complete user-Web search interactions, including think-aloud narratives, were captured using a monitoring program.
Data analysis involved several qualitative and quantitative techniques: the quantitative data gave rise to detailed findings about users' Web searching and cognitive styles, while the qualitative data enriched the findings with illustrative examples. The study results provide valuable insights into Web searching behaviour among users with different cognitive styles. The findings of the study extend our understanding of Web search behaviour and how users search for information on the Web. Three key findings emerged:

• Users' Web search behaviour was demonstrated through information searching strategies, Web navigation styles, query reformulation behaviour and information processing approaches while performing Web searches. The manner in which these Web search patterns were demonstrated varied among the different cognitive style groups.

• Users' cognitive styles influenced their information searching strategies, query reformulation behaviour, Web navigational styles and information processing approaches. Users with particular cognitive styles followed certain Web search patterns.

• Fundamental relationships were evident between users' cognitive styles and their Web search behaviours, and these relationships can be illustrated through modelling Web search behaviour.

Two models that depict the associations between Web search interactions, user characteristics and users' cognitive styles were developed. These models provide a greater understanding of Web search behaviour from the user perspective, particularly how users' cognitive styles influence their Web search behaviour. The significance of this research is twofold: it will provide insights for information science researchers, information system designers, academics, educators, trainers and librarians who want to better understand how users with different cognitive styles perform information searching on the Web; at the same time, it will provide assistance and support to users.
The major outcomes of this study are: 1) a comprehensive analysis of how users search the Web; 2) an extensive discussion of the implications of the models developed in this study for future work; and 3) a theoretical framework to bridge high-level search models and cognitive models.

Relevance: 20.00%

Abstract:

An Application Specific Instruction-set Processor (ASIP) is a specialized processor tailored to run a particular application or set of applications efficiently. However, when there are multiple candidate applications in the application's domain, it is difficult and time-consuming to find the optimal set of applications to implement. Existing ASIP design approaches perform this selection manually, based on a designer's knowledge. We help cut down the number of candidate applications by devising a classification method that clusters similar applications based on the special-purpose operations they share. This provides a significant reduction in the comparison overhead while resulting in customized ASIP instruction sets that can benefit a whole family of related applications. Our method gives users the ability to quantify the degree of similarity between the sets of shared operations, in order to control the size of clusters. A case study involving twelve algorithms confirms that our approach can successfully cluster similar algorithms based on the similarity of their component operations.
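A minimal sketch of the clustering idea, under assumptions the abstract does not spell out: Jaccard similarity as the degree-of-similarity measure, greedy single-link grouping, and invented operation and application names.

```python
# Hedged sketch: applications are represented by the sets of
# special-purpose operations they use; a user-chosen similarity
# threshold controls how aggressively they are grouped.

def jaccard(a, b):
    """Degree of similarity between two operation sets, in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 1.0

def cluster_apps(op_sets, threshold):
    """Greedy grouping: join an app to the first cluster whose seed
    application is at least `threshold` similar; else start a cluster."""
    clusters = []
    for name, ops in op_sets.items():
        for cl in clusters:
            if jaccard(ops, op_sets[cl[0]]) >= threshold:
                cl.append(name)
                break
        else:
            clusters.append([name])
    return clusters

apps = {
    "fir_filter":  {"mac", "shift", "load"},
    "iir_filter":  {"mac", "shift", "load", "div"},
    "aes_encrypt": {"xor", "sbox", "rotate"},
}
groups = cluster_apps(apps, threshold=0.5)
```

Raising the threshold yields many small, tightly related clusters; lowering it yields fewer, broader clusters, which is the size-control knob the abstract describes.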

Relevance: 20.00%

Abstract:

The paper introduces the design of robust current and voltage control algorithms for a grid-connected three-phase inverter that is interfaced to the grid through a high-bandwidth three-phase LCL filter. The algorithms are based on state feedback control, designed in a systematic approach and improved by using oversampling to deal with the issues arising from the high-bandwidth filter. An adaptive loop delay compensation method has also been adopted to minimize the adverse effects of loop delay in the digital controller and to increase the robustness of the control algorithms in the presence of parameter variations. Simulation results are presented to validate the effectiveness of the proposed algorithms.
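The state feedback idea the design builds on can be illustrated on a toy plant. The inverter's LCL-filter model and controller gains are not reproduced here; the scalar system and numbers below are invented. A feedback law u[k] = -K·x[k] moves the closed-loop pole of x[k+1] = a·x[k] + b·u[k] from a to a - bK, so choosing K places the pole.

```python
# Toy illustration of discrete-time state feedback (invented numbers,
# not the paper's inverter model).

def simulate(a, b, K, x0, steps):
    """Run x[k+1] = a*x[k] + b*u[k] with the feedback law u = -K*x."""
    x = x0
    traj = [x]
    for _ in range(steps):
        u = -K * x           # state feedback law
        x = a * x + b * u    # plant update
        traj.append(x)
    return traj

a, b = 1.2, 0.5              # open loop unstable: pole at 1.2
K = (a - 0.5) / b            # place the closed-loop pole at 0.5
traj = simulate(a, b, K, x0=1.0, steps=10)
```

With the pole moved inside the unit circle the state decays geometrically, which is the stabilising role feedback gains play in the full multi-state LCL design.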

Relevance: 20.00%

Abstract:

The assessment of choroidal thickness from optical coherence tomography (OCT) images of the human choroid is an important clinical and research task, since it provides valuable information regarding the eye's normal anatomy and physiology, and changes associated with various eye diseases and the development of refractive error. Due to the time consuming and subjective nature of manual image analysis, there is a need for the development of reliable objective automated methods of image segmentation to derive choroidal thickness measures. However, the detection of the two boundaries which delineate the choroid is a complicated and challenging task, in particular the detection of the outer choroidal boundary, due to a number of issues including: (i) the vascular ocular tissue is non-uniform and rich in non-homogeneous features, and (ii) the boundary can have a low contrast. In this paper, an automatic segmentation technique based on graph-search theory is presented to segment the inner choroidal boundary (ICB) and the outer choroidal boundary (OCB) to obtain the choroid thickness profile from OCT images. Before the segmentation, the B-scan is pre-processed to enhance the two boundaries of interest and to minimize the artifacts produced by surrounding features. The algorithm to detect the ICB is based on a simple edge filter and a directional weighted map penalty, while the algorithm to detect the OCB is based on OCT image enhancement and a dual brightness probability gradient. The method was tested on a large data set of images from a pediatric (1083 B-scans) and an adult (90 B-scans) population, which were previously manually segmented by an experienced observer. The results demonstrate that the proposed method provides robust detection of the boundaries of interest and is a useful tool for extracting clinical data.
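The core graph-search step in this family of methods can be sketched as a dynamic program over a per-pixel cost map: find the minimum-cost path that visits one row per column, moving at most one row between columns. This is a minimal sketch only; the paper's pre-processing, edge filter and dual brightness probability gradient are not reproduced, and the toy cost map is invented.

```python
# Hedged sketch of graph-search layer segmentation: dynamic programming
# for the cheapest left-to-right path through a (rows x cols) cost map,
# with a +/-1 row smoothness constraint between adjacent columns.

def min_cost_boundary(cost):
    rows, cols = len(cost), len(cost[0])
    acc = [[cost[r][0] for r in range(rows)]]  # accumulated cost, col 0
    back = []                                  # backpointers per column
    for c in range(1, cols):
        col_acc, col_back = [], []
        for r in range(rows):
            prev = [(acc[-1][pr], pr) for pr in (r - 1, r, r + 1)
                    if 0 <= pr < rows]
            best, pr = min(prev)
            col_acc.append(best + cost[r][c])
            col_back.append(pr)
        acc.append(col_acc)
        back.append(col_back)
    # Backtrack from the cheapest row in the last column.
    r = min(range(rows), key=lambda row: acc[-1][row])
    path = [r]
    for c in range(cols - 2, -1, -1):
        r = back[c][r]
        path.append(r)
    return path[::-1]

toy = [
    [9, 9, 9, 9],
    [1, 9, 9, 9],
    [9, 1, 1, 9],
    [9, 9, 9, 1],
]
boundary = min_cost_boundary(toy)  # boundary row index per column
```

In a real pipeline the cost map would come from the boundary-enhancing filters the paper describes, and the returned row indices would trace the ICB or OCB across the B-scan.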

Relevance: 20.00%

Abstract:

This paper studies the pure framing effect of price discounts, focusing on its impact on consumer search behavior. In a simple two-shop search experiment, we compare search behavior in base treatments (where both shops post net prices without discounts) to discount treatments (where either the first or the second shop posts gross prices with separate discount offers, keeping the net prices constant). Although the objective search problems are identical across treatments, subjects search less in discount frames, irrespective of where the discount is offered. There is evidence that subjects base their decisions on salient characteristics of the situation rather than on the objective price information.