360 results for gravitational search algorithm


Relevance:

20.00%

Publisher:

Abstract:

Aim: Frail older people typically suffer from several chronic diseases, receive multiple medications and are more likely to be institutionalized in residential aged care facilities. In such patients, optimizing prescribing and avoiding the use of high-risk medications might prevent adverse events. The present study aimed to develop a pragmatic, easily applied algorithm for medication review to help clinicians identify and discontinue potentially inappropriate high-risk medications. Methods: The literature was searched for robust evidence of adverse effects associated with potentially inappropriate medications in older patients, in order to identify high-risk medications. Prior research into the cessation of potentially inappropriate medications in older patients in different settings was synthesized into a four-step algorithm for incorporation into clinical assessment protocols, particularly for patients in residential aged care facilities. Results: The algorithm comprises several steps leading to individualized prescribing recommendations: (i) identify a high-risk medication; (ii) ascertain the current indications for the medication and assess their validity; (iii) assess whether the drug is providing ongoing symptomatic benefit; and (iv) consider withdrawing, altering or continuing the medication. Decision support resources were developed to complement the algorithm and ensure a systematic and patient-centered approach to medication discontinuation. These include a comprehensive list of high-risk medications with the reasons for their inappropriateness, lists of alternative treatments, and suggested medication withdrawal protocols. Conclusions: The algorithm captures a range of clinical scenarios involving potentially inappropriate medications, and offers an evidence-based approach to identifying and, if appropriate, discontinuing such medications. Studies are required to evaluate the algorithm's effects on prescribing decisions and patient outcomes.
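The four-step logic above can be sketched as a simple decision function. This is only an illustration: the boolean inputs and the recommendation strings are hypothetical stand-ins for the clinical judgements and decision-support resources the study describes.

```python
def review_medication(is_high_risk, indication_valid, symptomatic_benefit):
    """Sketch of the four-step review; inputs are clinician judgements."""
    # step (i): only high-risk medications proceed through the review
    if not is_high_risk:
        return "continue"
    # step (ii): no valid current indication -> candidate for withdrawal
    if not indication_valid:
        return "consider withdrawal"
    # step (iii): no ongoing symptomatic benefit -> withdraw or alter
    if not symptomatic_benefit:
        return "consider withdrawal or alteration"
    # step (iv): otherwise continue, subject to ongoing review
    return "continue with monitoring"
```

In practice each judgement would be backed by the algorithm's decision support resources, e.g. the list of high-risk medications for step (i).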


BACKGROUND: Genetic variation contributes to the risk of developing endometriosis. This review summarizes gene mapping studies in endometriosis and the prospects of finding gene pathways contributing to disease using the latest genome-wide strategies. METHODS: To identify candidate-gene association studies of endometriosis, a systematic literature search was conducted in PubMed for publications up to 1 April 2008, using the search terms 'endometriosis' plus 'allele' or 'polymorphism' or 'gene'. Papers were included if they provided information on both case and control selection, reported allelic and/or genotypic results for named germ-line polymorphisms, and were published in English. RESULTS: Genetic variants in 76 genes have been examined for association, but none shows convincing evidence of replication in multiple studies. There is evidence for genetic linkage to chromosomes 7 and 10, but the genes (or variants) in these regions contributing to disease risk have yet to be identified. Genome-wide association is a powerful method that has been successful in locating genetic variants contributing to a range of common diseases. Several groups are planning these studies in endometriosis. For this to be successful, the endometriosis research community must work together to genotype sufficient cases, using clearly defined disease classifications, and conduct the necessary replication studies in several thousands of cases and controls. CONCLUSIONS: Genes with convincing evidence for association with endometriosis are likely to be identified in large genome-wide studies. This will provide a starting point for functional and biological studies to develop better diagnosis and treatment for this debilitating disease.
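As a toy illustration of the allelic association statistics such candidate-gene case-control studies report, the following computes an allelic odds ratio from per-allele counts; the counts used here are invented, not from any study in the review.

```python
def allelic_odds_ratio(case_a, case_b, ctrl_a, ctrl_b):
    """Odds of carrying allele A rather than B in cases, relative to
    controls. A ratio near 1 suggests no association with disease."""
    return (case_a / case_b) / (ctrl_a / ctrl_b)

# hypothetical counts: allele A appears enriched in cases
or_example = allelic_odds_ratio(60, 40, 50, 50)
```

A real study would add a confidence interval and a significance test, and replication in independent samples is what the review finds lacking.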


This research is a step forward in discovering knowledge from databases with complex structures such as trees or graphs. Several data mining algorithms are developed, based on a novel representation called Balanced Optimal Search, for extracting implicit, previously unknown and potentially useful information such as patterns, similarities and various relationships from tree data; these algorithms also prove advantageous in analysing big data. The thesis focuses on analysing unordered tree data, a model that is robust to data inconsistency, irregularity and swift information changes, and that has therefore become popular and widely used in the era of big data.
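The Balanced Optimal Search representation itself is not specified in this abstract. The sketch below only illustrates the underlying idea of a canonical form for unordered trees: encoding children recursively and sorting them makes the result independent of sibling order. The tuple encoding and labels are invented for illustration.

```python
def canonical(label, children=()):
    """Canonical string for an unordered labelled tree: encode each
    child recursively, then sort, so sibling order cannot matter."""
    return "(" + label + "".join(sorted(canonical(*c) for c in children)) + ")"

# two trees that differ only in the order of their siblings
t1 = ("a", [("b", ()), ("c", ())])
t2 = ("a", [("c", ()), ("b", ())])
```

Both trees map to the same string, which is what lets mining algorithms detect repeated structure despite arbitrary sibling order.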


There is increasing interest in the use of UAVs for environmental research, such as tracking bush fires, volcanic eruptions, chemical accidents or pollution sources. The aim of this paper is to describe the theory and results of a bio-inspired plume tracking algorithm. A method for generating sparse plumes in a virtual environment was also developed. Results indicated the ability of the algorithms to track plumes in 2D and 3D. The system has been tested in hardware-in-the-loop (HIL) simulations and in flight using a CO2 gas sensor mounted on a multi-rotor UAV. The UAV is controlled by the plume tracking algorithm running on the ground control station (GCS).
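A plume tracker in its simplest form can be sketched as greedy concentration-following. This is not the paper's bio-inspired method, just a minimal baseline on an invented Gaussian plume field: sample the field at neighbouring points and step toward the highest reading.

```python
import math

def track_plume(concentration, start, step=1.0, iters=50):
    """Greedy tracker sketch: move to whichever neighbouring sample has
    the highest concentration, stopping when no neighbour improves."""
    x, y = start
    for _ in range(iters):
        candidates = [(x + step * dx, y + step * dy)
                      for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1), (0, 0))]
        best = max(candidates, key=lambda p: concentration(*p))
        if best == (x, y):   # local maximum reached
            break
        x, y = best
    return x, y

# hypothetical Gaussian plume centred at (3, -2)
plume = lambda x, y: math.exp(-((x - 3) ** 2 + (y + 2) ** 2))
```

Real plumes are sparse and intermittent, which is why bio-inspired strategies (casting, surging) are used instead of pure gradient following; the sparse-plume generator in the paper exists precisely to test that case.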


Long-term unemployment of older people can have severe consequences for individuals, communities and, ultimately, economies, and is therefore a serious concern in countries with an ageing population. However, the interplay of chronological age and other individual-difference characteristics in predicting older job seekers' job search is not yet well understood. This study investigated relationships among age, proactive personality, occupational future time perspective (FTP) and job search intensity in 182 job seekers aged between 43 and 77 years in Australia. Results were mostly consistent with expectations based on a combination of socio-emotional selectivity theory and the notion of compensatory psychological resources. Proactive personality was positively related, and age negatively related, to job search intensity. Age moderated the relationship between proactive personality and job search intensity, such that the relationship was stronger at higher than at lower ages. One dimension of occupational FTP (perceived remaining time in the occupational context) mediated this moderating effect, but not the overall relationship between age and job search intensity. Implications for future research, including the interplay of occupational FTP and proactive personality, and some tentative practical implications are discussed.


Aims: We combine measurements of weak gravitational lensing from the CFHTLS-Wide survey, type Ia supernovae from CFHT SNLS and CMB anisotropies from WMAP5 to obtain joint constraints on cosmological parameters, in particular the dark-energy equation-of-state parameter w. We assess the influence of systematics in the data on the results and look for possible correlations with cosmological parameters. Methods: We implemented an MCMC algorithm to sample the parameter space of a flat CDM model with a dark-energy component of constant w. Systematics in the data are parametrised and included in the analysis. We determine the influence of photometric calibration of the SNIa data on cosmological results by calculating the response of the distance modulus to photometric zero-point variations. The weak-lensing data set is tested for anomalous field-to-field variations and a systematic shape-measurement bias for high-redshift galaxies. Results: Ignoring photometric uncertainties for SNLS biases cosmological parameters by at most 20% of the statistical errors when using supernovae alone; the parameter uncertainties are underestimated by 10%. The weak-lensing field-to-field variance between 1 deg^2 MegaCam pointings is 5-15% higher than predicted from N-body simulations. We find no bias in the lensing signal at high redshift, within the framework of a simple model and marginalising over cosmological parameters. Assuming a systematic underestimation of the lensing signal, the normalisation increases by up to 8%. Combining all three probes we obtain -0.10 < 1 + w < 0.06 at 68% confidence (-0.18 < 1 + w < 0.12 at 95%), including systematic errors. Our results are therefore consistent with a cosmological constant (w = -1). Systematics in the data increase the error bars by up to 35%; the best-fit values change by less than 0.15.
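The workhorse of such an analysis is an MCMC sampler. The minimal Metropolis sketch below samples a one-dimensional toy posterior on w (an invented Gaussian centred on w = -1); the paper's chains explore a full multi-dimensional parameter space with real data likelihoods.

```python
import math
import random

def metropolis(log_post, start, n=2000, step=0.5, seed=1):
    """Minimal Metropolis sampler: propose a Gaussian move and accept
    it with probability min(1, posterior ratio)."""
    random.seed(seed)
    samples, x = [], start
    lp = log_post(x)
    for _ in range(n):
        prop = x + random.gauss(0.0, step)
        lp_prop = log_post(prop)
        if math.log(random.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# toy posterior on w: Gaussian of width 0.1 centred on w = -1
chain = metropolis(lambda w: -0.5 * ((w + 1.0) / 0.1) ** 2, start=-0.5)
```

The histogram of the chain (after discarding burn-in) approximates the posterior; marginalising over nuisance parameters, as done for the systematics in the paper, amounts to simply ignoring those coordinates of each sample.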


Web data can often be represented in free-tree form; however, few free tree mining methods exist. In this paper, a computationally fast algorithm, FreeS, is presented to discover all frequently occurring free subtrees in a database of labelled free trees. FreeS is designed using an optimal canonical form, BOCF, which can uniquely represent free trees even in the presence of isomorphism. To avoid enumerating false positive candidates, it utilises an enumeration approach based on a tree-structure-guided scheme. This paper presents lemmas introducing conditions that constrain the generation of free tree candidates during enumeration. An empirical study using both real and synthetic datasets shows that FreeS is scalable and significantly outperforms (by a few orders of magnitude) the state-of-the-art frequent free tree mining algorithms HybridTreeMiner and FreeTreeMiner.
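FreeS itself enumerates complete subtrees via BOCF; as a much simpler illustration of the support counting at the core of frequent free-tree mining, the following counts single labelled edges, the level-one candidates of an Apriori-style miner. The rooted-tuple tree encoding is invented for the example; free trees are unrooted, so edges are stored with sorted endpoints.

```python
from collections import Counter

def frequent_edges(trees, minsup):
    """Return labelled edges that occur in at least `minsup` trees.
    Support counts each tree once, regardless of repeat occurrences."""
    support = Counter()
    for tree in trees:
        edges, stack = set(), [tree]
        while stack:
            label, children = stack.pop()
            for child in children:
                edges.add(tuple(sorted((label, child[0]))))
                stack.append(child)
        support.update(edges)
    return {e for e, n in support.items() if n >= minsup}

# tiny invented database of two labelled trees
db = [("a", [("b", []), ("c", [])]),
      ("a", [("b", [])])]
```

A full miner would then grow frequent candidates edge by edge, using a canonical form such as BOCF to avoid generating the same subtree twice under isomorphism.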


A5-GMR-1 is a synchronous stream cipher used to provide confidentiality for communications between satellite phones and satellites. The keystream generator may be considered a finite state machine with an internal state of 81 bits. The design is based on four linear feedback shift registers, three of which are irregularly clocked. The keystream generator takes a 64-bit secret key and a 19-bit frame number as inputs, and produces an output keystream of length between 2^8 and 2^10 bits. Analysis of the initialisation process for the keystream generator reveals serious flaws that significantly reduce the number of distinct keystreams the generator can produce. Multiple (key, frame number) pairs produce the same keystream, and the relationship between the various pairs is easy to determine. Additionally, many of the keystream sequences produced are phase-shifted versions of each other, for very small phase shifts. These features increase the effectiveness of generic time-memory trade-off attacks on the cipher, making such attacks feasible.
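The registers at the heart of such a design can be sketched as a Fibonacci LFSR. The toy 3-bit register below is illustrative only; A5-GMR-1 combines four much longer registers, three of them irregularly clocked, and its tap positions are not those shown here.

```python
def lfsr_stream(state, taps, nbits, n):
    """Fibonacci LFSR: output the low bit each step, shift right, and
    feed the XOR of the tapped bit positions back into the top bit."""
    out = []
    for _ in range(n):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return out

# maximal-length 3-bit register (taps at bits 0 and 1): period 2**3 - 1 = 7
seq = lfsr_stream(1, (0, 1), 3, 14)
```

A plain LFSR is linear and trivially predictable, which is why designs like A5-GMR-1 add irregular clocking and nonlinear combining; the attacks in the paper exploit weaknesses in how the key and frame number are loaded into these registers, not the registers themselves.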


The sugarcane transport system plays a critical role in the overall performance of Australia's sugarcane industry. An inefficient transport system interrupts the raw sugarcane harvesting process, delays delivery of sugarcane to the mill, degrades sugar quality, increases the use of empty bins, and adds to sugarcane production costs. These negative effects create an urgent need for rail schedulers to develop efficient sugarcane transport schedules. In this study, a multi-objective mixed integer programming (MIP) model is developed to produce an industry-oriented scheduling optimiser for the sugarcane rail transport system. An exact MIP solver (IBM ILOG CPLEX) is applied to minimise the makespan and the total operating time as the two objectives. Moreover, a Siding Neighbourhood Search (SNS) algorithm is developed and integrated with Sidings Satisfaction Priorities (SSP) and Rail Conflict Elimination (RCE) algorithms to solve the problem more efficiently. The approach is applied to a real case study, the transport system of Kalamia Sugar Mill, a coastal locality about 1050 km northwest of Brisbane. Computational experiments indicate that high-quality solutions are obtainable in industry-scale applications.
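The paper's MIP and SNS algorithms are not reproduced here; as a minimal baseline for the makespan objective alone, the classic longest-processing-time list-scheduling heuristic assigns each job (the job durations and the two-locomotive setting below are invented) to the earliest-free resource.

```python
import heapq

def lpt_makespan(durations, machines):
    """Sort jobs longest-first, always give the next job to the
    least-loaded machine, and return the resulting makespan."""
    loads = [0] * machines
    heapq.heapify(loads)
    for d in sorted(durations, reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + d)
    return max(loads)
```

Such greedy heuristics provide quick upper bounds but can miss the optimum (for durations [3, 3, 2, 2, 2] on two machines LPT gives 7, while the optimum is 6), which motivates exact MIP models and dedicated neighbourhood search for industry-scale schedules with conflict constraints.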


We explore how a standardization effort (i.e., when a firm pursues standards to further innovation) involves different search processes for knowledge and innovation outcomes. Using an inductive case study of Vanke, a leading Chinese property developer, we show how varying degrees of knowledge complexity and codification combine to produce a typology of four types of search process: active, integrative, decentralized and passive, resulting in four types of innovation outcome: modular, radical, incremental and architectural. We argue that when the standardization effort in a firm involves highly codified knowledge, incremental and architectural innovation outcomes are fostered, while modular and radical innovations are hindered. We discuss how standardization efforts can result in a second-order innovation capability, and conclude by calling for comparative research in other settings to understand how standardization efforts can be suited to different types of search process in different industry contexts.


The use of UAVs for remote sensing tasks (e.g. agriculture, search and rescue) is increasing. The ability of a UAV to autonomously find a target and perform on-board decision making, such as descending to a new altitude or landing next to a target, is a desired capability. Computer-vision functionality allows the Unmanned Aerial Vehicle (UAV) to follow a designated flight plan, detect an object of interest, and change its planned path. In this paper we describe a low-cost, open-source system in which all image processing is performed on board the UAV by a Raspberry Pi 2 microprocessor interfaced with a camera. The Raspberry Pi and the autopilot are physically connected through a serial link and communicate via MAVProxy. The Raspberry Pi continuously monitors the flight path in real time through a USB camera module, and the algorithm checks whether the target has been captured. If the target is detected, its position in the frame is expressed in Cartesian coordinates and converted into estimated GPS coordinates. In parallel, the autopilot receives the target's approximate GPS location and decides how to guide the UAV to the new location. The system also has potential uses in precision agriculture, for detecting plant pests and disease outbreaks, which cause detrimental financial damage to crop yields if not detected early. Results show that the algorithm detects 99% of the objects of interest, and that the UAV is capable of navigating and making decisions on board.
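The pixel-to-GPS conversion step can be sketched as follows for a downward-facing camera under flat-ground, no-tilt assumptions. This is not the paper's implementation, and all parameter names are hypothetical; a real system would account for vehicle attitude and camera tilt.

```python
import math

def pixel_to_gps(px, py, img_w, img_h, alt_m, fov_deg, lat, lon):
    """Estimate the GPS position of a detection at pixel (px, py)."""
    # ground width covered by the image footprint at this altitude
    ground_w = 2 * alt_m * math.tan(math.radians(fov_deg / 2))
    m_per_px = ground_w / img_w
    # metre offsets of the detection from the image centre
    east = (px - img_w / 2) * m_per_px
    north = (img_h / 2 - py) * m_per_px
    # metres to degrees, small-offset approximation
    dlat = north / 111_320
    dlon = east / (111_320 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon
```

The approximation treats one degree of latitude as about 111.32 km and scales longitude by the cosine of latitude, which is adequate for the few-metre offsets seen from low-altitude flight.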


For many, particularly in the Anglophone world and Western Europe, it may be obvious that Google has a monopoly over online search and advertising, and that this is an undesirable state of affairs due to Google's ability to mediate information flows online. The baffling question may be why governments and regulators are doing little to nothing about this situation, given the increasingly pivotal importance of the internet and free-flowing communications in our lives. However, the law concerning monopolies, namely antitrust or competition law, works in a way the general public may find counter-intuitive. Monopolies themselves are not illegal. The conduct that is unlawful, i.e. abuse of that market power, is defined by a complex set of rules and revolves principally around economic harm suffered due to anticompetitive behavior. However, the effect of information monopolies over search, such as Google's, is more than just economic, and competition law does not address this. Furthermore, Google's collection and analysis of user data and its portfolio of related services make it difficult for others to compete. Such a situation may also explain why Google's established search rivals, Bing and Yahoo, have not managed to provide services that are as effective or popular as Google's own (on this issue see also the texts by Dirk Lewandowski and Astrid Mager in this reader). Users, however, are not entirely powerless. Google's business model rests, at least partially, on them, especially the data collected about them. If they stop using Google, then Google is nothing.


This paper presents a novel crop detection system applied to the challenging task of field-grown sweet pepper (capsicum) detection. The field-grown sweet pepper crop presents several challenges for robotic systems, such as the high degree of occlusion and the fact that the crop can have a colour similar to the background (green on green). To overcome these issues, we propose a two-stage system that performs per-pixel segmentation followed by region detection: the output of the segmentation is used to search for highly probable regions, which are then declared to be sweet peppers. We propose the novel use of the local binary pattern (LBP) for crop segmentation. This feature improves the accuracy of crop segmentation from an AUC of 0.10, for previously proposed features, to 0.56. Using the LBP feature as the basis for our two-stage algorithm, we are able to detect 69.2% of field-grown sweet peppers across three sites. This is an impressive result given that the average detection accuracy of people viewing the same colour imagery is 66.8%.
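LBP is a standard texture descriptor; the minimal 3x3 sketch below (not necessarily the paper's exact variant) thresholds the eight neighbours against the centre pixel and packs the comparison bits into a single code, which is then histogrammed over a region to describe its texture.

```python
def lbp_code(patch):
    """Basic 3x3 local binary pattern: bit i is set when the i-th
    neighbour (clockwise from top-left) is >= the centre pixel."""
    c = patch[1][1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for i, (r, col) in enumerate(order):
        if patch[r][col] >= c:
            code |= 1 << i
    return code
```

Because the code depends only on relative intensities, it captures texture rather than absolute colour, which is what makes it useful for the green-on-green segmentation problem the paper describes.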


This research investigates techniques for analysing long-duration acoustic recordings to help ecologists monitor birdcall activity. It develops a generalized algorithm for identifying a broad range of bird species, allowing ecologists to search for arbitrary birdcalls of interest rather than being restricted to the very limited number of species on which a recogniser has been trained. The algorithm helps ecologists find sounds of interest more efficiently by filtering out large volumes of unwanted sound and focusing only on birdcalls.
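In the spirit of the filtering step described above, a first pass over a long recording might simply discard quiet frames before any call matching. The sketch below is hypothetical, not the thesis's algorithm: it flags fixed-length frames whose mean squared amplitude exceeds a threshold.

```python
def energy_events(samples, frame, threshold):
    """Return start indices of frames whose mean squared amplitude
    exceeds `threshold`, so silent stretches can be skipped."""
    events = []
    for i in range(0, len(samples) - frame + 1, frame):
        energy = sum(s * s for s in samples[i:i + frame]) / frame
        if energy > threshold:
            events.append(i)
    return events
```

A real birdcall search would follow this with spectral analysis of the flagged frames, since energy alone cannot distinguish birdcalls from wind, rain or traffic.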


The legality of the operation of Google's search engine, and its liability as an Internet intermediary, has been tested in various jurisdictions on various grounds. In Australia, an ultimately unsuccessful case was brought against Google under the Australian Consumer Law concerning how it presents results from its search engine. Despite this failed claim, several complex issues were not adequately addressed in the case, including whether Google sufficiently distinguishes between the different parts of its search results page so as not to mislead or deceive consumers. This article addresses this question of consumer confusion by drawing on empirical survey evidence of Australian consumers' understanding of Google's search results layout. This evidence, the first of its kind in Australia, indicates some level of consumer confusion. The implications for future legal proceedings against Google, in Australia and in other jurisdictions, are discussed.