547 results for least common subgraph algorithm


Relevance: 20.00%

Abstract:

In cloud computing, resource allocation and scheduling of multiple composite web services is an important challenge. This is especially so in a hybrid cloud, where some free resources may be available from private clouds while other, fee-paying resources come from public clouds. Meeting this challenge involves two classical computational problems. One is assigning resources to each of the tasks in the composite web service. The other is scheduling the allocated resources when each resource may be used by more than one task and may be needed at different points in time. In addition, we must consider Quality-of-Service issues, such as execution time and running costs. Existing approaches to resource allocation and scheduling in public clouds and grid computing are not applicable to this new problem. This paper presents a random-key genetic algorithm that solves this new resource allocation and scheduling problem. Experimental results demonstrate the effectiveness and scalability of the algorithm.
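To make the random-key idea concrete, the sketch below decodes a vector of random keys into a resource assignment and a task order and scores it with a toy makespan-plus-cost objective. It is a minimal illustration, not the paper's algorithm; the task durations, resource costs and weighting are assumptions. A full random-key genetic algorithm would evolve such chromosomes with crossover and mutation while this decoder guarantees feasibility.

```python
# Minimal sketch of random-key decoding for resource allocation and task
# ordering (hypothetical task/resource data; not the paper's fitness model).
import random

NUM_TASKS = 5
NUM_RESOURCES = 3
TASK_DURATION = [4, 2, 6, 3, 5]      # assumed processing times
RESOURCE_COST = [0.0, 1.0, 1.5]      # e.g. free private vs. paid public resources

def decode(chromosome):
    """Map a vector of random keys in [0, 1) to an allocation and a task order."""
    alloc_keys, order_keys = chromosome[:NUM_TASKS], chromosome[NUM_TASKS:]
    allocation = [int(k * NUM_RESOURCES) for k in alloc_keys]
    order = sorted(range(NUM_TASKS), key=lambda t: order_keys[t])
    return allocation, order

def fitness(chromosome):
    """Toy objective: makespan plus running cost (illustrative weighting only)."""
    allocation, order = decode(chromosome)
    finish = [0.0] * NUM_RESOURCES
    cost = 0.0
    for task in order:
        res = allocation[task]
        finish[res] += TASK_DURATION[task]
        cost += TASK_DURATION[task] * RESOURCE_COST[res]
    return max(finish) + cost

if __name__ == "__main__":
    random.seed(1)
    population = [[random.random() for _ in range(2 * NUM_TASKS)] for _ in range(20)]
    best = min(population, key=fitness)
    print("best fitness in initial population:", fitness(best))
```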

Relevance: 20.00%

Abstract:

Work-related driving crashes are the most common cause of work-related injury, death, and absence from work in Australia and overseas. Surprisingly, however, limited attention has been given to initiatives designed to improve safety outcomes in the work-related driving setting. This paper presents preliminary findings from a research project designed to examine the effects of increasing work-related driving safety discussions on the relationship between drivers and their supervisors and on motivations to drive safely. The project was conducted within a community nursing population, in which 112 drivers were matched with 23 supervisors. To establish discussions between supervisors and drivers, safety sessions were conducted on a monthly basis with the drivers' supervisors. At these sessions, the researcher presented context-specific, audio-based anti-speeding messages. Throughout the course of the intervention and following each of these safety sessions, supervisors were instructed to ensure that all drivers within their workgroup listened to each anti-speeding message at least once a fortnight. In addition, supervisors were encouraged to frequently promote the anti-speeding message through any contact they had with their drivers (i.e., face-to-face, email, SMS text, and/or paper-based contact). Fortnightly discussions were subsequently held with drivers, in which the researchers ascertained the number and type of discussions supervisors engaged in with their drivers. These discussions also assessed drivers' perceptions of the group safety climate. In addition to the fortnightly discussion, drivers completed a daily speed reporting form which assessed the proportion of their driving day spent knowingly over the speed limit. As predicted, the results showed that if supervisors reported a good safety climate prior to the intervention, increasing the number of safety discussions resulted in drivers reporting a high-quality relationship (i.e., leader-member exchange) with their supervisor post-intervention. In addition, if drivers reported a good safety climate, increasing the number of discussions resulted in increased motivation to drive safely post-intervention. Motivation to drive safely prior to the intervention also predicted self-reported speeding over the subsequent three months of reporting. These results suggest that safety discussions play an important role in improving the exchange between supervisors and their drivers and drivers' subsequent motivation to drive safely and, in turn, self-reported speeding.

Relevance: 20.00%

Abstract:

Stereo vision is a method of depth perception in which depth information is inferred from two (or more) images of a scene taken from different perspectives. Applications of stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics, industrial automation and stereomicroscopy. A key issue in stereo vision is that of image matching, or identifying corresponding points in a stereo pair. The difference in the positions of corresponding points in image coordinates is termed the parallax or disparity. When the orientation of the two cameras is known, corresponding points may be projected back to find the location of the original object point in world coordinates. Matching techniques are typically categorised according to the nature of the matching primitives they use and the matching strategy they employ. This report provides a detailed taxonomy of image matching techniques, including area-based, transform-based, feature-based, phase-based, hybrid, relaxation-based, dynamic programming and object-space methods. A number of area-based matching metrics, as well as the rank and census transforms, were implemented in order to investigate their suitability for a real-time stereo sensor for mining automation applications. The requirements of this sensor were speed, robustness, and the ability to produce a dense depth map. The Sum of Absolute Differences matching metric was the least computationally expensive; however, this metric was the most sensitive to radiometric distortion. Metrics such as the Zero Mean Sum of Absolute Differences and Normalised Cross Correlation were the most robust to this type of distortion but introduced additional computational complexity. The rank and census transforms were found to be robust to radiometric distortion, in addition to having low computational complexity. They are therefore prime candidates for a matching algorithm for a stereo sensor for real-time mining applications. A number of issues came to light during this investigation which may merit further work. These include devising a means to evaluate and compare disparity results of different matching algorithms, and finding a method of assigning a level of confidence to a match. Another issue of interest is the possibility of statistically combining the results of different matching algorithms in order to improve robustness.
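The sketch below illustrates two of the primitives mentioned above, the Sum of Absolute Differences cost and the census transform, on toy arrays; the window sizes, synthetic images and disparity range are assumptions rather than the report's experimental settings.

```python
# Minimal sketches of the SAD matching cost and the census transform on toy data.
import numpy as np

def sad_cost(left, right, row, col, disparity, half_win=1):
    """SAD between a window in the left image and a disparity-shifted window in the right."""
    lw = left[row - half_win:row + half_win + 1, col - half_win:col + half_win + 1]
    rc = col - disparity
    rw = right[row - half_win:row + half_win + 1, rc - half_win:rc + half_win + 1]
    return np.abs(lw.astype(int) - rw.astype(int)).sum()

def census_transform(image, half_win=1):
    """Encode each pixel as a bit string of 'neighbour < centre' comparisons."""
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.uint64)
    for r in range(half_win, h - half_win):
        for c in range(half_win, w - half_win):
            bits = 0
            for dr in range(-half_win, half_win + 1):
                for dc in range(-half_win, half_win + 1):
                    if dr == 0 and dc == 0:
                        continue
                    bits = (bits << 1) | int(image[r + dr, c + dc] < image[r, c])
            out[r, c] = bits
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    left = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
    right = np.roll(left, -2, axis=1)          # synthetic 2-pixel disparity
    costs = [sad_cost(left, right, 4, 5, d) for d in range(0, 4)]
    print("SAD costs per disparity:", costs, "-> best:", int(np.argmin(costs)))
    print("census code of left[4, 5]:", bin(int(census_transform(left)[4, 5])))
```

Matching on census codes would then compare bit strings with the Hamming distance rather than absolute intensity differences, which is what gives the transform its robustness to radiometric distortion.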

Relevance: 20.00%

Abstract:

Decentralised sensor networks typically consist of multiple processing nodes supporting one or more sensors. These nodes are interconnected via wireless communication. Practical applications of Decentralised Data Fusion have generally been restricted to Gaussian-based approaches such as the Kalman or Information Filter. This paper proposes the use of Parzen window estimates as an alternative representation for performing Decentralised Data Fusion. The common information between two nodes must be removed from any received estimates before local data fusion may occur; otherwise, estimates may become overconfident due to data incest. A closed-form approximation to the division of two estimates is described to enable conservative assimilation of incoming information at a node in a decentralised data fusion network. A simple example of tracking a moving particle with Parzen density estimates demonstrates how this algorithm allows conservative assimilation of network information.
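Below is a minimal sketch of a one-dimensional Parzen (kernel) density estimate together with a naive grid-based illustration of fusing two node estimates while dividing out their common information. The samples, bandwidth and grid are assumptions, and the paper's closed-form division approximation is not reproduced here.

```python
# Parzen density estimation and a naive grid-based fuse/divide illustration.
import numpy as np

def parzen(samples, grid, bandwidth=0.3):
    """Gaussian-kernel Parzen estimate evaluated on a grid of points."""
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs**2) / (bandwidth * np.sqrt(2 * np.pi))
    return kernels.mean(axis=1)

def fuse(p_a, p_b, p_common, grid):
    """Grid fusion: multiply the node estimates, divide out the common information."""
    fused = p_a * p_b / np.maximum(p_common, 1e-12)
    return fused / np.trapz(fused, grid)          # renormalise

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grid = np.linspace(-4, 4, 401)
    shared = rng.normal(0.0, 1.0, 200)             # information both nodes already hold
    node_a = parzen(np.concatenate([shared, rng.normal(0.5, 1.0, 100)]), grid)
    node_b = parzen(np.concatenate([shared, rng.normal(-0.5, 1.0, 100)]), grid)
    common = parzen(shared, grid)
    posterior = fuse(node_a, node_b, common, grid)
    print("fused density integrates to:", round(np.trapz(posterior, grid), 3))
```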

Relevance: 20.00%

Abstract:

Common mode voltage generated by a power converter, in combination with parasitic capacitive couplings, is a potential source of shaft voltage in an AC motor drive system. In this paper, a three-phase motor drive system supplied by a single-phase AC-DC diode rectifier is investigated with the aim of reducing shaft voltage. In this topology, the common mode voltage generated by the inverter is influenced by the AC-DC diode rectifier, because the placement of the neutral point changes with the rectifier circuit state. A pulse width modulation technique based on proper placement of the zero vectors is presented to reduce the common mode voltage level, which leads to a cost-effective shaft voltage reduction technique without load current distortion while keeping the switching frequency constant. Analysis and simulations are presented to investigate the proposed method.
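For context, the sketch below evaluates the inverter common mode voltage, taken as the average of the three pole voltages, for the eight switching states of an idealised two-level inverter referenced to the DC-link midpoint. The DC-link voltage is an assumption, and the rectifier-side behaviour discussed in the paper is not modelled.

```python
# Textbook-style illustration of common mode voltage per switching state.
VDC = 400.0  # assumed DC-link voltage in volts

def common_mode_voltage(sa, sb, sc, vdc=VDC):
    """CM voltage = average of the three pole voltages (±vdc/2 per leg)."""
    def pole(s):
        return vdc / 2 if s else -vdc / 2
    return (pole(sa) + pole(sb) + pole(sc)) / 3.0

if __name__ == "__main__":
    for state in range(8):
        sa, sb, sc = (state >> 2) & 1, (state >> 1) & 1, state & 1
        print(f"state {sa}{sb}{sc}: v_cm = {common_mode_voltage(sa, sb, sc):+7.1f} V")
    # The zero vectors 000 and 111 give the largest magnitude (±VDC/2), which is
    # why their placement is the focus of common-mode-reduction PWM schemes.
```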

Relevance: 20.00%

Abstract:

One of the research focuses in the integer least squares problem is the decorrelation technique, which reduces the number of integer parameter search candidates and improves the efficiency of the integer parameter search method. It remains a challenging issue for determining carrier phase ambiguities and plays a critical role in the future of high-precision GNSS positioning. Currently, three main decorrelation techniques are employed: integer Gaussian decorrelation, the Lenstra–Lenstra–Lovász (LLL) algorithm and the inverse integer Cholesky decorrelation (IICD) method. Although the performance of these three state-of-the-art methods has been proved and demonstrated, there is still potential for further improvement. To measure the performance of decorrelation techniques, the condition number is usually used as the criterion. Additionally, the number of grid points in the search space can be used directly as a performance measure, as it denotes the size of the search space. However, a smaller initial volume of the search ellipsoid does not always correspond to a smaller number of candidates. This research proposes a modified inverse integer Cholesky decorrelation (MIICD) method which improves the decorrelation performance over the other three techniques. The decorrelation performance of these methods was evaluated based on the condition number of the decorrelation matrix, the number of search candidates and the initial volume of the search space. Additionally, the success rate of decorrelated ambiguities was calculated for all methods to investigate the performance of ambiguity validation. The performance of the different decorrelation methods was tested and compared using both simulated and real data. The simulation scenarios employ an isotropic probabilistic model with a predetermined eigenvalue and without any geometry or weighting system constraints. The MIICD method outperformed the other three methods, with conditioning improvements over the LAMBDA method of 78.33% and 81.67% without and with the eigenvalue constraint respectively. The real data scenarios involve both a single-constellation case and a dual-constellation case. Experimental results demonstrate that, compared with LAMBDA, the MIICD method can significantly improve the efficiency of reducing the condition number, by 78.65% and 97.78% in the single-constellation and dual-constellation cases respectively. It also shows improvements in the number of search candidate points of 98.92% and 100% in the single-constellation and dual-constellation cases.
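The sketch below illustrates integer Gaussian decorrelation and the condition-number criterion on an invented two-dimensional ambiguity covariance matrix; it is a small didactic example, not the MIICD, IICD, LLL or LAMBDA implementation.

```python
# 2-D integer Gaussian decorrelation and condition-number comparison.
import numpy as np

def integer_gauss_decorrelate(Q):
    """Alternate integer Gauss transformations on a 2x2 covariance matrix until
    the off-diagonal term can no longer be reduced; returns (decorrelated Q, Z)."""
    Q = Q.astype(float).copy()
    Z = np.eye(2)
    while True:
        mu = int(round(float(Q[0, 1] / Q[1, 1])))   # reduce the off-diagonal via column 2
        T = np.array([[1.0, 0.0], [-mu, 1.0]])
        Q, Z = T.T @ Q @ T, Z @ T
        nu = int(round(float(Q[0, 1] / Q[0, 0])))   # reduce the off-diagonal via column 1
        T = np.array([[1.0, -nu], [0.0, 1.0]])
        Q, Z = T.T @ Q @ T, Z @ T
        if mu == 0 and nu == 0:
            return Q, Z

if __name__ == "__main__":
    Q = np.array([[10.0, 9.7], [9.7, 9.5]])         # strongly correlated float ambiguities
    Qz, Z = integer_gauss_decorrelate(Q)
    print("condition number before:", round(np.linalg.cond(Q), 1))
    print("condition number after :", round(np.linalg.cond(Qz), 1))
    print("unimodular transformation Z:\n", Z)
```

Because each transformation matrix has determinant one, the transformed ambiguities remain integers, while the reduced condition number shrinks the set of integer candidates that the search has to visit.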

Relevance: 20.00%

Abstract:

Background: The 2003 Bureau of Labor Statistics American Time Use Survey (ATUS) contains 438 distinct primary activity variables that can be analyzed with regard to how time is spent by Americans. The Compendium of Physical Activities is used to code physical activities derived from various surveys, logs, diaries, etc., to facilitate comparison of coded intensity levels across studies.

Methods: This paper describes the methods, challenges, and rationale for linking Compendium estimates of physical activity intensity (METs, metabolic equivalents) with all activities reported in the 2003 ATUS.

Results: The assigned ATUS intensity levels are not intended to compute the energy costs of physical activity in individuals. Instead, they are intended to be used to identify time spent in activities broadly classified by type and intensity. This function will complement public health surveillance systems and aid in policy and health-promotion activities. For example, at least one of the future projects of this process is the descriptive epidemiology of time spent in common physical activity intensity categories.

Conclusions: The process of metabolic coding of the ATUS by linking it with the Compendium of Physical Activities can make important contributions to our understanding of Americans' time spent in health-related physical activity.
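As a simplified illustration of this kind of linkage, the sketch below maps example activity codes to assumed MET values and buckets diary minutes into standard intensity categories; the codes, MET values and mapping are placeholders, not the published ATUS-Compendium linkage file.

```python
# Toy linkage of activity codes to MET values and intensity categories.
SAMPLE_MET_LOOKUP = {
    "020101": 3.3,   # e.g. interior cleaning (assumed value)
    "130124": 7.0,   # e.g. running (assumed value)
    "120303": 1.3,   # e.g. watching television (assumed value)
}

def intensity_category(met):
    """Common cutpoints: sedentary <1.5, light <3.0, moderate <6.0, else vigorous."""
    if met < 1.5:
        return "sedentary"
    if met < 3.0:
        return "light"
    if met < 6.0:
        return "moderate"
    return "vigorous"

def minutes_by_intensity(diary):
    """Aggregate diary minutes per intensity category for one respondent."""
    totals = {}
    for code, minutes in diary:
        met = SAMPLE_MET_LOOKUP.get(code)
        if met is None:
            continue                      # unlinked activity codes are skipped here
        cat = intensity_category(met)
        totals[cat] = totals.get(cat, 0) + minutes
    return totals

if __name__ == "__main__":
    diary = [("120303", 120), ("020101", 45), ("130124", 30)]
    print(minutes_by_intensity(diary))
```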

Relevance: 20.00%

Abstract:

Background: Although there are recommendations for the management of osteoarthritis (OA), little is known about how people with OA actually manage this chronic condition.

Purpose: The aims of this study were to identify the non-pharmacological and pharmacological therapies most commonly used for the management of hip or knee OA in a community-based sample of adults, and to compare these with evidence-based recommendations.

Methods: A questionnaire was mailed to 2200 adult members of Arthritis Queensland living in Brisbane, Australia. It included questions about OA symptoms, management therapies and demographic characteristics.

Results: Of the 485 participants (192 men, 293 women) with hip or knee OA who completed the questionnaire, most had mild to moderate symptoms. Ninety-six percent of participants (aged 27–95 years) reported using at least one non-pharmacological therapy, and 78% reported using at least one pharmacological therapy. The most common currently used non-pharmacological strategy was range-of-motion exercises (men 52%, women 61%, p=0.05) and the most common frequently used pharmacological strategy was glucosamine/chondroitin (men 51%, women 60%, ns). For the most highly recommended strategies, 65% of men and 54% of women had never attended an information/education course (p=0.04), and fewer than half (46% of women and 42% of men, p=0.03) were frequent users of anti-inflammatory agents.

Conclusion: The findings suggest that many people with knee or hip OA do not follow the most highly endorsed of the OARSI (Osteoarthritis Research Society International) recommendations for management of OA. Health professionals should be encouraged to recommend evidence-based therapies to their patients.

Relevance: 20.00%

Abstract:

Distributed pipeline asset systems are crucial to society. The deterioration of these assets and the optimal allocation of a limited budget for their maintenance are crucial challenges for water utility managers. Decision makers should be assisted with optimal solutions to select the best maintenance plan given the available resources and management strategies. Much research effort has been dedicated to the development of optimal strategies for the maintenance of water pipes. Most of these maintenance strategies are intended for scheduling individual water pipes; optimal group scheduling of replacement jobs for groups of pipes or other linear assets has so far received little attention in the literature. It is common practice for replacement planners to manually select two or three pipes, using ambiguous criteria, to group into one replacement job. This is clearly not the best solution for job grouping and may not be cost effective, especially when the total cost can run to several million dollars. In this paper, an optimal group scheduling scheme with three decision criteria for distributed pipeline asset maintenance is proposed. A Maintenance Grouping Optimization (MGO) model with multiple criteria is developed. An immediate challenge of such modelling is to deal with the scalability of the vast combinatorial solution space. To address this issue, a modified genetic algorithm is developed together with a Judgment Matrix, which corresponds to the various combinations of pipe replacement schedules. An industrial case study based on a section of a real water distribution network was conducted to test the new model. The results of the case study show that the new schedule generated a significant cost reduction compared with the schedule without grouped pipes.
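The toy sketch below shows why grouping pipe replacements into shared jobs can reduce cost when each job carries a fixed mobilisation charge; the pipe data, costs and the same-year grouping rule are assumptions, not the paper's MGO model or Judgment Matrix.

```python
# Toy comparison of ungrouped vs. grouped pipe replacement job costs.
MOBILISATION_COST = 10000.0           # assumed fixed cost per replacement job
PIPES = [                             # (pipe id, scheduled year, replacement cost)
    ("P1", 2026, 42000.0),
    ("P2", 2026, 38000.0),
    ("P3", 2027, 51000.0),
    ("P4", 2027, 47000.0),
    ("P5", 2029, 30000.0),
]

def total_cost(groups):
    """One mobilisation charge per group plus the individual pipe costs."""
    return sum(MOBILISATION_COST + sum(c for _, _, c in g) for g in groups)

def group_by_year(pipes):
    """Naive grouping rule: pipes scheduled in the same year share one job."""
    jobs = {}
    for pipe in pipes:
        jobs.setdefault(pipe[1], []).append(pipe)
    return list(jobs.values())

if __name__ == "__main__":
    ungrouped = [[p] for p in PIPES]
    grouped = group_by_year(PIPES)
    print("cost without grouping:", total_cost(ungrouped))
    print("cost with grouping   :", total_cost(grouped))
```

An optimisation such as the one described above would search over many candidate groupings like this, trading the mobilisation savings against criteria such as risk and service disruption.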

Relevance: 20.00%

Abstract:

Focusing on the conditions that an optimization problem may satisfy, so-called convergence conditions are proposed, and a stochastic optimization algorithm, named the DSZ algorithm, is then presented in order to deal with both unconstrained and constrained optimization. The principle is discussed in terms of the theoretical model of the DSZ algorithm, from which the practical model of the DSZ algorithm is derived. The efficiency of the practical model is demonstrated by comparison with similar algorithms, such as Enhanced Simulated Annealing (ESA), Monte Carlo Simulated Annealing (MCS), Sniffer Global Optimization (SGO), Directed Tabu Search (DTS) and the Genetic Algorithm (GA), using a set of well-known unconstrained and constrained optimization test cases. Further attention is given to strategies for optimizing high-dimensional unconstrained problems using the DSZ algorithm.

Relevance: 20.00%

Abstract:

Background: Concern about skin cancer is a common reason for people from predominantly fair-skinned populations to present to primary care doctors.

Objectives: To examine the frequency and body-site distribution of malignant, pre-malignant and benign pigmented skin lesions excised in primary care.

Methods: This prospective study conducted in Queensland, Australia, included 154 primary care doctors. For all excised or biopsied lesions, doctors recorded the patient's age and sex, body site, level of patient pressure to excise, and the clinical diagnosis. Histological confirmation was obtained through pathology laboratories.

Results: Of 9650 skin lesions, 57·7% were excised in males and 75·0% were excised in patients ≥50 years. The most common diagnoses were basal cell carcinoma (BCC) (35·1%) and squamous cell carcinoma (SCC) (19·7%). Compared with the whole body, the highest densities of SCC, BCC and actinic keratoses were observed on chronically sun-exposed areas of the body, including the face in males and females, the scalp and ears in males, and the hands in females. The density of BCC was also high on intermittently or rarely exposed body sites. Females, younger patients and patients with melanocytic naevi were significantly more likely to exert moderate/high levels of pressure on the doctor to excise.

Conclusions: More than half of the excised lesions were skin cancer, which mostly occurred on the more chronically sun-exposed areas of the body. Information on the type and body-site distribution of skin lesions can aid in the diagnosis and planned management of skin cancer and other skin lesions commonly presented in primary care.

Relevance: 20.00%

Abstract:

Fractures of long bones are sometimes treated using various types of fracture fixation devices, including internal plate fixators. These are specialised plates which are used to bridge the fracture gap(s) whilst anatomically aligning the bone fragments. The plate is secured in position by screws. The aim of such a device is to support and promote the natural healing of the bone. When using an internal fixation device, the clinician must decide upon many parameters: for example, the type of plate and where to position it, and how many screws to use and where to position them. While a number of experimental and computational studies on screw configuration have been reported in the literature, there is still inadequate information concerning the influence of screw configuration on fracture healing. Because screw configuration influences the amount of flexibility at the fracture site, it has a direct influence on the fracture healing process. Therefore, it is important that the chosen screw configuration does not inhibit the healing process. In addition to its impact on the fracture healing process, screw configuration plays an important role in the distribution of stresses in the plate due to the applied loads. A plate that experiences high stresses is prone to early failure; hence, the screw configuration used should not encourage the occurrence of high stresses. This project develops a computational program in the Fortran programming language to perform mathematical optimisation of the screw configuration of an internal fixation device, minimising the stress in the plate subject to constraints on interfragmentary movement. The optimal solution thus suggests the positioning and number of screws which satisfy the predefined constraints on interfragmentary movement. For a set of screw configurations, the interfragmentary displacement and the stress occurring in the plate were calculated by the Finite Element Method. The screw configurations were iteratively changed, and each time the corresponding interfragmentary displacements were compared with the predefined constraints. Additionally, the corresponding stress was compared with the previously calculated stress value to determine whether there was a reduction. This process was continued until an optimal solution was achieved. The optimisation program successfully predicted the optimal screw configuration in two cases. The first case was a simplified bone construct, for which the screw configuration solution was comparable with those recommended in the biomechanical literature. The second case was a femoral construct, for which the resultant screw configuration was similar to those used in clinical cases. The optimisation method and program developed in this study have shown potential for use in further investigations, with improvements to the optimisation criteria and the efficiency of the program.
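A schematic sketch of the constrained search described above is given below: screw configurations are enumerated, those violating assumed interfragmentary movement bounds are rejected, and the configuration with the lowest surrogate plate stress is kept. The evaluate() function is a made-up stand-in for the finite element analysis, and the hole count and movement bounds are assumptions; this is not the study's Fortran program.

```python
# Constrained enumeration of screw configurations with a surrogate FE evaluation.
from itertools import combinations

NUM_HOLES = 8                                   # assumed plate with 8 screw holes
IFM_MIN, IFM_MAX = 0.2, 1.0                     # assumed movement bounds in mm

def evaluate(config):
    """Placeholder for the FE step: returns (interfragmentary movement, plate stress)."""
    working_length = max(config) - min(config)  # span between the outermost screws
    ifm = 0.1 * working_length                  # crude surrogate: larger span, more movement
    stress = 400.0 / (len(config) + working_length)   # crude surrogate for peak plate stress
    return ifm, stress

def optimise(screws_per_fragment=2):
    """Keep the feasible configuration with the lowest surrogate plate stress."""
    best_config, best_stress = None, float("inf")
    for config in combinations(range(NUM_HOLES), 2 * screws_per_fragment):
        ifm, stress = evaluate(config)
        if IFM_MIN <= ifm <= IFM_MAX and stress < best_stress:
            best_config, best_stress = config, stress
    return best_config, best_stress

if __name__ == "__main__":
    config, stress = optimise()
    print("selected screw holes:", config, "surrogate plate stress:", round(stress, 1))
```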