288 results for Rule-based techniques
Abstract:
Long-term measurements of particle number size distribution (PNSD) produce a very large number of observations, and their analysis requires an efficient approach in order to produce results in the least possible time and with maximum accuracy. Clustering techniques are a family of sophisticated methods which have recently been employed to analyse PNSD data; however, very little information is available comparing the performance of different clustering techniques on PNSD data. This study aims to apply several clustering techniques (i.e. K-means, PAM, CLARA and SOM) to PNSD data, in order to identify and apply the optimum technique to PNSD data measured at 25 sites across Brisbane, Australia. A new method, based on the Generalised Additive Model (GAM) with a basis of penalised B-splines, was proposed to parameterise the PNSD data, and the temporal weight of each cluster was also estimated using the GAM. In addition, each cluster was associated with its possible source based on the results of this parameterisation, together with the characteristics of each cluster. The performance of the four clustering techniques was compared using the Dunn index and silhouette width validation values, and the K-means technique was found to perform best, with five clusters being the optimum. Therefore, five clusters were identified within the data using the K-means technique. The diurnal occurrence of each cluster was used together with other air quality parameters, temporal trends and the physical properties of each cluster in order to attribute each cluster to its source and origin. The five clusters were attributed to three major sources and origins: regional background particles, photochemically induced nucleated particles and vehicle-generated particles. Overall, clustering was found to be an effective technique for attributing each particle size spectrum to its source, and the GAM was suitable for parameterising the PNSD data.
These two techniques can help researchers immensely in analysing PNSD data for characterisation and source apportionment purposes.
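As a concrete illustration of the clustering-plus-validation step described above, the sketch below runs a minimal K-means and scores the resulting partition with the mean silhouette width. It is a toy in plain Python, with made-up 2-D points standing in for PNSD spectra, and is not the study's actual pipeline:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def kmeans(points, k, iters=100):
    """Plain Lloyd's k-means; seeds with the first k points (fine for toy data)."""
    centroids = list(points[:k])
    labels = [0] * len(points)
    for _ in range(iters):
        # assign each point to its nearest centroid
        labels = [min(range(k), key=lambda j: dist(p, centroids[j])) for p in points]
        # recompute centroids as cluster means
        new = []
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            new.append(tuple(sum(c) / len(members) for c in zip(*members))
                       if members else centroids[j])
        if new == centroids:
            break
        centroids = new
    return labels

def silhouette(points, labels):
    """Mean silhouette width s = (b - a) / max(a, b), one of the two validity
    indices named in the abstract (the Dunn index would be used analogously)."""
    scores = []
    for i, p in enumerate(points):
        own = [dist(p, q) for j, q in enumerate(points)
               if labels[j] == labels[i] and j != i]
        a = sum(own) / len(own) if own else 0.0
        b = min(
            sum(dist(p, q) for j, q in enumerate(points) if labels[j] == l)
            / labels.count(l)
            for l in set(labels) if l != labels[i]
        )
        scores.append((b - a) / max(a, b) if max(a, b) else 0.0)
    return sum(scores) / len(scores)

# two well-separated synthetic modes standing in for particle size spectra
pts = [(0.1, 0.2), (5.0, 5.1), (0.2, 0.1), (0.0, 0.0), (5.2, 4.9), (4.9, 5.0)]
labels = kmeans(pts, k=2)
print(round(silhouette(pts, labels), 2))
```

In practice a library implementation would be used (e.g. scikit-learn, or R's cluster package, which provides PAM and CLARA), and candidate cluster counts would be compared by their validity scores.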
Abstract:
Railway capacity determination and expansion are very important topics. In prior research, the competition between different entities, such as train services and train types on different network corridors, has been ignored, poorly modelled, or else assumed to be static. In response, a comprehensive set of multi-objective models has been formulated in this article to perform a trade-off analysis. These models determine the total absolute capacity of railway networks as the most equitable solution according to a clearly defined set of competing objectives. The models also perform a sensitivity analysis of capacity with respect to those competing objectives. The models have been extensively tested on a case study and their significant worth is shown. The models were solved using a variety of techniques, of which an adaptive ε-constraint method proved superior. In order to identify only the best solution, a simulated annealing meta-heuristic was implemented and tested. A linearisation technique based upon separable programming was also developed and shown to be superior in terms of solution quality, though inferior in terms of computational time.
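The ε-constraint idea mentioned above can be illustrated on a toy bi-objective problem: maximise one objective while constraining the other to be at least ε, then sweep ε to trace the trade-off frontier. The corridor-slot model below is invented for illustration and is not the paper's formulation:

```python
from itertools import product

# toy stand-in for a capacity trade-off: each corridor slot hosts either a
# passenger ("p") or a freight ("f") service
def objectives(plan):
    return plan.count("p"), plan.count("f")

def epsilon_constraint(n_slots, eps_values):
    """Epsilon-constraint sweep: maximise objective 1 (passenger capacity)
    subject to objective 2 (freight capacity) >= eps, brute-forced over all
    assignments; collecting the optima traces the trade-off frontier."""
    frontier = []
    for eps in eps_values:
        feasible = [plan for plan in product("pf", repeat=n_slots)
                    if objectives(plan)[1] >= eps]
        if feasible:
            best = max(feasible, key=lambda plan: objectives(plan)[0])
            frontier.append(objectives(best))
    return frontier

print(epsilon_constraint(4, range(5)))  # [(4, 0), (3, 1), (2, 2), (1, 3), (0, 4)]
```

An adaptive variant would choose the ε grid from the solutions found so far rather than fixing it in advance; real instances replace the brute-force inner loop with an integer-programming solver.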
Abstract:
Background There is evidence that family and friends influence children's decisions to smoke. Objectives To assess the effectiveness of interventions to help families stop children starting smoking. Search methods We searched 14 electronic bibliographic databases, including the Cochrane Tobacco Addiction Group specialized register, MEDLINE, EMBASE, PsycINFO, CINAHL, unpublished material, and key articles' reference lists. We performed free-text internet searches and targeted searches of appropriate websites, and hand-searched key journals not available electronically. We consulted authors and experts in the field. The most recent search was 3 April 2014. There were no date or language limitations. Selection criteria Randomised controlled trials (RCTs) of interventions with children (aged 5-12) or adolescents (aged 13-18) and families to deter tobacco use. The primary outcome was the effect of the intervention on the smoking status of children who reported no use of tobacco at baseline. Included trials had to report outcomes measured at least six months from the start of the intervention. Data collection and analysis We reviewed all potentially relevant citations and retrieved the full text to determine whether the study was an RCT and matched our inclusion criteria. Two authors independently extracted study data for each RCT and assessed them for risk of bias. We pooled risk ratios using a Mantel-Haenszel fixed effect model. Main results Twenty-seven RCTs were included. The interventions were very heterogeneous in the components of the family intervention, the other risk behaviours targeted alongside tobacco, the age of children at baseline and the length of follow-up. Two interventions were tested by two RCTs, one was tested by three RCTs and the remaining 20 distinct interventions were tested only by one RCT. Twenty-three interventions were tested in the USA, two in Europe, one in Australia and one in India.
The control conditions fell into two main groups: no intervention or usual care; or school-based interventions provided to all participants. These two groups of studies were considered separately. Most studies had a judgement of 'unclear' for at least one risk of bias criterion, so the quality of evidence was downgraded to moderate. Although there was heterogeneity between studies, there was little evidence of statistical heterogeneity in the results. We were unable to extract data from all studies in a format that allowed inclusion in a meta-analysis. There was moderate-quality evidence that family-based interventions had a positive impact on preventing smoking when compared to a no-intervention control. Nine studies (4810 participants) reporting smoking uptake amongst baseline non-smokers could be pooled, but eight studies with about 5000 participants could not be pooled because of insufficient data. The pooled estimate detected a significant reduction in smoking behaviour in the intervention arms (risk ratio [RR] 0.76, 95% confidence interval [CI] 0.68 to 0.84). Most of these studies used intensive interventions. Estimates for the medium- and low-intensity subgroups were similar, but confidence intervals were wide. Two studies in which some of the 4487 participants already had smoking experience at baseline did not detect evidence of effect (RR 1.04, 95% CI 0.93 to 1.17). Eight RCTs compared a combined family plus school intervention to a school intervention only. Of the three studies with data, two RCTs with outcomes for 2301 baseline never-smokers detected evidence of an effect (RR 0.85, 95% CI 0.75 to 0.96) and one study with data for 1096 participants not restricted to never-users at baseline also detected a benefit (RR 0.60, 95% CI 0.38 to 0.94). The other five studies with about 18,500 participants did not report data in a format allowing meta-analysis.
One RCT also compared a family intervention to a school 'good behaviour' intervention and did not detect a difference between the two types of programme (RR 1.05, 95% CI 0.80 to 1.38, n = 388). No studies identified any adverse effects of intervention. Authors' conclusions There is moderate-quality evidence to suggest that family-based interventions can have a positive effect on preventing children and adolescents from starting to smoke. There were more studies of high-intensity programmes compared to a control group receiving no intervention than there were for other comparisons. The evidence is therefore strongest for high-intensity programmes used independently of school interventions. Programmes typically addressed family functioning, and were introduced when children were between 11 and 14 years old. Based on this moderate-quality evidence, a family intervention might reduce uptake or experimentation with smoking by between 16 and 32%. However, these findings should be interpreted cautiously because effect estimates could not include data from all studies. Our interpretation is that the common feature of the effective high-intensity interventions was encouraging authoritative parenting (which is usually defined as showing strong interest in and care for the adolescent, often with rule setting). This is different from authoritarian parenting (do as I say) or neglectful or unsupervised parenting.
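The Mantel-Haenszel fixed-effect pooling used in the review can be sketched as follows. The stratum counts below are purely illustrative, not the review's data, and the confidence interval uses the standard Greenland-Robins variance estimator for the log risk ratio:

```python
import math

def mh_pooled_rr(trials):
    """Mantel-Haenszel fixed-effect pooled risk ratio with a 95% CI.
    Each trial is (events_int, n_int, events_ctl, n_ctl)."""
    R = S = P = 0.0
    for a, n1, c, n0 in trials:
        N = n1 + n0
        R += a * n0 / N                 # numerator weight per stratum
        S += c * n1 / N                 # denominator weight per stratum
        P += (a + c) * n1 * n0 / N**2 - a * c / N   # Greenland-Robins term
    rr = R / S
    se = math.sqrt(P / (R * S))         # SE of log(RR_MH)
    log_rr = math.log(rr)
    lo = math.exp(log_rr - 1.96 * se)
    hi = math.exp(log_rr + 1.96 * se)
    return rr, lo, hi

# purely illustrative counts (intervention events, n; control events, n)
trials = [(30, 400, 45, 410), (22, 350, 30, 340), (15, 300, 25, 310)]
rr, lo, hi = mh_pooled_rr(trials)
print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

A pooled RR below 1 with a CI excluding 1, as in the review's main result (RR 0.76, 95% CI 0.68 to 0.84), indicates fewer smoking-uptake events in the intervention arms.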
Abstract:
Structural Health Monitoring (SHM) schemes are useful for proper management of the performance of structures and for preventing their catastrophic failures. Vibration-based SHM schemes have gained popularity during the past two decades, resulting in significant research. It is hence inevitable that future SHM schemes will include robust and automated vibration-based damage assessment techniques (VBDAT) to detect, localize and quantify damage. In this context, the Damage Index (DI) method, which is classified as a non-model or output-based VBDAT, has the ability to automate the damage assessment process without using a computer or numerical model along with actual measurements. Although damage assessment using DI methods has achieved reasonable success for structures made of homogeneous materials such as steel, the same success level has not been reported with respect to Reinforced Concrete (RC) structures. The complexity of flexural cracks is claimed to be the main reason hindering the applicability of existing DI methods in RC structures. Past research also indicates that use of a constant baseline throughout the damage assessment process undermines the potential of the Modal Strain Energy based Damage Index (MSEDI). To address this situation, this paper presents a novel method that has been developed as part of a comprehensive research project carried out at Queensland University of Technology, Brisbane, Australia. This novel process, referred to as the baseline updating method, continuously updates the baseline and systematically tracks both crack formation and propagation, with the ability to automate the damage assessment process using output-only data. The proposed method is illustrated through examples and the results demonstrate the capability of the method to achieve the desired outcomes.
Abstract:
In this paper we excite bound long-range stripe plasmon modes with a highly focused laser beam. We demonstrate highly confined plasmons propagating along a 50 μm long silver stripe, 750 nm wide and 30 nm thick. Two excitation techniques were studied: focusing the laser spot onto the waveguide end and focusing the laser spot onto a silver grating. By comparing the intensity of the out-coupling photons at the end of the stripe for both grating and end excitation, we show that gratings provide a factor-of-two increase in the output intensity, and thus out-coupled plasmons excited by this technique are easier to detect. The authors expect that the outcome of this paper will prove beneficial for the development of passive nano-optical devices based on stripe waveguides, by providing insight into the different excitation techniques available and the advantages of each.
Abstract:
This paper details the design and performance assessment of a unique collision avoidance decision and control strategy for autonomous vision-based See and Avoid systems. The general approach revolves around re-positioning a collision object in the image using image-based visual servoing, without estimating range or time to collision. The decision strategy thus involves determining where to move the collision object, to induce a safe avoidance manoeuvre, and when to cease the avoidance behaviour. These tasks are accomplished by exploiting human navigation models, spiral motion properties, expected image feature uncertainty and the rules of the air. The result is a simple threshold-based system that can be tuned and statistically evaluated by extending performance assessment techniques derived for alerting systems. Our results demonstrate how autonomous vision-only See and Avoid systems may be designed under realistic problem constraints, and then evaluated in a manner consistent with aviation expectations.
Abstract:
We incorporated a new Riemannian fluid registration algorithm into a general MRI analysis method called tensor-based morphometry to map the heritability of brain morphology in MR images from 23 monozygotic and 23 dizygotic twin pairs. All 92 3D scans were fluidly registered to a common template. Voxelwise Jacobian determinants were computed from the deformation fields to assess local volumetric differences across subjects. Heritability maps were computed from the intraclass correlations and their significance was assessed using voxelwise permutation tests. Lobar volume heritability was also studied using the ACE genetic model. The performance of this Riemannian algorithm was compared to a more standard fluid registration algorithm: 3D maps from both registration techniques displayed similar heritability patterns throughout the brain. Power improvements were quantified by comparing the cumulative distribution functions of the p-values generated from both competing methods. The Riemannian algorithm outperformed the standard fluid registration.
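The voxelwise Jacobian-determinant step above generalises as follows: given a displacement field u from the registration, the local volume change is det(I + ∇u), with J > 1 indicating expansion and J < 1 contraction relative to the template. A minimal 2-D finite-difference sketch (illustrative only; the study works on 3-D fields):

```python
def jacobian_determinants(ux, uy, h=1.0):
    """Determinant of I + grad(u) for a 2-D displacement field (ux, uy)
    sampled on a grid, via central differences at interior points."""
    rows, cols = len(ux), len(ux[0])
    J = {}
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            dux_dx = (ux[i][j + 1] - ux[i][j - 1]) / (2 * h)
            dux_dy = (ux[i + 1][j] - ux[i - 1][j]) / (2 * h)
            duy_dx = (uy[i][j + 1] - uy[i][j - 1]) / (2 * h)
            duy_dy = (uy[i + 1][j] - uy[i - 1][j]) / (2 * h)
            J[(i, j)] = (1 + dux_dx) * (1 + duy_dy) - dux_dy * duy_dx
    return J

# a uniform 10% expansion, u = (0.1*x, 0.1*y), should give J = 1.1 * 1.1 = 1.21
n = 5
ux = [[0.1 * j for j in range(n)] for i in range(n)]
uy = [[0.1 * i for j in range(n)] for i in range(n)]
J = jacobian_determinants(ux, uy)
print(round(J[(2, 2)], 2))  # 1.21
```

Per-voxel J values from each subject's registration are what feed the intraclass-correlation heritability maps described in the abstract.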
Abstract:
We report the synthesis of new protic ionic liquids (PILs) based on aniline derivatives and the use of high-throughput (HT) techniques to screen possible candidates. In this work, a simple HT method was applied to rapidly screen different aniline derivatives against different acids in order to identify possible combinations that produce PILs. This was followed by repeating the HT process on a Chemspeed robotic synthesis platform for more accurate results. One of the successful combinations was then chosen to be synthesised at full scale for further analysis. The new PILs are of interest to the fields of ionic liquids, energy storage and, especially, conducting polymers, as they serve as solvents, electrolytes and monomers at the same time for possible electropolymerisation (i.e. a self-contained polymer precursor).
Abstract:
The sheep (Ovis aries) is favored by many musculoskeletal tissue engineering groups as a large animal model because of its docile temperament and ease of husbandry. The size and weight of sheep are comparable to humans, which allows for the use of implants and fixation devices used in human clinical practice. The construction of a complementary DNA (cDNA) library can capture the expression of genes in both a tissue- and time-specific manner. cDNA libraries have been a consistent source of gene discovery ever since the technology became commonplace more than three decades ago. Here, we describe the construction of a cDNA library using cells derived from sheep bones based on the pBluescript cDNA kit. Thirty clones were picked at random and sequenced. This led to the identification of a novel gene, C12orf29, which our initial experiments indicate is involved in skeletal biology. We also describe a polymerase chain reaction-based cDNA clone isolation method that allows the isolation of genes of interest from a cDNA library pool. The techniques outlined here can be applied in-house by smaller tissue engineering groups to generate tools for biomolecular research for large preclinical animal studies, and they highlight the power of standard cDNA library protocols to uncover novel genes.
The new Vancouver Chest Pain Rule using troponin as the only biomarker: An external validation study
Abstract:
Objectives To externally evaluate the accuracy of the new Vancouver Chest Pain Rule and to assess its diagnostic accuracy using either sensitive or highly sensitive troponin assays. Methods Prospectively collected data from 2 emergency departments (EDs) in Australia and New Zealand were analysed. Based on the new Vancouver Chest Pain Rule, low-risk patients were identified using electrocardiogram results, cardiac history, nitrate use, age, pain characteristics and troponin results at 2 hours after presentation. The primary outcome was 30-day diagnosis of acute coronary syndrome (ACS), including acute myocardial infarction and unstable angina. Sensitivity, specificity, positive predictive value and negative predictive value were calculated to assess the accuracy of the new Vancouver Chest Pain Rule using either sensitive or highly sensitive troponin assay results. Results Of the 1635 patients, 20.4% had an ACS diagnosis at 30 days. Using the highly sensitive troponin assay, 212 (13.0%) patients were eligible for early discharge, with 3 patients (1.4%) diagnosed with ACS. Sensitivity was 99.1% (95% CI 97.4 to 99.7), specificity was 16.1% (95% CI 14.2 to 18.2), the positive predictive value was 23.3% (95% CI 21.1 to 25.5) and the negative predictive value was 98.6% (95% CI 95.9 to 99.5). The diagnostic accuracy of the rule was similar using the sensitive troponin assay. Conclusions The new Vancouver Chest Pain Rule should be used for the identification of low-risk patients presenting to EDs with symptoms of possible ACS, and will reduce the proportion of patients requiring lengthy assessment; however, we recommend further outpatient investigation for coronary artery disease in patients identified as low risk.
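The four accuracy measures follow directly from a 2x2 table of rule result against 30-day ACS diagnosis. Below, the cell counts are reconstructed from the figures reported in the abstract (1635 patients, 20.4% ≈ 334 with ACS, 212 ruled low risk of whom 3 had ACS), so they are an illustration rather than the study's source data:

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV (as percentages) from 2x2 counts."""
    return {
        "sensitivity": 100 * tp / (tp + fn),   # ACS patients flagged not-low-risk
        "specificity": 100 * tn / (tn + fp),   # non-ACS patients ruled low risk
        "ppv": 100 * tp / (tp + fp),           # flagged patients who had ACS
        "npv": 100 * tn / (tn + fn),           # low-risk patients free of ACS
    }

# reconstructed counts: 3 false negatives among the 212 discharged,
# 331 true positives among the 334 ACS cases, 209 true negatives, 1092 false positives
stats = diagnostic_accuracy(tp=331, fp=1092, fn=3, tn=209)
for name, value in stats.items():
    print(f"{name}: {value:.1f}%")
```

Running this reproduces the abstract's point estimates (99.1%, 16.1%, 23.3%, 98.6%), which supports the reconstruction but remains a back-calculation.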
Abstract:
Arterial mechanical properties may be a potential variable for risk stratification. Large-artery and central arterial compliance have been shown not only to correlate well with overall cardiovascular outcome in large epidemiological studies [1, 2] but also to correlate with coronary atherosclerotic burden as measured by conventional angiography [3]. Until recently, real-time B-mode ultrasound combined with simultaneous blood pressure measurements has been used to assess large-artery compliance [4]. These techniques have excellent temporal resolution but are unable to provide adequate spatial resolution to determine changes in vessel area as opposed to diameter, and they assume that the vessel is perfectly round. Attempts to use MR imaging to measure large-artery compliance have been published previously [5]. However, they have not utilised simultaneous blood pressure measurements during sequence acquisition. We report a technique using regular and simultaneous blood pressure measurement during two-dimensional phase-contrast magnetic resonance imaging (2DPC-MRI) to determine local carotid compliance.
Abstract:
During the past few decades, developing efficient methods to solve dynamic facility layout problems has received significant attention from practitioners and researchers. More specifically, meta-heuristic algorithms, especially the genetic algorithm, have proven increasingly helpful for generating sub-optimal solutions to large-scale dynamic facility layout problems. Nevertheless, the uncertainty of the manufacturing factors, in addition to the scale of the layout problem, calls for a mixed genetic algorithm-robust approach that can provide a single layout design valid across all periods. The present research aims to devise a customized permutation-based robust genetic algorithm for dynamic manufacturing environments that generates a unique robust layout for all the manufacturing periods. The numerical outcomes of the proposed robust genetic algorithm indicate significant cost improvements compared to conventional genetic algorithm methods and a selective number of other heuristic and meta-heuristic techniques.
Abstract:
This paper presents a validation study on the application of a novel interslice interpolation technique for musculoskeletal structure segmentation of articulated joints and muscles in human magnetic resonance imaging data. The interpolation technique is based on morphological shape-based interpolation combined with intensity-based voxel classification. Shape-based interpolation in the absence of the original intensity image has been investigated intensively. However, in some applications of medical image analysis, the intensity image of the slice to be interpolated is available. For example, when manual segmentation is conducted on selected slices, the segmentation of the unselected slices can be obtained by interpolation. We proposed a two-step interpolation method that utilizes both the shape information in the manual segmentation and the local intensity information in the image. The method was tested on segmentations of knee, hip and shoulder joint bones and hamstring muscles. The results were compared with two existing interpolation methods. Based on the calculated Dice similarity coefficient and normalized error rate, the proposed method outperformed the other two methods.
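The Dice similarity coefficient used for validation is straightforward to compute from two binary masks, DSC = 2|A∩B| / (|A| + |B|); the tiny masks below are made-up toy data, not the study's segmentations:

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks given as flat
    sequences of 0/1 voxels: DSC = 2 * |A intersect B| / (|A| + |B|)."""
    inter = sum(x and y for x, y in zip(a, b))   # overlapping foreground voxels
    size = sum(a) + sum(b)                       # total foreground in both masks
    return 2 * inter / size if size else 1.0     # two empty masks agree perfectly

manual = [0, 1, 1, 1, 0, 0, 1, 1]   # toy reference segmentation
interp = [0, 1, 1, 0, 0, 1, 1, 1]   # toy interpolated segmentation
print(dice(manual, interp))  # 0.8
```

A DSC of 1 means perfect overlap; values above roughly 0.7 are conventionally read as good agreement in segmentation validation.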
Abstract:
There is now a widespread recognition of the importance of mental imagery in a range of clinical disorders (1). This provides the potential for a transdiagnostic route to integrate some aspects of these disorders and their treatment within a common framework. This opinion piece argues that we need to understand why imagery is such a central and recurring feature, if we are to progress theories of the origin and maintenance of disorders. This will aid us in identifying therapeutic techniques that are not simply targeting imagery as a symptom, but as a manifestation of an underlying problem. As papers in this issue highlight, imagery is a central feature across many clinical disorders, but has been ascribed varying roles. For example, the involuntary occurrence of traumatic memories is a diagnostic criterion for PTSD (2), and it has been suggested that multisensory imagery of traumatic events normally serves a functional role in allowing the individual to reappraise the situation (3), but that this re-appraisal is disabled by extreme affective responses. In contrast to the disabling flashbacks associated with PTSD, depressed adults who experience suicidal ideation often report “flash forward” imagery related to suicidal acts (4), motivating them to self-harm. Socially anxious individuals who engage in visual imagery about giving a talk in public become more anxious and make more negative predictions about future performance than others who engage in more abstract, semantic processing of the past event (5). People with Obsessive Compulsive Disorder (OCD) frequently report imagery of past adverse events, and imagery seems to be associated with severity (6). The content of intrusive imagery has been related to psychotic symptoms (7), including visual images of the catastrophic fears associated with paranoia and persecution. 
Imagery has been argued (8) to play a role in the maintenance of psychosis through negative appraisals of imagined voices, misattribution of sensations to external sources, by the induction of negative mood states that trigger voices, and through maintenance of negative schemas. In addiction and substance dependence, Elaborated Intrusion (EI) Theory (9, 10) emphasizes the causal role that imagery plays in substance use, through its role in motivating an individual to pursue goals directed toward achieving the pleasurable outcomes associated with substance use...
Abstract:
Sampling design is critical to the quality of quantitative research, yet it does not always receive appropriate attention in nursing research. The current article details how balancing probability techniques with practical considerations produced a representative sample of Australian nursing homes (NHs). Budgetary, logistical, and statistical constraints were managed by excluding some NHs (e.g., those too difficult to access) from the sampling frame; a stratified, random sampling methodology yielded a final sample of 53 NHs from a population of 2,774. In testing the adequacy of representation of the study population, chi-square tests for goodness of fit generated nonsignificant results for distribution by distance from major city and type of organization. A significant result for state/territory was expected and was easily corrected for by the application of weights. The current article provides recommendations for conducting high-quality, probability-based samples and stresses the importance of testing the representativeness of achieved samples.
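The goodness-of-fit check described above amounts to comparing the sample's category counts with the proportions in the sampling frame. The sketch below uses invented counts (not the study's data) and, to stay dependency-free, compares the statistic to a hard-coded critical value rather than computing a p-value:

```python
def chi2_goodness_of_fit(observed, population_props):
    """Chi-square goodness-of-fit statistic: sum of (O - E)^2 / E, where the
    expected counts E come from the population proportions scaled to n."""
    n = sum(observed)
    expected = [p * n for p in population_props]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# hypothetical sample of 53 NHs by remoteness category vs frame proportions
observed = [30, 15, 8]            # major city / regional / remote
population = [0.55, 0.30, 0.15]   # proportions in the full sampling frame
stat = chi2_goodness_of_fit(observed, population)

# critical value for df = 2 at alpha = 0.05 is 5.991; a nonsignificant result,
# as in the article, supports representativeness on this stratifier
print(stat, "representative" if stat < 5.991 else "over/under-represented")
```

With SciPy available, `scipy.stats.chisquare(observed, f_exp=expected)` would return the statistic together with its p-value.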