547 results for least common subgraph algorithm
Abstract:
Phenols are well-known noxious compounds often found in various water sources. A novel analytical method has been developed based on the properties of hemin–graphene hybrid nanosheets (H–GNs). These nanosheets, synthesized by a wet-chemical method, exhibit peroxidase-like activity, and in the presence of H2O2 they efficiently catalyse the oxidation of the substrate, 4-aminoantipyrine (4-AP), and the phenols. The products of this oxidation reaction are colored quinone-imines. Importantly, these products enabled the differentiation of three common phenols (pyrocatechol, resorcinol and hydroquinone) using a novel spectroscopic method developed for the simultaneous determination of all three analytes. This method produced linear calibrations for pyrocatechol (0.4–4.0 mg L−1), resorcinol (0.2–2.0 mg L−1) and hydroquinone (0.8–8.0 mg L−1). In addition, kinetic and spectral data obtained from the formation of the colored quinone-imines were used to establish multivariate calibrations for the prediction of the three phenols in various kinds of water; partial least squares (PLS), principal component regression (PCR) and artificial neural network (ANN) models were compared, and the PLS model performed best.
Abstract:
There is increasing interest in the use of UAVs for environmental research and for tracking bush fire plumes, volcanic plumes or pollutant sources. The aim of this paper is to describe the theory and results of a bio-inspired plume tracking algorithm. Memory-based and gradient-based approaches were developed and compared, and a method for generating sparse plumes was also developed. Results demonstrate the ability of the algorithms to track plumes in both 2D and 3D.
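A gradient-based tracker of the kind compared above can be sketched as finite-difference gradient ascent on a concentration field. The Gaussian plume model, step size and source location here are illustrative assumptions, not the paper's formulation:

```python
# Toy gradient-based plume tracking: the agent samples the field around
# its position, estimates the gradient by central differences, and takes
# a fixed-length step uphill toward higher concentration.
import numpy as np

def concentration(p, source=np.array([5.0, 3.0])):
    """Synthetic smooth plume: strongest at the source, decaying with distance."""
    return np.exp(-0.2 * np.sum((p - source) ** 2))

def track(start, steps=200, step=0.1, h=0.05):
    p = np.asarray(start, dtype=float)
    for _ in range(steps):
        # Central-difference gradient estimate from four nearby samples.
        gx = (concentration(p + [h, 0.0]) - concentration(p - [h, 0.0])) / (2 * h)
        gy = (concentration(p + [0.0, h]) - concentration(p - [0.0, h])) / (2 * h)
        g = np.array([gx, gy])
        n = np.linalg.norm(g)
        if n > 1e-12:
            p += step * g / n   # normalized step in the uphill direction
    return p

final = track([0.0, 0.0])   # ends near the source at (5, 3)
```

A memory-based variant would additionally store past samples to cope with the sparse, intermittent plumes the paper describes; this sketch assumes a smooth field.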
Abstract:
This paper examines the impact of the chosen bottle-point method when conducting ion exchange equilibrium experiments. As an illustration, potassium ion exchange with a strong acid cation resin was investigated owing to its relevance to the treatment of various industrial effluents and groundwater. The “constant mass” bottle-point method was shown to be problematic in that the equilibrium isotherm profiles differed depending upon the resin mass used. Indeed, application of common equilibrium isotherm models revealed that the optimal fit could be with either the Freundlich or Temkin equations, depending upon the conditions employed. It could be inferred that the resin surface was heterogeneous in character, but precise conclusions regarding the variation in the heat of sorption were not possible. Estimation of the maximum potassium loading was also inconsistent when employing the “constant mass” method. The “constant concentration” bottle-point method showed that the Freundlich model was a good representation of the exchange process, and the recorded isotherms were relatively consistent compared with the “constant mass” approach. All of the equilibrium isotherm data were unified by use of the Langmuir–Vageler expression, and the maximum loading of potassium ions was predicted to be at least 116.5 g/kg resin.
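Fitting the Freundlich isotherm mentioned above (q = K·Ce^(1/n)) is a routine nonlinear regression. A minimal SciPy sketch, using invented parameters and synthetic data rather than the study's measurements:

```python
# Fit the Freundlich isotherm to synthetic equilibrium data.
import numpy as np
from scipy.optimize import curve_fit

def freundlich(ce, K, n):
    """Freundlich isotherm: loading q as a function of equilibrium conc. Ce."""
    return K * ce ** (1.0 / n)

# Synthetic "constant concentration" data generated from known parameters
# (K = 40, n = 2.5) with a little measurement noise.
ce = np.linspace(0.5, 50.0, 12)     # equilibrium concentration, mg/L
q = freundlich(ce, 40.0, 2.5) + np.random.default_rng(1).normal(0, 0.5, ce.size)

(K_fit, n_fit), _ = curve_fit(freundlich, ce, q, p0=(10.0, 1.0))
print(round(K_fit, 1), round(n_fit, 2))   # recovers roughly K = 40, n = 2.5
```

Comparing such fits across alternative models (Temkin, Langmuir) by residual error is the usual way to make the model-selection argument the abstract describes.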
Abstract:
Heavy metals that build up on urban impervious surfaces such as roads are transported to urban water resources through stormwater runoff. It is therefore essential to understand the predominant pathways by which heavy metals build up on roads in order to develop suitable pollution mitigation strategies to protect the receiving water environment. The study presented in this paper investigated the sources and transport pathways of manganese, lead, copper, zinc and chromium, heavy metals commonly present in urban road build-up. It was found that manganese and lead are contributed to road build-up primarily by direct deposition due to the re-suspension of roadside soil by wind turbulence, while traffic is the predominant source of copper, zinc and chromium to the atmosphere and road build-up. Atmospheric deposition is also the major transport pathway for copper and zinc, while for chromium, direct deposition from traffic sources is the predominant pathway.
Abstract:
A staged crime scene involves deliberate alteration of evidence by the offender to simulate events that did not occur for the purpose of misleading authorities (Geberth, 2006; Turvey, 2000). This study examined 115 staged homicides from the USA to determine common elements; victim and perpetrator characteristics; and specific features of different types of staged scenes. General characteristics include: multiple victims and offenders; a previous relationship between the parties involved; and victims discovered in their own home, often by the offender. Staged scenes were separated by type, with staged burglaries, suicides, accidents, and car accidents examined in more detail. Each type of scene presents differently, with separate indicators and common features. Features of staged burglaries were: no staged points of entry/exit; non-valuables taken; scene ransacking; offender self-injury; and offenders bringing weapons to the scene. Features of staged suicides included: weapon arrangement and simulating self-injury to the victim; rearranging the body; and removing valuables. Elements of staged accidents included arranging the implement/weapon and repositioning the deceased, while staged car accidents involved: transporting the body to the vehicle and arranging both; mutilation after death; attempts to secure an alibi; and clean-up at the primary crime scene. The results suggest few staging behaviors are used, despite the credibility they may have offered the façade. This is the first peer-reviewed, published study to examine the specific features of these scenes, and it draws on the largest sample studied to date.
Abstract:
An experimental study has been performed to investigate the ignition delay of a modern heavy-duty common-rail diesel engine run with fumigated ethanol substitutions of up to 40% on an energy basis. The ignition delay was determined through statistical modelling in a Bayesian framework; this framework allows accurate determination of the start of combustion from single consecutive cycles and does not require any differentiation of the in-cylinder pressure signal. At full load, the ignition delay decreased with increasing ethanol substitution, and evidence of combustion at high ethanol substitutions prior to diesel injection was shown both experimentally and by modelling. At half load, in contrast, increasing ethanol substitution increased the ignition delay. A threshold absolute air-to-fuel ratio (mole basis) of above ~110 for consistent operation was determined from the inter-cycle variability of the ignition delay, a result that agrees well with previous research on other in-cylinder parameters and further highlights the correlation between the air-to-fuel ratio and inter-cycle variability. Numerical modelling was also performed to investigate the sensitivity of ethanol combustion. It was shown that ethanol combustion is sensitive to the initial air temperature around the feasible operating conditions of the engine. Moreover, a negative temperature coefficient region of approximately 900–1050 K (the approximate temperature at fuel injection) was shown for n-heptane and n-heptane/ethanol blends in the numerical modelling. A consequence of this is that the dominant effect influencing the ignition delay under increasing ethanol substitution may be a change in the chemical reactions rather than in-cylinder temperature. Further investigation revealed that the chemical reactions at low ethanol substitutions differ from those at high (> 20%) ethanol substitutions.
Abstract:
This paper presents an improved field weakening algorithm for synchronous reluctance motor (RSM) drives. The proposed algorithm is robust to variations in the machine d- and q-axis inductances, and the transition between the maximum torque per ampere (MTPA), current- and voltage-limit, and maximum torque per flux (MTPF) trajectories is smooth. The proposed technique is combined with direct torque control to attain a high-performance drive in the field weakening region. Simulation and experimental results verify the effectiveness of the proposed approach.
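The MTPA trajectory referenced above can be illustrated numerically: with reluctance torque T = 1.5·p·(Ld − Lq)·id·iq and a fixed stator current magnitude, torque peaks at a 45° current angle (id = iq), neglecting saturation. The machine parameters below are assumed for illustration, not taken from the paper:

```python
# Sweep the current angle at fixed current magnitude and locate the
# torque maximum, i.e. the MTPA operating point of a reluctance machine.
import numpy as np

p, Ld, Lq, Is = 2, 0.30, 0.05, 10.0   # pole pairs, d/q inductances (H), current (A)

angles = np.radians(np.linspace(1.0, 89.0, 881))   # current angle sweep
i_d = Is * np.cos(angles)
i_q = Is * np.sin(angles)
torque = 1.5 * p * (Ld - Lq) * i_d * i_q           # reluctance torque (N*m)

best = np.degrees(angles[np.argmax(torque)])
print(round(best, 1))
```

In a real drive the MTPA angle shifts with magnetic saturation, which is why robustness to inductance variation matters in the field weakening region.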
Abstract:
Background: Malaria rapid diagnostic tests (RDTs) are appropriate for case management, but persistent antigenaemia is a concern for HRP2-detecting RDTs in endemic areas. It has been suggested that pan-pLDH test bands on combination RDTs could be used to distinguish persistent antigenaemia from active Plasmodium falciparum infection; however, this assumes that all active infections produce positive results on both bands of the RDT, an assertion that has not been demonstrated. Methods: In this study, data generated during the WHO-FIND product testing programme for malaria RDTs were reviewed to investigate the reactivity of individual test bands against P. falciparum in 18 combination RDTs. Each product was tested against multiple wild-type P. falciparum-only samples. Antigen levels were measured by quantitative ELISA for HRP2, pLDH and aldolase. Results: When tested against P. falciparum samples at 200 parasites/μL, 92% of RDTs were positive: 57% of these on both the P. falciparum and pan bands, while 43% were positive on the P. falciparum band only. There was a relationship between antigen concentration and band positivity; ≥4 ng/mL of HRP2 produced positive results in more than 95% of P. falciparum bands, while ≥45 ng/mL of pLDH was required for at least 90% of pan bands to be positive. Conclusions: In active P. falciparum infections it is common for combination RDTs to return a positive HRP2 band combined with a negative pan-pLDH band, and when both bands are positive the pan band is often faint. Active infections could therefore be missed if the presence of an HRP2 band in the absence of a pan band is interpreted as being caused solely by persistent antigenaemia.
Abstract:
Water-to-air methane emissions from freshwater reservoirs can be dominated by sediment bubbling (ebullitive) events. Previous work to quantify methane bubbling from a number of Australian sub-tropical reservoirs has shown that this pathway can contribute as much as 95% of total emissions. These bubbling events are controlled by a variety of factors including water depth, surface and internal waves, wind seiching, atmospheric pressure changes and water level changes. Key to quantifying the magnitude of this emission pathway is estimating both the bubbling rate and the areal extent of bubbling. Neither is constant, and both require persistent monitoring over extended time periods before reliable estimates can be generated. In this paper we present a novel system for persistent monitoring of both bubbling rate and areal extent using multiple robotic surface chambers and adaptive sampling ("grazing") algorithms to automate the quantification process. Individual chambers are self-propelled and self-guided and communicate with each other without the need for supervised control. They can maintain station at a sampling site for a desired incubation period and continuously monitor, record and report fluxes during the incubation. To exploit the methane sensor's detection capabilities, a chamber can be automatically lowered to decrease the head-space and increase the concentration. The grazing algorithms assign a hierarchical order to chambers within a preselected zone. Chambers then converge on the individual recording the highest 15-minute bubbling rate. Individuals maintain a specified distance apart during each sampling period before all individuals are required to move to new locations chosen by a sampling algorithm (systematic or adaptive) that exploits prior measurements. This system has been field tested on a large-scale subtropical reservoir, Little Nerang Dam, over monthly timescales.
Using this technique, localised bubbling zones on the water storage were found to produce over 50,000 mg m-2 d-1, and the areal extent ranged from 1.8 to 7% of the total reservoir area. The drivers behind these changes, as well as lessons learnt from the system implementation, are presented. The system exploits relatively cheap materials, sensing and computing, and can be applied to a wide variety of aquatic and terrestrial systems.
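The convergence step of the grazing algorithm might look like the following sketch, in which chambers move toward the unit reporting the highest bubbling rate while respecting a minimum separation. Positions, rates and distances are invented, and the real system's station-keeping and incubation logic is omitted:

```python
# One "grazing" update: every chamber except the current leader steps
# toward the chamber that recorded the highest 15-minute bubbling rate,
# stopping short of a minimum separation distance.
import numpy as np

def graze_step(positions, rates, step=5.0, min_sep=10.0):
    positions = np.asarray(positions, dtype=float)
    leader = int(np.argmax(rates))            # hierarchical head of the zone
    for i in range(len(positions)):
        if i == leader:
            continue
        d = positions[leader] - positions[i]
        dist = np.linalg.norm(d)
        if dist > min_sep:                    # keep the specified spacing
            positions[i] += step * d / dist   # unit step toward the leader
    return positions

# Three chambers (x, y in metres) with the middle one seeing most bubbling.
pos = graze_step([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]],
                 rates=[50.0, 900.0, 120.0])
print(pos)
```

A full implementation would repeat this step between incubation periods and periodically relocate all chambers according to the systematic or adaptive sampling schedule described above.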
Abstract:
Smart Card Automated Fare Collection (AFC) data have been extensively exploited to understand passenger behavior, passenger segments and trip purposes, and to improve transit planning through spatial travel pattern analysis. The literature has evolved from simple to more sophisticated methods: from aggregated to individual travel pattern analysis, and from stop-to-stop to flexible stop aggregation. However, high computational complexity has limited these methods in practical applications. This paper proposes a new algorithm, the Weighted Stop Density Based Scanning Algorithm with Noise (WS-DBSCAN), based on the classical Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, to detect and update daily changes in travel patterns. WS-DBSCAN reduces the classical DBSCAN's quadratic computational complexity to sub-quadratic. A numerical experiment using real AFC data from South East Queensland, Australia, shows that the algorithm requires only 0.45% of the computation time of classical DBSCAN while producing the same clustering results.
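The weighted-density idea behind WS-DBSCAN can be approximated with scikit-learn's DBSCAN, which accepts per-point sample weights so that a heavily used stop can satisfy `min_samples` on its own. The stop coordinates and weights below are invented, and the paper's sub-quadratic incremental update is not reproduced:

```python
# Weighted DBSCAN over transit stops: each stop's weight stands in for
# its usage count, so dense travel activity, not just dense geometry,
# drives cluster formation.
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical stop locations (x, y) and boarding counts as weights.
stops = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [9.0, 9.0]])
weights = np.array([30, 5, 40, 1])   # heavily used stops carry more weight

labels = DBSCAN(eps=0.5, min_samples=20).fit(
    stops, sample_weight=weights).labels_
print(labels)   # [0, 0, 1, -1]: two weighted clusters, one noise stop
```

Here the first two stops merge into one cluster because their combined weight exceeds `min_samples`, the third is a cluster by itself, and the lightly used fourth stop is labelled noise (-1).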
Abstract:
Background: There is evidence that family and friends influence children's decisions to smoke. Objectives: To assess the effectiveness of interventions to help families stop children starting smoking. Search methods: We searched 14 electronic bibliographic databases, including the Cochrane Tobacco Addiction Group specialized register, MEDLINE, EMBASE, PsycINFO and CINAHL, plus unpublished material and key articles' reference lists. We performed free-text internet searches and targeted searches of appropriate websites, and hand-searched key journals not available electronically. We consulted authors and experts in the field. The most recent search was 3 April 2014. There were no date or language limitations. Selection criteria: Randomised controlled trials (RCTs) of interventions with children (aged 5-12) or adolescents (aged 13-18) and families to deter tobacco use. The primary outcome was the effect of the intervention on the smoking status of children who reported no use of tobacco at baseline. Included trials had to report outcomes measured at least six months from the start of the intervention. Data collection and analysis: We reviewed all potentially relevant citations and retrieved the full text to determine whether the study was an RCT and matched our inclusion criteria. Two authors independently extracted study data for each RCT and assessed them for risk of bias. We pooled risk ratios using a Mantel-Haenszel fixed-effect model. Main results: Twenty-seven RCTs were included. The interventions were very heterogeneous in the components of the family intervention, the other risk behaviours targeted alongside tobacco, the age of children at baseline and the length of follow-up. Two interventions were tested by two RCTs each, one was tested by three RCTs, and the remaining 20 distinct interventions were each tested by only one RCT. Twenty-three interventions were tested in the USA, two in Europe, one in Australia and one in India.
The control conditions fell into two main groups: no intervention or usual care, or school-based interventions provided to all participants. These two groups of studies were considered separately. Most studies had a judgement of 'unclear' for at least one risk of bias criterion, so the quality of evidence was downgraded to moderate. Although the studies were heterogeneous, there was little evidence of statistical heterogeneity in the results. We were unable to extract data from all studies in a format that allowed inclusion in a meta-analysis. There was moderate quality evidence that family-based interventions had a positive impact on preventing smoking when compared to a no-intervention control. Nine studies (4810 participants) reporting smoking uptake amongst baseline non-smokers could be pooled, but eight studies with about 5000 participants could not be pooled because of insufficient data. The pooled estimate detected a significant reduction in smoking behaviour in the intervention arms (risk ratio [RR] 0.76, 95% confidence interval [CI] 0.68 to 0.84). Most of these studies used intensive interventions. Estimates for the medium- and low-intensity subgroups were similar, but the confidence intervals were wide. Two studies in which some of the 4487 participants already had smoking experience at baseline did not detect evidence of an effect (RR 1.04, 95% CI 0.93 to 1.17). Eight RCTs compared a combined family-plus-school intervention to a school intervention only. Of the three studies with data, two RCTs with outcomes for 2301 baseline never-smokers detected evidence of an effect (RR 0.85, 95% CI 0.75 to 0.96), and one study with data for 1096 participants not restricted to never-users at baseline also detected a benefit (RR 0.60, 95% CI 0.38 to 0.94). The other five studies, with about 18,500 participants, did not report data in a format allowing meta-analysis.
One RCT also compared a family intervention to a school 'good behaviour' intervention and did not detect a difference between the two types of programme (RR 1.05, 95% CI 0.80 to 1.38, n = 388). No studies identified any adverse effects of intervention. Authors' conclusions: There is moderate quality evidence to suggest that family-based interventions can have a positive effect on preventing children and adolescents from starting to smoke. There were more studies of high-intensity programmes compared to a control group receiving no intervention than there were for other comparisons, so the evidence is strongest for high-intensity programmes used independently of school interventions. Programmes typically addressed family functioning and were introduced when children were between 11 and 14 years old. Based on this moderate quality evidence, a family intervention might reduce uptake or experimentation with smoking by between 16 and 32%. However, these findings should be interpreted cautiously because the effect estimates could not include data from all studies. Our interpretation is that the common feature of the effective high-intensity interventions was the encouragement of authoritative parenting (usually defined as showing strong interest in and care for the adolescent, often with rule setting). This is different from authoritarian parenting ('do as I say') and from neglectful or unsupervised parenting.
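The Mantel-Haenszel fixed-effect pooled risk ratio used in the review follows directly from its standard formula. The two trial rows below are invented numbers for illustration, not data from the included studies:

```python
# Mantel-Haenszel pooled risk ratio:
#   RR_MH = sum(a_i * n0_i / N_i) / sum(c_i * n1_i / N_i)
# where a = events in the intervention arm (size n1), c = events in the
# control arm (size n0), and N = n1 + n0 for each study.
def mh_risk_ratio(studies):
    """studies: list of (events_int, n_int, events_ctl, n_ctl) tuples."""
    num = den = 0.0
    for a, n1, c, n0 in studies:
        N = n1 + n0
        num += a * n0 / N      # weighted intervention-arm risk
        den += c * n1 / N      # weighted control-arm risk
    return num / den

# Hypothetical trials: smoking uptake events among baseline never-smokers.
rr = mh_risk_ratio([(30, 400, 45, 410), (12, 250, 20, 240)])
print(round(rr, 2))   # 0.65: fewer uptakes in the intervention arms
```

A pooled RR below 1, as in the review's 0.76 estimate, indicates reduced smoking uptake in the intervention arms relative to control.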
Abstract:
Role congruity theory predicts prejudice towards women who meet the agentic requirements of the leader role. In line with recent findings indicating greater acceptance of agentic behaviour from women, we find evidence for a more subtle form of prejudice towards women who fail to display agency in leader roles. Using a classic methodology, the agency of male and female leaders was manipulated using assertive or tentative speech, presented through written (Study 1, N = 167) or verbal (Study 2, N = 66) communications. Consistent with predictions, assertive women were as likeable and influential as assertive men, while being tentative in leadership reduced the likeability and influence of women, but not of men. Although approval of agentic behaviour from women in leadership reflects progress, evidence that women are quickly singled out for disapproval if they fail to show agency is important for understanding how they continue to be at a distinct disadvantage to men in leader roles.
Abstract:
A common debate over teacher quality and the quality of teacher education in Australia has resurfaced. This time it was started by a leaked draft report into teacher education from the Australian Institute for Teaching and School Leadership (AITSL), which the public can’t expect to see for at least another month. Media reports have all focused on one aspect of the wide-ranging report: teaching students' ATARs.
Abstract:
Terra Preta is a site-specific bio-energy project which aims to create a synergy between the public and the pre-existing engineered landscape of Freshkills Park on Staten Island, New York. The project challenges traditional paradigms of public space by proposing a dynamic and ever-changing landscape. The initiative allows the public to self-organise the landscape and to engage in 'algorithmic processes' of growth, harvest and space creation.