905 results for criterion variables
Abstract:
Modeling the distributions of species, especially of invasive species in non-native ranges, involves multiple challenges. Here, we developed some novel approaches to species distribution modeling aimed at reducing the influence of such challenges and improving the realism of projections. We estimated species-environment relationships with four modeling methods run with multiple scenarios of (1) sources of occurrences and geographically isolated background ranges for absences, (2) approaches to drawing background (absence) points, and (3) alternate sets of predictor variables. We further tested various quantitative metrics of model evaluation against biological insight. Model projections were very sensitive to the choice of training dataset. Model accuracy was much improved by using a global dataset for model training, rather than restricting data input to the species’ native range. AUC score was a poor metric for model evaluation and, if used alone, was not a useful criterion for assessing model performance. Projections away from the sampled space (i.e. into areas of potential future invasion) were very different depending on the modeling methods used, raising questions about the reliability of ensemble projections. Generalized linear models gave very unrealistic projections far away from the training region. Models that efficiently fit the dominant pattern, but exclude highly local patterns in the dataset and capture interactions as they appear in the data (e.g. boosted regression trees), improved generalization of the models. Biological knowledge of the species and its distribution was important in refining choices about the best set of projections. A post-hoc test conducted on a new Parthenium dataset from Nepal validated the excellent predictive performance of our “best” model. We showed that vast stretches of currently uninvaded geographic areas on multiple continents harbor highly suitable habitats for Parthenium hysterophorus L. (Asteraceae; parthenium). However, discrepancies between model predictions and parthenium invasion in Australia indicate successful management of this globally significant weed.
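As a rough illustration of one of the methods named above, the following sketch fits a boosted-tree model to presence/background points and scores it with AUC; the data, predictors, and settings are invented placeholders, not the study's datasets or tuned models, and (as the abstract cautions) AUC would be read alongside biological plausibility rather than on its own:

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: rows are locations, columns are environmental predictors;
# y = 1 for presence records, y = 0 for background (pseudo-absence) points.
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] * X[:, 2] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Boosted regression trees capture interactions as they appear in the data
# without their functional form being specified in advance.
brt = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05, max_depth=3)
brt.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, brt.predict_proba(X_te)[:, 1])  # held-out AUC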
Abstract:
Bactrocera frauenfeldi (Schiner), the ‘mango fruit fly’, is a horticultural pest originating from the Papua New Guinea region. It was first detected in Australia on Cape York Peninsula in north Queensland in 1974 and had spread to Cairns by 1994 and Townsville by 1997. Bactrocera frauenfeldi has not been recorded further south since then despite its invasive potential, an absence of any controls and an abundance of hosts in southern areas. Analysis of cue-lure trapping data from 1997 to 2012 in relation to environmental variables shows that the distribution of B. frauenfeldi in Queensland correlates to locations with a minimum temperature for the coldest month >13.2°C, annual temperature range <19.3°C, mean temperature of the driest quarter >20.2°C, precipitation of the wettest month >268 mm, precipitation of the wettest quarter >697 mm, temperature seasonality <30.9°C (i.e. lower temperature variability) and areas with higher human population per square kilometre. Annual temperature range was the most important variable in predicting this species' distribution. Predictive distribution maps based on an uncorrelated subset of these variables reasonably reflected the current distribution of this species in northern Australia and predicted other areas in the world potentially at risk from invasion by this species. This analysis shows that the distribution of B. frauenfeldi in Australia is correlated to certain environmental variables that have most likely limited this species' spread southward in Queensland. This is of importance to Australian horticulture in demonstrating that B. frauenfeldi is unlikely to establish in horticultural production areas further south than Townsville.
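The reported thresholds can be read as a simple climatic rule set. Below is a minimal, illustrative sketch that applies them to one location's bioclimatic values; the dictionary keys are assumed names, and human population density, the remaining correlate, is left out:

def climatically_suitable(bio):
    # Threshold values taken from the trapping-data analysis reported above;
    # temperatures in degrees C, precipitation in mm.
    return (bio["min_temp_coldest_month"] > 13.2
            and bio["annual_temp_range"] < 19.3
            and bio["mean_temp_driest_quarter"] > 20.2
            and bio["precip_wettest_month"] > 268
            and bio["precip_wettest_quarter"] > 697
            and bio["temp_seasonality"] < 30.9)

# Example: a warm, wet, thermally stable coastal site
site = {"min_temp_coldest_month": 17.0, "annual_temp_range": 15.0,
        "mean_temp_driest_quarter": 24.0, "precip_wettest_month": 400,
        "precip_wettest_quarter": 1100, "temp_seasonality": 20.0}
print(climatically_suitable(site))  # True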
Abstract:
Improved sufficient conditions are derived for the exponential stability of a nonlinear time-varying feedback system having a time-invariant block G in the forward path and a nonlinear time-varying gain φ(·)k(t) in the feedback path, where φ(·) is an odd monotone nondecreasing function. The resulting bound on (dk/dt)/k is less restrictive than earlier criteria.
Abstract:
High-intensity exercise (HIE) stimulates greater physiological remodeling than workload-matched low-to-moderate-intensity exercise. This study used an untargeted metabolomics approach to examine the metabolic perturbations that occur following two workload-matched supramaximal low-volume HIE trials. In a randomized order, seven untrained males completed two exercise protocols separated by one week: (1) HIE150%: 30 × 20 s cycling at 150% VO2peak with 40 s passive rest; (2) HIE300%: 30 × 10 s cycling at 300% VO2peak with 50 s passive rest. Total exercise duration was 30 minutes for both trials. Blood samples were taken at rest, during and immediately following exercise, and at 60 minutes post-exercise. Gas chromatography-mass spectrometry (GC-MS) analysis of plasma identified 43 known metabolites, of which 3 demonstrated significant fold changes (HIE300% compared with HIE150%) during exercise, 14 post-exercise, and 23 at the end of the recovery period. Significant changes in plasma metabolites relating to lipid metabolism [fatty acids: dodecanoate (p=0.042), hexadecanoate (p=0.001), octadecanoate (p=0.001)], total cholesterol (p=0.001), and glycolysis [lactate (p=0.018)] were observed following exercise and during the recovery period. The HIE300% protocol elicited greater metabolic changes relating to lipid metabolism and glycolysis than the HIE150% protocol. These changes were more pronounced throughout the recovery period rather than during the exercise bout itself. Data from the current study demonstrate the use of metabolomics to monitor intensity-dependent changes in multiple metabolic pathways following exercise. The small sample size indicates a need for further studies in a larger cohort to validate these findings.
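A quick arithmetic check, using the intensities and durations given above, that the two protocols are matched for total nominal work:

$$30 \times 20\,\mathrm{s} \times 150\% \;=\; 30 \times 10\,\mathrm{s} \times 300\% \;=\; 90{,}000\ \%\cdot\mathrm{s}\ \text{of } \mathrm{VO_{2peak}},$$

i.e. each HIE150% repetition (20 s × 150%) and each HIE300% repetition (10 s × 300%) carries the same nominal load, and both protocols comprise 30 repetitions within the same 30-minute session.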
Abstract:
We propose a novel technique for robust voiced/unvoiced segment detection in noisy speech, based on local polynomial regression. The local polynomial model is well-suited for voiced segments in speech. The unvoiced segments are noise-like and do not exhibit any smooth structure. This property of smoothness is used for devising a new metric called the variance ratio metric, which, after thresholding, indicates the voiced/unvoiced boundaries with 75% accuracy for 0 dB global signal-to-noise ratio (SNR). A novelty of our algorithm is that it processes the signal continuously, sample-by-sample rather than frame-by-frame. Simulation results on the TIMIT speech database (downsampled to 8 kHz) for various SNRs are presented to illustrate the performance of the new algorithm. Results indicate that the algorithm is robust even in high noise levels.
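The variance-ratio idea can be illustrated with a toy sketch: fit a low-order local polynomial in a short sliding window and compare the residual variance with the raw window variance; smooth (voiced-like) samples yield a ratio near zero, noise-like (unvoiced) samples a ratio near one. The window length, polynomial order, and threshold below are assumptions for illustration, not the authors' exact metric:

import numpy as np

def variance_ratio(signal, win=20, order=3):
    # Slide a short window over the signal, fit a local polynomial, and
    # compare the residual variance with the raw window variance.
    n = len(signal)
    t = np.arange(win)
    ratios = np.ones(n)
    for i in range(n - win):
        seg = signal[i:i + win]
        coeffs = np.polyfit(t, seg, order)        # local polynomial fit
        resid = seg - np.polyval(coeffs, t)       # departure from the smooth model
        ratios[i] = np.var(resid) / (np.var(seg) + 1e-12)
    return ratios

# Toy usage: a sinusoid (voiced-like, smooth) followed by white noise (unvoiced-like)
fs = 8000
t = np.arange(4000) / fs
voiced = np.sin(2 * np.pi * 120 * t)
unvoiced = np.random.randn(4000)
r = variance_ratio(np.concatenate([voiced, unvoiced]))
# Thresholding r (e.g. r < 0.5 -> voiced) marks the smooth, voiced-like region.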
Abstract:
The problem of optimum design of a Lanchester damper for minimum force transmission from a viscously damped single degree of freedom system subjected to harmonic excitation is investigated. Explicit expressions are developed for determining the optimum absorber parameters. It is shown that for the particular case of the undamped single degree of freedom system the results reduce to the classical ones obtained by using the concept of a fixed point on the transmissibility curves.
Abstract:
Improved sufficient conditions are derived for the exponential stability of a nonlinear time-varying feedback system having a time-invariant block G in the forward path and a nonlinear time-varying gain φ(·)k(t) in the feedback path, where φ(·) is an odd monotone nondecreasing function. The resulting bound on $$\left(\frac{dk}{dt}\right)/k$$ is less restrictive than earlier criteria.
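For orientation, the loop described in this abstract can be sketched in a standard form; the notation below (reference input r, error e, output y) is an assumption for illustration rather than the paper's own:

$$y(t) = (G\,e)(t), \qquad e(t) = r(t) - k(t)\,\varphi\bigl(y(t)\bigr),$$

so the forward path is the time-invariant operator G and the feedback path applies the time-varying gain k(t) composed with the odd monotone nondecreasing function φ(·); the stability criterion then limits how fast k(t) may vary through the bound on (dk/dt)/k.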
Abstract:
The density of states n(E) is calculated for a bound system whose classical motion is integrable, starting from an expression in terms of the trace of the time-dependent Green function. The novel feature is the use of action-angle variables. This has the advantages that the trace operation reduces to a trivial multiplication and the dependence of n(E) on all classical closed orbits with different topologies appears naturally. The method is contrasted with another, not applicable to integrable systems except in special cases, in which quantization arises from a single closed orbit which is assumed isolated and the trace taken by the method of stationary phase.
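As background for the trace formulation summarized above, the standard relations connecting the density of states to the Green function are (textbook identities, not the paper's action-angle result):

$$n(E) = \sum_n \delta(E - E_n) = -\frac{1}{\pi}\,\operatorname{Im}\operatorname{Tr} G(E + i0^{+}), \qquad \operatorname{Tr} G(E) = \frac{1}{i\hbar}\int_0^{\infty} dt\; e^{iEt/\hbar}\,\operatorname{Tr} e^{-iHt/\hbar}.$$

The paper's contribution lies in evaluating this trace in action-angle variables, where it reduces to a simple multiplication and naturally exposes the contributions of the closed classical orbits of the integrable system.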
Abstract:
The rapid increase in genome sequence information has necessitated the annotation of functional elements, particularly those occurring in non-coding regions, in their genomic context. The promoter is the key regulatory region, enabling a gene to be transcribed or repressed, but it is difficult to determine experimentally. Hence, in silico identification of promoters is crucial in order to guide experimental work and to pinpoint the key region that controls the transcription initiation of a gene. In this analysis, we demonstrate that while promoter regions are in general less stable than the flanking regions, their average free energy varies depending on the GC composition of the flanking genomic sequence. We have therefore obtained a set of free energy threshold values for genomic DNA with varying GC content and used them as generic criteria for predicting promoter regions in several microbial genomes, using an in-house developed tool 'PromPredict'. On applying it to predict promoter regions corresponding to the 1144 and 612 experimentally validated TSSs in E. coli (50.8% GC) and B. subtilis (43.5% GC), sensitivities of 99% and 95% and precision values of 58% and 60%, respectively, were achieved. For the limited data set of 81 TSSs available for M. tuberculosis (65.6% GC), a sensitivity of 100% and a precision of 49% were obtained.
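As a rough illustration of the kind of free-energy thresholding described above, the sketch below scans a genome with a sliding window and flags windows that are markedly less stable than their downstream flank. The dinucleotide energy table, window sizes, and threshold are caller-supplied placeholders, not the published PromPredict parameters:

import numpy as np

def average_free_energy(seq, energy_table):
    # Mean dinucleotide free energy per step, using a caller-supplied table
    # covering all 16 dinucleotides, e.g. {"AA": -1.00, "AT": -0.88, ...} (placeholder values).
    steps = [seq[i:i + 2] for i in range(len(seq) - 1)]
    return float(np.mean([energy_table[s] for s in steps]))

def candidate_promoters(genome, energy_table, win=100, flank=100, threshold=0.5):
    # Flag start positions where the window is less stable (higher, i.e. less
    # negative, average energy) than its downstream flank by more than the threshold.
    hits = []
    for i in range(len(genome) - win - flank):
        e_win = average_free_energy(genome[i:i + win], energy_table)
        e_flank = average_free_energy(genome[i + win:i + win + flank], energy_table)
        if e_win - e_flank > threshold:
            hits.append(i)
    return hits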
Abstract:
Background: Patients may need massive volume-replacement therapy after cardiac surgery because of large perioperative fluid shifts and the use of cardiopulmonary bypass. Hemodynamic stability is better maintained with colloids than with crystalloids, but colloids have more adverse effects, such as coagulation disturbances and impairment of renal function, than do crystalloids. The present study examined the effects of modern hydroxyethyl starch (HES) and gelatin solutions on blood coagulation and hemodynamics. The mechanism by which colloids disturb blood coagulation was investigated by thromboelastometry (TEM) after cardiac surgery and in vitro by means of experimental hemodilution. Materials and methods: Ninety patients scheduled for elective primary cardiac surgery (Studies I, II, IV, V) and twelve healthy volunteers (Study III) were included in this study. After admission to the cardiac surgical intensive care unit (ICU), patients were randomized to receive different doses of HES 130/0.4, HES 200/0.5, or 4% albumin solutions. Ringer’s acetate or albumin solutions served as controls. Coagulation was assessed by TEM, and hemodynamic measurements were based on cardiac index (CI) measured by thermodilution. Results: HES and gelatin solutions impaired whole blood coagulation similarly, as measured by TEM, even at a small dose of 7 mL/kg. These solutions reduced clot strength and prolonged clot formation time. These effects were more pronounced with increasing doses of colloids. Neither albumin nor Ringer’s acetate solution disturbed blood coagulation significantly. Coagulation disturbances after infusion of HES or gelatin solutions were clinically slight, and postoperative blood loss was comparable with that of Ringer’s acetate or albumin solutions. Both single and multiple doses of all the colloids increased CI postoperatively, and this effect was dose-dependent. Ringer’s acetate had no effect on CI. At a small dose (7 mL/kg), the effect of gelatin on CI was comparable with that of Ringer’s acetate and significantly less than that of HES 130/0.4 (Study V). However, when the dose was increased to 14 and 21 mL/kg, the hemodynamic effect of gelatin rose and became comparable with that of HES 130/0.4. Conclusions: After cardiac surgery, HES and gelatin solutions impaired clot strength in a dose-dependent manner. The potential mechanisms were interaction with fibrinogen and fibrin formation, resulting in decreased clot strength, and hemodilution. Although the use of HES and gelatin inhibited coagulation, postoperative bleeding on the first postoperative morning was similar in all the study groups. A single dose of HES solutions improved CI postoperatively more than did gelatin, albumin, or Ringer’s acetate. However, when administered repeatedly (cumulative dose of 14 mL/kg or more), no differences were evident between HES 130/0.4 and gelatin.
Abstract:
The aim of the studies was to improve the diagnostic capability of electrocardiography (ECG) in detecting myocardial ischemic injury, with a future goal of an automatic screening and monitoring method for ischemic heart disease. The method of choice was body surface potential mapping (BSPM), containing numerous leads, with the intention of finding the optimal recording sites and optimal ECG variables for ischemia and myocardial infarction (MI) diagnostics. The studies included 144 patients with prior MI, 79 patients with evolving ischemia, 42 patients with left ventricular hypertrophy (LVH), and 84 healthy controls. Study I examined the depolarization wave in prior MI with respect to MI location. Studies II-V examined the depolarization and repolarization waves in prior MI detection with respect to the Minnesota code and Q-wave status, and Study V also with respect to MI location. In Study VI the depolarization and repolarization variables were examined in 79 patients in the face of evolving myocardial ischemia and ischemic injury. When analyzed from a single lead at any recording site, the results revealed the superiority of the repolarization variables over the depolarization variables and over the conventional 12-lead ECG methods, both in the detection of prior MI and of evolving ischemic injury. The QT integral, covering both depolarization and repolarization, appeared indifferent to Q-wave status, the time elapsed from MI, and the MI or ischemia location. In the face of evolving ischemic injury the performance of the QT integral was not hampered even by underlying LVH. The examined depolarization and repolarization variables were effective when recorded at a single site, in contrast to the conventional 12-lead ECG criteria. The inverse spatial correlation of the depolarization and repolarization waves in myocardial ischemia and injury could be reduced to the QT integral variable recorded at a single site on the left flank. In conclusion, the QT integral variable, detectable in a single lead with an optimal recording site on the left flank, was able to detect prior MI and evolving ischemic injury more effectively than the conventional ECG markers. The QT integral, in a single lead or a small number of leads, offers potential for automated screening of ischemic heart disease, acute ischemia monitoring, therapeutic decision-guiding, and risk stratification.
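To make the QT integral concrete, here is a small hypothetical sketch: integrate a single ECG lead from QRS onset to the end of the T wave, with the fiducial points assumed known. This is one plausible reading of a "QT integral", not the study's exact definition or processing pipeline:

import numpy as np

def qt_integral(lead_mv, fs, qrs_onset, t_end, baseline_mv=0.0):
    # Integrate one ECG lead (millivolts) over the QT interval.
    # qrs_onset and t_end are sample indices; the result is in mV*s.
    segment = lead_mv[qrs_onset:t_end] - baseline_mv
    return np.trapz(segment, dx=1.0 / fs)

# Toy usage with a synthetic waveform sampled at 500 Hz (indices are illustrative)
fs = 500
t = np.arange(fs) / fs
lead = 0.1 * np.sin(2 * np.pi * 1.0 * t)          # stand-in waveform, not a real ECG
area = qt_integral(lead, fs, qrs_onset=100, t_end=400)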
Abstract:
Owing to their widespread applications, the synthesis and characterization of silver nanoparticles have recently been attracting considerable attention. Increasing environmental concerns over chemical synthesis routes have resulted in attempts to develop biomimetic approaches. One of them is synthesis using plant parts, which eliminates the elaborate process of maintaining microbial cultures and is often found to be kinetically more favourable than other bioprocesses. The present study investigates the effect of process variables such as reductant concentration, reaction pH, mixing ratio of the reactants, and interaction time on the morphology and size of silver nanoparticles synthesized using an aqueous extract of Azadirachta indica (Neem) leaves. The formation of crystalline silver nanoparticles was confirmed using X-ray diffraction analysis. By means of UV spectroscopy and scanning and transmission electron microscopy, it was observed that the morphology and size of the nanoparticles were strongly dependent on the process parameters. Within a 4 h interaction period, nearly spherical nanoparticles below 20 nm in size were produced. On increasing the interaction time (ageing) to 66 days, both the aggregation and the shape anisotropy (ellipsoidal, polyhedral and capsular) of the particles increased. In the alkaline pH range, the stability of the cluster distribution increased, with a reduced tendency for particle aggregation. It can be inferred from the study that fine-tuning the bioprocess parameters will enhance the possibility of obtaining nanoproducts tailor-made for particular applications.
Abstract:
This work addresses the optimum design of a composite box-beam structure subject to strength constraints. Such box-beams are used as the main load carrying members of helicopter rotor blades. A computationally efficient analytical model for the box-beam is used. Optimal ply orientation angles are sought which maximize the failure margins with respect to the applied loading. The Tsai-Wu-Hahn failure criterion is used to calculate the reserve factor for each wall and ply, and the minimum reserve factor is maximized. Ply angles are used as design variables and various cases of initial starting design and loadings are investigated. Both gradient-based and particle swarm optimization (PSO) methods are used. It is found that the optimization approach leads to the design of a box-beam with greatly improved reserve factors which can be useful for helicopter rotor structures. While the PSO yields globally best designs, the gradient-based method can also be used with appropriate starting designs to obtain useful designs efficiently.
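As an illustration of the gradient-free approach mentioned above, here is a compact particle swarm sketch that maximizes a reserve factor over ply angles. The reserve_factor function is a smooth stand-in, not the study's analytical box-beam/Tsai-Wu-Hahn model, and all parameter values are assumptions:

import numpy as np

def reserve_factor(angles):
    # Placeholder objective over ply angles (degrees); in the actual study this
    # would call the analytical box-beam model with the Tsai-Wu-Hahn criterion.
    return 1.0 + np.cos(np.radians(angles - 45.0)).min()

def pso_max(f, dim, lb, ub, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    # Minimal particle swarm optimizer maximizing f over a box [lb, ub]^dim.
    rng = np.random.default_rng(0)
    x = rng.uniform(lb, ub, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        val = np.array([f(p) for p in x])
        better = val > pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmax()].copy()
    return g, pval.max()

best_angles, best_rf = pso_max(reserve_factor, dim=4, lb=-90.0, ub=90.0)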
Abstract:
Guo and Nixon proposed a feature selection method based on maximizing I(x; Y), the multidimensional mutual information between the feature vector x and the class variable Y. Because computing I(x; Y) can be difficult in practice, Guo and Nixon proposed an approximation of I(x; Y) as the criterion for feature selection. We show that Guo and Nixon's criterion originates from approximating the joint probability distributions in I(x; Y) by second-order product distributions. We remark on the limitations of the approximation and discuss computationally attractive alternatives to compute I(x; Y).
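To illustrate why estimating the full I(x; Y) is hard while per-feature terms are easy but lossy (the motivation behind the approximation discussed above), here is a small empirical sketch; the data and the XOR-style interaction are invented for illustration, and the code does not implement Guo and Nixon's specific criterion:

import numpy as np
from collections import Counter

def mutual_information(a, b):
    # Empirical I(A; B) in bits for two discrete sequences of equal length.
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum((c / n) * np.log2((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in pab.items())

# Toy data: three binary features; the class is an interaction (XOR) of two of them.
rng = np.random.default_rng(1)
X = rng.integers(0, 2, (500, 3))
y = X[:, 0] ^ X[:, 1]

# Per-feature terms I(x_i; Y) are cheap to estimate but miss the interaction...
per_feature = [mutual_information(X[:, i].tolist(), y.tolist()) for i in range(3)]
# ...whereas the full joint I(x; Y) captures it, at the cost of estimating a joint
# distribution whose support grows exponentially with the number of features.
joint = mutual_information(list(map(tuple, X)), y.tolist())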