905 results for "Large amounts"


Relevance:

100.00%

Publisher:

Abstract:

The invasive thistle Carduus nutans has been reported to be allelopathic, yet no allelochemicals have been identified from the species. In a search for allelochemicals from C. nutans and the closely related invasive species C. acanthoides, bioassay-guided fractionation of roots and leaves of each species was conducted. Only dichloromethane extracts of the roots of both species contained a phytotoxin (aplotaxene, (Z,Z,Z)-heptadeca-1,8,11,14-tetraene) with sufficient total activity to potentially act as an allelochemical. Aplotaxene made up 0.44% of the weight of greenhouse-grown C. acanthoides roots (ca. 20 mM in the plant) and was not found in leaves of either species. It inhibited the growth of lettuce by 50% (I50) in soil at a concentration of ca. 0.5 mg/g of dry soil (ca. 6.5 mM in soil moisture). These values gave a total activity in soil (the molar concentration in the plant divided by the molarity required for 50% growth inhibition in soil) of 3.08, similar to those of some established allelochemicals. The aplotaxene I50 for duckweed (Lemna paucicostata) in nutrient solution was less than 0.333 mM, and the compound caused cellular leakage from cucumber cotyledon discs in darkness and in light at similar concentrations. Soil in which C. acanthoides had grown contained aplotaxene at a concentration lower than that necessary for biological activity in our short-term soil bioassays, but these levels might have activity over longer periods of time and might underestimate concentrations in undisturbed and/or rhizosphere soil.
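For clarity, the total activity figure quoted above is simply the ratio of the two reported concentrations (a restatement of the abstract's own arithmetic; the symbols are introduced here for illustration only):

```latex
\text{Total activity} \;=\; \frac{C_{\text{plant}}}{C_{I_{50},\,\text{soil}}}
\;=\; \frac{20\ \text{mM}}{6.5\ \text{mM}} \;\approx\; 3.08
```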

Relevance:

70.00%

Publisher:

Abstract:

This paper describes technologies we have developed to perform autonomous large-scale off-world excavation. A scale dragline excavator, of a size similar to that required for lunar excavation, was made capable of autonomous control. Systems have been put in place to allow remote operation of the machine from anywhere in the world. Algorithms have been developed for fully autonomous digging and dumping of material, taking into account machine and terrain constraints and regolith variability. Experimental results are presented showing the ability to autonomously excavate and move large amounts of regolith and place it accurately at a specified location.

Relevance:

70.00%

Publisher:

Abstract:

World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim, the majority of power systems are becoming interconnected, with several power utilities supplying one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances, which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation and are generally characterised as decaying sinusoids. For an ideally operating power system, these transient responses would have a “ring-down” time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems, and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated by disturbances (such as equipment failure) must be quickly identified. The power utility must then apply appropriate damping control strategies.

In power system monitoring there exist two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial changes to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13]. One of the key limitations of all existing parameter estimation methods is that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping: one simply cannot afford to wait long enough to collect the large amounts of data required by existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below.

The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than it is for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy, then, imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration). A threshold is then set based on the statistical model. The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode.

Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for all modes within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test.

The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather, it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at frequency locations associated with the changes. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The alarm threshold is based on the simple chi-squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM, discussed next, can monitor frequency changes and so can provide some discrimination in this regard.

The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
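The abstract gives no implementation details for these detectors. As a rough illustration of the Energy Based Detector idea only, here is a minimal Python sketch, assuming a single measured signal in normal operation and a Gaussian model of baseline window energies; the function name `ebd_alarm`, the window lengths, and the 3-sigma threshold are all hypothetical choices, not the thesis's method:

```python
import numpy as np

def ebd_alarm(signal, fs, window_s=10.0, baseline_s=300.0, n_sigma=3.0):
    """Energy Based Detector sketch: flag sudden rises in disturbance energy.

    Window energies from a baseline period of normal, stable operation are
    modelled (crudely) as Gaussian, and the alarm threshold is set at
    mean + n_sigma standard deviations of that baseline.
    """
    win = int(window_s * fs)
    sig = np.asarray(signal) - np.mean(signal)       # remove DC component
    n_win = len(sig) // win
    energies = np.array([np.sum(sig[i*win:(i+1)*win]**2) for i in range(n_win)])
    n_base = int(baseline_s / window_s)              # number of baseline windows
    mu, sd = energies[:n_base].mean(), energies[:n_base].std()
    threshold = mu + n_sigma * sd
    # Indices of post-baseline windows whose energy exceeds the threshold.
    return np.flatnonzero(energies[n_base:] > threshold) + n_base

# Synthetic demonstration: a lightly damped 0.4 Hz mode appears at t = 300 s.
rng = np.random.default_rng(0)
fs = 10.0                                            # 10 Hz sampling (illustrative)
t = np.arange(0, 600, 1 / fs)
x = 0.1 * rng.standard_normal(t.size)                # ambient noise, normal operation
late = t >= 300
x[late] += 0.5 * np.exp(-0.01 * (t[late] - 300)) * np.sin(2 * np.pi * 0.4 * (t[late] - 300))
print("alarmed windows:", ebd_alarm(x, fs))
```

A real deployment would replace the Gaussian baseline with the statistical disturbance-energy model fitted to the power system under consideration, as the abstract describes.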

Relevance:

70.00%

Publisher:

Abstract:

The 510-million-year-old Kalkarindji Large Igneous Province correlates in time with the first major extinction event after the Cambrian explosion of life. Large igneous provinces correlate with all major mass extinction events of the last 500 million years, yet the genetic link between large igneous provinces and mass extinction remains unclear. My work is a contribution towards understanding the magmatic processes involved in the generation of large igneous provinces. I concentrate on the origin of the variation in Cr in these magmas and have developed a model in which high-temperature melts intrude into and assimilate large amounts of upper continental crust.

Relevance:

70.00%

Publisher:

Abstract:

Lentiviral vectors pseudotyped with vesicular stomatitis virus glycoprotein (VSV-G) are emerging as the vectors of choice for in vitro and in vivo gene therapy studies. However, the current method for harvesting lentivectors relies upon ultracentrifugation at 50 000 g for 2 h, and rotors capable of this ultra-high speed generally have a small volume capacity. Preparations of large volumes of high-titre vector are therefore time-consuming and laborious to perform. In the present study, viral vector supernatant harvests from vector-producing cells (VPCs) were pre-treated with various amounts of poly-L-lysine (PLL) and concentrated by low-speed centrifugation. Optimal conditions were established when 0.005% (w/v) PLL was added to vector supernatant harvests, followed by incubation for 30 min and centrifugation at 10 000 g for 2 h at 4 °C. Direct comparison with ultracentrifugation demonstrated that the new method consistently produced larger volumes (6 ml) of high-titre viral vector at 1 × 10^8 transduction units (TU)/ml (from about 3000 ml of supernatant) in one round of concentration. Electron microscopic analysis showed that PLL and viral vector formed complexes, which probably facilitated precipitation during low-speed concentration (10 000 g), a speed that does not usually precipitate viral particles efficiently. Transfection of several cell lines in vitro and transduction of the liver in vivo with the lentivector/PLL complexes demonstrated efficient gene transfer without any significant signs of toxicity. These results suggest that the new method provides a convenient means of harvesting large volumes of high-titre lentivectors, facilitating gene therapy experiments in large animals and human gene therapy trials, in which large amounts of lentiviral vector are a prerequisite.

Relevance:

70.00%

Publisher:

Abstract:

Background: Infection remains a severe complication following a total hip replacement. If infection is suspected when revision surgery is being performed, additional gentamicin is often added to the cement on an ad hoc basis in an attempt to reduce the risk of recurrent infection.

Methods and results: In this in vitro study, we determined the effect of incorporating additional gentamicin on the mechanical properties of cement. We also determined the degree of gentamicin release from the cement and the extent to which biofilms of clinical Staphylococcus spp. isolates form on cement in vitro. When gentamicin (1–4 g) was added to the cement, there was a significant reduction in the mechanical performance of the loaded cements compared to unloaded cement. A significant increase in gentamicin release from the cement over 72 h was apparent, with the amount released increasing significantly with each additional 1 g of gentamicin added. When overt infection was modeled, the incorporation of additional gentamicin did result in an initial reduction in bacterial colonization, but this beneficial effect was no longer apparent by 72 h, with the clinical strains forming biofilms on the cements despite the release of high levels of gentamicin.

Interpretation: Our findings indicate that the addition of large amounts of gentamicin to cement is unlikely to eradicate bacteria present as a result of an overt infection of an existing implant, and could result in failure of the prosthetic joint because of a reduction in mechanical performance of the bone cement.

Relevance:

70.00%

Publisher:

Abstract:

Many real-world optimization problems contain multiple (often conflicting) goals to be optimized concurrently, commonly referred to as multi-objective problems (MOPs). Over the past few decades, a plethora of multi-objective algorithms have been proposed, often tested on MOPs possessing two or three objectives. Unfortunately, when tasked with solving MOPs with four or more objectives, referred to as many-objective problems (MaOPs), a large majority of optimizers experience significant performance degradation. The downfall of these optimizers is that simultaneously maintaining a well-spread set of solutions and appropriate selection pressure to converge becomes difficult as the number of objectives increases. This difficulty is further compounded for large-scale MaOPs, i.e., MaOPs possessing large numbers of decision variables. In this thesis, we explore the challenges of many-objective optimization and propose three new promising algorithms designed to efficiently solve MaOPs. Experimental results demonstrate that the proposed optimizers perform very well, often outperforming state-of-the-art many-objective algorithms.
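The abstract does not name a particular comparison operator. As a hypothetical illustration of why selection pressure collapses as objectives are added, the Python sketch below implements standard Pareto dominance (the comparison conventionally underlying multi-objective optimizers) and measures how rarely random solution pairs are even comparable as the objective count m grows:

```python
import numpy as np

def dominates(a, b):
    """True if solution `a` Pareto-dominates `b` (minimization): `a` is no
    worse in every objective and strictly better in at least one."""
    return np.all(a <= b) and np.any(a < b)

# Fraction of random solution pairs that are comparable under dominance.
# As the number of objectives m grows, this fraction collapses toward zero,
# which is one way to see the loss of selection pressure described above.
rng = np.random.default_rng(0)
for m in (2, 3, 5, 10, 20):
    pairs = rng.random((10_000, 2, m))               # random objective vectors
    comparable = np.mean([dominates(a, b) or dominates(b, a) for a, b in pairs])
    print(f"{m:2d} objectives: {comparable:.1%} of pairs comparable")
```

With dominance alone providing so little ordering in high dimensions, many-objective algorithms must add secondary criteria (the "appropriate selection pressure" the abstract refers to) to keep converging.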

Relevance:

70.00%

Publisher:

Abstract:

We describe a novel mechanism that can significantly lower the amplitude of the climatic response to certain large volcanic eruptions, and we examine its impact with a coupled ocean-atmosphere climate model. If sufficiently large amounts of water vapour enter the stratosphere, a climatically significant amount of water vapour can remain in the lower stratosphere after the eruption, even once sulphate aerosol has formed. This excess stratospheric humidity warms the tropospheric climate and acts to balance the climatic cooling induced by the volcanic aerosol, especially because the humidity anomaly persists for longer than the residence time of the aerosol in the stratosphere. In particular, northern hemisphere high-latitude cooling is reduced in magnitude. We discuss this mechanism in the context of the discrepancy between the observed and modelled cooling following the Krakatau eruption in 1883. We hypothesize that an eruption close enough to the ocean for its pyroclastic flows to travel over water rather than land could generate moist coignimbrite plumes that provide the additional source of stratospheric water vapour.

Relevance:

70.00%

Publisher:

Abstract:

Resolving the relationships between Metazoa and other eukaryotic groups, as well as between metazoan phyla, is central to understanding the origin and evolution of animals. The current view is based on limited data sets, either a single gene with many species (e.g., ribosomal RNA) or many genes but with only a few species. Because a reliable phylogenetic inference simultaneously requires numerous genes and numerous species, we assembled a very large data set containing 129 orthologous proteins (~30,000 aligned amino acid positions) for 36 eukaryotic species. Included in the alignments are data from the choanoflagellate Monosiga ovata, obtained through the sequencing of about 1,000 cDNAs. We provide conclusive support for choanoflagellates as the closest relatives of animals and for fungi as the second closest. The monophyly of Plantae and chromalveolates was recovered, but without strong statistical support. Within animals, in contrast to the monophyly of Coelomata observed in several recent large-scale analyses, we recovered a paraphyletic Coelomata, with nematodes and platyhelminths nested within. To include a diverse sample of organisms, data from EST projects were used for several species, resulting in a large amount of missing data in our alignment (about 25%). Using different approaches, we verify that the inferred phylogeny is not sensitive to these missing data. Therefore, this large data set provides a reliable phylogenetic framework for studying eukaryotic and animal evolution and will be easily extendable when large amounts of sequence information become available from a broader taxonomic range.

Relevance:

70.00%

Publisher:

Abstract:

One of the fundamental machine learning tasks is that of predictive classification. Given that organisations collect an ever-increasing amount of data, predictive classification methods must be able to handle large amounts of data effectively and efficiently. However, present requirements push existing algorithms to, and sometimes beyond, their limits, since many classification prediction algorithms were designed when currently common data set sizes were beyond imagination. This has led to a significant amount of research into ways of making classification learning algorithms more effective and efficient. Although substantial progress has been made, a number of key questions have not been answered. This dissertation investigates two of these key questions.

The first is whether different types of algorithms to those currently employed are required when using large data sets. This is answered by analysing how the bias-plus-variance decomposition of predictive classification error changes as training set size is increased. Experiments find that larger training sets require different types of algorithms to those currently used. Some insight into the characteristics of suitable algorithms is provided, which may give some direction for the development of future classification prediction algorithms specifically designed for use with large data sets.

The second question investigated is the role of sampling in machine learning with large data sets. Sampling has long been used to avoid the need to scale up algorithms to suit the size of the data set, by instead scaling down the size of the data set to suit the algorithm. However, the costs of performing sampling have not been widely explored. Two popular sampling methods are compared with learning from all available data in terms of predictive accuracy, model complexity, and execution time. The comparison shows that sub-sampling generally produces models with accuracy close to, and sometimes greater than, that obtainable from learning with all available data. This result suggests that it may be possible to develop algorithms that take advantage of the sub-sampling methodology to reduce the time required to infer a model while sacrificing little, if any, accuracy.

Methods of improving effective and efficient learning via sampling are also investigated, and new sampling methodologies are proposed. These methodologies include using a varying proportion of instances to determine the next inference step and using a statistical calculation at each inference step to determine a sufficient sample size. Experiments show that using a statistical calculation of sample size can not only substantially reduce execution time but can do so with only a small loss, and occasional gain, in accuracy.

One of the common uses of sampling is in the construction of learning curves. Learning curves are often used to attempt to determine the optimal training set size that will maximally reduce execution time while not being detrimental to accuracy. An analysis of the performance of methods for detecting convergence of learning curves is performed, with the focus on methods that calculate the gradient of the tangent to the curve. Given that such methods can be susceptible to local accuracy plateaus, an investigation into the frequency of local plateaus is also performed. It is shown that local accuracy plateaus are a common occurrence, and that ensuring a small loss of accuracy often results in greater computational cost than learning from all available data. These results cast doubt on the applicability of gradient-of-tangent methods for detecting convergence, and on the viability of learning curves for reducing execution time in general.
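The abstract does not spell out the statistical calculation used to determine a sufficient sample size. As a hypothetical sketch of the general idea only, the following Python uses the standard normal-approximation sample-size formula for estimating a proportion (here, held-out accuracy) to within a fixed margin of error, and grows the training sample until that size is reached; the names `required_sample_size` and `progressive_sample`, the stopping rule, and the synthetic learner are all illustrative assumptions, not the dissertation's method:

```python
import math
import random

def required_sample_size(p_hat, margin=0.01, z=1.96):
    """Normal-approximation sample size for estimating a proportion p_hat
    to within +/- margin at ~95% confidence (z = 1.96)."""
    return math.ceil(z**2 * p_hat * (1 - p_hat) / margin**2)

def progressive_sample(train, evaluate, n0=1000, growth=2.0, margin=0.01):
    """Grow the training sample until it reaches the statistically sufficient
    size implied by the current accuracy estimate.

    `train(n)` fits a model on n instances and `evaluate(model)` returns
    held-out accuracy; both are caller-supplied (a hypothetical API).
    """
    n = n0
    while True:
        model = train(n)
        acc = evaluate(model)
        n_needed = required_sample_size(acc, margin)
        if n >= n_needed:                  # current sample already sufficient
            return model, acc, n
        n = int(max(n * growth, n_needed))

# Illustrative use with a synthetic learner whose accuracy saturates with n.
rng = random.Random(0)
train = lambda n: n                        # stand-in "model": just the sample size
evaluate = lambda n: 0.9 - 0.3 / math.sqrt(n) + rng.gauss(0, 0.002)
print(progressive_sample(train, evaluate))
```

Stopping once the sample reaches the statistically implied size is one way such a calculation can cut execution time while bounding the expected loss in accuracy, in the spirit of the results summarised above.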

Relevance:

70.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

70.00%

Publisher:

Abstract:

In contrast to leukocytosis, paraneoplastic hypereosinophilia is uncommon in lung cancer. We present a patient with large-cell carcinoma of the lung in which the cancer cells generated large amounts of GM-CSF, leading to a leukemoid reaction with prominent hypereosinophilia and potentially contributing to autocrine tumor stimulation.

Relevance:

70.00%

Publisher:

Abstract:

Nitric oxide (NO) plays a controversial role in the pathophysiology of sepsis and septic shock. Its vasodilatory effects are well known, but it also has pro- and antiinflammatory properties, assumes crucial importance in antimicrobial host defense, may act as an oxidant as well as an antioxidant, and has been called a vital poison for the immune and inflammatory network. Large amounts of NO and peroxynitrite are responsible for hypotension, vasoplegia, cellular suffocation, apoptosis, lactic acidosis, and ultimately multiorgan failure. Therefore, NO synthase (NOS) inhibitors were developed to reverse the deleterious effects of NO. Studies using these compounds have not met with uniform success, however, and a trial using the nonselective NOS inhibitor N(G)-methyl-L-arginine hydrochloride was terminated prematurely because of increased mortality in the treatment arm, despite improved shock resolution. Thus, the issue of NOS inhibition in sepsis remains a matter of debate. Several publications have emphasized the differences in clinical applicability between data obtained from unresuscitated, hypodynamic rodent models using a pretreatment approach and data from resuscitated, hyperdynamic models in higher-order species using posttreatment approaches. Therefore, the present review focuses on clinically relevant large-animal studies of endotoxin- or living bacteria-induced hyperdynamic models of sepsis that integrate standard day-to-day care resuscitative measures.