913 results for Fusion of label field
Abstract:
The North Atlantic Marine Boundary Layer Experiment (NAMBLEX), involving over 50 scientists from 12 institutions, took place at Mace Head, Ireland (53.32° N, 9.90° W), between 23 July and 4 September 2002. A wide range of state-of-the-art instrumentation enabled detailed measurements of the boundary layer structure and atmospheric composition in the gas and aerosol phase to be made, providing one of the most comprehensive in situ studies of the marine boundary layer to date. This overview paper describes the aims of the NAMBLEX project in the context of previous field campaigns in the Marine Boundary Layer (MBL), the overall layout of the site, a summary of the instrumentation deployed, the temporal coverage of the measurement data, and the numerical models used to interpret the field data. Measurements of some trace species were made for the first time during the campaign, which was characterised by predominantly clean air of marine origin, but more polluted air with higher levels of NOx originating from continental regions was also experienced. This paper provides a summary of the meteorological measurements and Planetary Boundary Layer (PBL) structure measurements, presents time series of some of the longer-lived trace species (O3, CO, H2, DMS, CH4, NMHC, NOx, NOy, PAN) and summarises measurements of other species that are described in more detail in other papers within this special issue, namely oxygenated VOCs, HCHO, peroxides, organo-halogenated species, a range of shorter lived halogen species (I2, OIO, IO, BrO), NO3 radicals, photolysis frequencies, the free radicals OH, HO2 and (HO2+Σ RO2), as well as a summary of the aerosol measurements. NAMBLEX was supported by measurements made in the vicinity of Mace Head using the NERC Dornier-228 aircraft. Air-mass trajectories arriving at Mace Head during NAMBLEX were calculated using ECMWF wind fields and analysed together with both meteorological and trace-gas measurements. In this paper a chemical climatology for the duration of the campaign is presented to interpret the distribution of air-mass origins and emission sources, and to provide a convenient framework of air-mass classification that is used by other papers in this issue for the interpretation of observed variability in levels of trace gases and aerosols.
Abstract:
A first step in interpreting the wide variation in trace gas concentrations measured over time at a given site is to classify the data according to the prevailing weather conditions. In order to classify measurements made during two intensive field campaigns at Mace Head, on the west coast of Ireland, an objective method of assigning data to different weather types has been developed. Air-mass back trajectories calculated using winds from ECMWF analyses, arriving at the site in 1995–1997, were allocated to clusters based on a statistical analysis of the latitude, longitude and pressure of the trajectory at 12 h intervals over 5 days. The robustness of the analysis was assessed by using an ensemble of back trajectories calculated for four points around Mace Head. Separate analyses were made for each of the 3 years, and for four 3-month periods. The use of these clusters in classifying ground-based ozone measurements at Mace Head is described, including the need to exclude data which have been influenced by local perturbations to the regional flow pattern, for example, by sea breezes. Even with a limited data set, based on 2 months of intensive field measurements in 1996 and 1997, there are statistically significant differences in ozone concentrations in air from the different clusters. The limitations of this type of analysis for classification and interpretation of ground-based chemistry measurements are discussed.
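The clustering step described above lends itself to a compact illustration. The sketch below is an assumption-laden reconstruction rather than the authors' method: each 5-day back trajectory is represented by its latitude, longitude and pressure at 12 h intervals, flattened into a feature vector and grouped with k-means. The choice of k-means, the number of clusters, the standardisation and the synthetic example data are all illustrative assumptions.

```python
# Illustrative sketch only: the abstract does not specify the clustering
# algorithm, so k-means on flattened trajectory coordinates is assumed here.
import numpy as np
from sklearn.cluster import KMeans

def cluster_trajectories(trajectories, n_clusters=6, random_state=0):
    """Cluster 5-day back trajectories arriving at a site.

    trajectories : array of shape (n_traj, 11, 3)
        Latitude, longitude and pressure sampled every 12 h over 5 days
        (10 steps back plus the arrival point).
    """
    n_traj = trajectories.shape[0]
    flat = trajectories.reshape(n_traj, -1)
    # Standardise each coordinate so pressure (hPa) does not dominate
    # the degree-scaled latitude/longitude.
    flat = (flat - flat.mean(axis=0)) / flat.std(axis=0)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state)
    return km.fit_predict(flat)

# Synthetic trajectories centred near Mace Head, purely for demonstration:
rng = np.random.default_rng(1)
fake = rng.normal(size=(200, 11, 3)) * [5.0, 10.0, 50.0] + [53.3, -9.9, 900.0]
labels = cluster_trajectories(fake)
```

Ground-based measurements (e.g. ozone) can then be grouped by the resulting cluster labels before testing for differences between clusters.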
Abstract:
Many weeds occur in patches but farmers frequently spray whole fields to control the weeds in these patches. Given a geo-referenced weed map, technology exists to confine spraying to these patches. Adoption of patch spraying by arable farmers has, however, been negligible partly due to the difficulty of constructing weed maps. Building on previous DEFRA and HGCA projects, this proposal aims to develop and evaluate a machine vision system to automate the weed mapping process. The project thereby addresses the principal technical stumbling block to widespread adoption of site specific weed management (SSWM). The accuracy of weed identification by machine vision based on a single field survey may be inadequate to create herbicide application maps. We therefore propose to test the hypothesis that sufficiently accurate weed maps can be constructed by integrating information from geo-referenced images captured automatically at different times of the year during normal field activities. Accuracy of identification will also be increased by utilising a priori knowledge of weeds present in fields. To prove this concept, images will be captured from arable fields on two farms and processed offline to identify and map the weeds, focussing especially on black-grass, wild oats, barren brome, couch grass and cleavers. As advocated by Lutman et al. (2002), the approach uncouples the weed mapping and treatment processes and builds on the observation that patches of these weeds are quite stable in arable fields. There are three main aspects to the project. 1) Machine vision hardware. Hardware component parts of the system are one or more cameras connected to a single board computer (Concurrent Solutions LLC) and interfaced with an accurate Global Positioning System (GPS) supplied by Patchwork Technology. The camera(s) will take separate measurements for each of the three primary colours of visible light (red, green and blue) in each pixel. The basic proof of concept can be achieved in principle using a single camera system, but in practice systems with more than one camera may need to be installed so that larger fractions of each field can be photographed. Hardware will be reviewed regularly during the project in response to feedback from other work packages and updated as required. 2) Image capture and weed identification software. The machine vision system will be attached to toolbars of farm machinery so that images can be collected during different field operations. Images will be captured at different ground speeds, in different directions and at different crop growth stages as well as in different crop backgrounds. Having captured geo-referenced images in the field, image analysis software will be developed to identify weed species by Murray State and Reading Universities with advice from The Arable Group. A wide range of pattern recognition techniques, in particular Bayesian networks, will be used to advance the state of the art in machine vision-based weed identification and mapping. Weed identification algorithms used by others are inadequate for this project as we intend to collect and correlate images captured at different growth stages. Plants grown for this purpose by Herbiseed will be used in the first instance. In addition, our image capture and analysis system will include plant characteristics such as leaf shape, size, vein structure, colour and textural pattern, some of which are not detectable by other machine vision systems or are omitted by their algorithms.
Using such a list of features observable using our machine vision system, we will determine those that can be used to distinguish weed species of interest. 3) Weed mapping. Geo-referenced maps of weeds in arable fields (Reading University and Syngenta) will be produced with advice from The Arable Group and Patchwork Technology. Natural infestations will be mapped in the fields but we will also introduce specimen plants in pots to facilitate more rigorous system evaluation and testing. Manual weed maps of the same fields will be generated by Reading University, Syngenta and Peter Lutman so that the accuracy of automated mapping can be assessed. The principal hypothesis and concept to be tested is that by combining maps from several surveys, a weed map with acceptable accuracy for end-users can be produced. If the concept is proved and can be commercialised, systems could be retrofitted at low cost onto existing farm machinery. The outputs of the weed mapping software would then link with the precision farming options already built into many commercial sprayers, allowing their use for targeted, site-specific herbicide applications. Immediate economic benefits would, therefore, arise directly from reducing herbicide costs. SSWM will also reduce the overall pesticide load on the crop and so may reduce pesticide residues in food and drinking water, and reduce adverse impacts of pesticides on non-target species and beneficials. Farmers may even choose to leave unsprayed some non-injurious, environmentally-beneficial, low density weed infestations. These benefits fit very well with the anticipated legislation emerging in the new EU Thematic Strategy for Pesticides which will encourage more targeted use of pesticides and greater uptake of Integrated Crop (Pest) Management approaches, and also with the requirements of the Water Framework Directive to reduce levels of pesticides in water bodies. The greater precision of weed management offered by SSWM is therefore a key element in preparing arable farming systems for the future, where policy makers and consumers want to minimise pesticide use and the carbon footprint of farming while maintaining food production and security. The mapping technology could also be used on organic farms to identify areas of fields needing mechanical weed control, thereby reducing both carbon footprints and damage to crops by, for example, spring tines. Objectives: i. To develop a prototype machine vision system for automated image capture during agricultural field operations; ii. To prove the concept that images captured by the machine vision system over a series of field operations can be processed to identify and geo-reference specific weeds in the field; iii. To generate weed maps from the geo-referenced weed plants/patches identified in objective (ii).
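The central hypothesis, that maps from several surveys can be combined into one of acceptable accuracy for end-users, can be sketched as a simple grid-based fusion. Everything below is a hypothetical illustration: the grid resolution, the per-cell "noisy-OR" combination rule, and the function and variable names are assumptions for the sketch, not the project's actual mapping algorithm.

```python
# Hypothetical sketch of the map-combination step: per-survey weed detections
# (geo-referenced points with a detection confidence) are accumulated onto a
# common field grid and fused across surveys with a per-cell noisy-OR.
import numpy as np

def fuse_surveys(surveys, field_shape=(200, 200), cell_m=2.0):
    """surveys: list of arrays with rows (easting_m, northing_m, confidence)."""
    prob_absent = np.ones(field_shape)
    for det in surveys:
        grid = np.zeros(field_shape)
        rows = np.clip((det[:, 1] // cell_m).astype(int), 0, field_shape[0] - 1)
        cols = np.clip((det[:, 0] // cell_m).astype(int), 0, field_shape[1] - 1)
        # Keep the strongest detection confidence falling in each cell.
        np.maximum.at(grid, (rows, cols), det[:, 2])
        prob_absent *= (1.0 - grid)          # noisy-OR across surveys
    return 1.0 - prob_absent                 # per-cell probability of weed presence

# A spray map could then be thresholded from the fused probabilities, e.g.
# spray = fuse_surveys([survey_june, survey_sept]) > 0.5   (names illustrative)
```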
Abstract:
This paper presents a new image data fusion scheme that combines median filtering with self-organizing feature map (SOFM) neural networks. The scheme consists of three steps: (1) pre-processing of the images, where weighted median filtering removes part of the noise components corrupting the image, (2) pixel clustering for each image using self-organizing feature map neural networks, and (3) fusion of the images obtained in Step (2), which suppresses the residual noise components and thus further improves the image quality. Simulations involving three image sensors, each with a different noise structure, confirm that this three-step combination yields a marked improvement in effectiveness and performance.
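A minimal sketch of the three-step scheme follows, with several assumed substitutions: a plain (unweighted) median filter stands in for the weighted variant, the third-party MiniSom package provides the SOFM, and a pixel-wise mean of the cluster-quantised images is used as the final fusion rule, since the exact fusion operator is not specified in the abstract.

```python
# Sketch under the assumptions stated above; not the authors' implementation.
import numpy as np
from scipy.ndimage import median_filter
from minisom import MiniSom

def fuse_images(images, som_shape=(4, 4), iters=2000):
    # Step 1: pre-filter each sensor image to remove part of the noise.
    filtered = [median_filter(img.astype(float), size=3) for img in images]

    # Step 2: cluster the pixels of each image with a SOFM and replace each
    # pixel by its winning prototype (a coarsely denoised image).
    quantised = []
    for img in filtered:
        data = img.reshape(-1, 1)
        som = MiniSom(som_shape[0], som_shape[1], 1, sigma=1.0, learning_rate=0.5)
        som.train_random(data, iters)
        weights = som.get_weights()            # shape (x, y, 1)
        q = np.array([weights[som.winner(v)][0] for v in data])
        quantised.append(q.reshape(img.shape))

    # Step 3: fuse the quantised images; a simple pixel-wise mean suppresses
    # residual, sensor-specific noise components.
    return np.mean(quantised, axis=0)
```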
Abstract:
The 1:1 condensation of N-methyl-1,3-diaminopropane and N,N-diethyl-1,2-diaminoethane with 2-acetylpyridine, respectively, at high dilution gives the tridentate mono-condensed Schiff bases N-methyl-N'-(1-pyridin-2-yl-ethylidene)-propane-1,3-diamine (L-1) and N,N-diethyl-N'-(1-pyridin-2-yl-ethylidene)-ethane-1,2-diamine (L-2). The tridentate ligands were allowed to react with methanol solutions of nickel(II) thiocyanate to prepare the complexes [Ni(L-1)(SCN)(2)(OH2)] (1) and [{Ni(L-2)(SCN)}(2)] (2). Single crystal X-ray diffraction was used to confirm the structures of the complexes. The nickel(II) in complex 1 is bonded to three nitrogen donor atoms of the ligand L-1 in a mer orientation, together with two thiocyanates bonded through nitrogen and a water molecule, and it is the first Schiff base complex of nickel(II) containing both thiocyanate and coordinated water. The coordinated water initiates a hydrogen bonded 2D network. In complex 2, the nickel ion occupies a slightly distorted octahedral coordination sphere, being bonded to three nitrogen atoms from the ligand L-2, also in a mer orientation, and two thiocyanate anions through nitrogen. In contrast to 1, the sixth coordination site is occupied by a sulfur atom from a thiocyanate anion in an adjacent molecule, thus creating a centrosymmetric dimer. A variable temperature magnetic study of complex 2 indicates the simultaneous presence of zero-field splitting, weak intramolecular ferromagnetic coupling and intermolecular antiferromagnetic interactions between the nickel(II) centers.
Abstract:
The Chinese medicinal plant Artemisia annua L. (Qinghao) is the only known source of the sesquiterpene artemisinin (Qinghaosu), which is used in the treatment of malaria. Artemisinin is a highly oxygenated sesquiterpene, containing a unique 1,2,4-trioxane ring structure, which is responsible for the antimalarial activity of this natural product. The phytochemistry of A. annua is dominated by both sesquiterpenoids and flavonoids, as is the case for many other plants in the Asteraceae family. However, A. annua is distinguished from the other members of the family both by the very large number of natural products which have been characterised to date (almost six hundred in total, including around fifty amorphane and cadinane sesquiterpenes), and by the highly oxygenated nature of many of the terpenoidal secondary metabolites. In addition, this species also contains an unusually large number of terpene allylic hydroperoxides and endoperoxides. This observation forms the basis of a proposal that the biogenesis of many of the highly oxygenated terpene metabolites from A. annua - including artemisinin itself - may proceed by spontaneous oxidation reactions of terpene precursors, which involve these highly reactive allylic hydroperoxides as intermediates. Although several studies of the biosynthesis of artemisinin have been reported in the literature from the 1980s and early 1990s, the collective results from these studies were rather confusing because they implied that an unfeasibly large number of different sesquiterpenes could all function as direct precursors to artemisinin (and some of the experiments also appeared to contradict one another). As a result, the complete biosynthetic pathway to artemisinin could not be stated conclusively at the time. Fortunately, studies which have been published in the last decade are now providing a clearer picture of the biosynthetic pathways in A. annua. By synthesising some of the sesquiterpene natural products which have been proposed as biogenetic precursors to artemisinin in such a way that they incorporate a stable isotopic label, and then feeding these precursors to intact A. annua plants, it has now been possible to demonstrate that dihydroartemisinic acid is a late-stage precursor to artemisinin and that the closely related secondary metabolite, artemisinic acid, is not (this approach differs from all the previous studies, which used radio-isotopically labelled precursors that were fed to a plant homogenate or a cell-free preparation). Quite remarkably, feeding experiments with labeled dihydroartemisinic acid and artemisinic acid have resulted in incorporation of label into roughly half of all the amorphane and cadinane sesquiterpenes which were already known from phytochemical studies of A. annua. These findings strongly support the hypothesis that many of the highly oxygenated sesquiterpenoids from this species arise by oxidation reactions involving allylic hydroperoxides, which seem to be such a defining feature of the chemistry of A. annua. In the particular case of artemisinin, these in vivo results are also supported by in vitro studies, demonstrating explicitly that the biosynthesis of artemisinin proceeds via the tertiary allylic hydroperoxide, which is derived from oxidation of dihydroartemisinic acid.
There is some evidence that the autoxidation of dihydroartemisinic acid to this tertiary allylic hydroperoxide is a non-enzymatic process within the plant, requiring only the presence of light; and, furthermore, that the series of spontaneous rearrangement reactions which then convert this allylic hydroperoxide to the 1,2,4-trioxane ring of artemisinin are also non-enzymatic in nature.
Abstract:
The first application of high field NMR spectroscopy (800 MHz for 1H observation) to human hepatic bile (as opposed to gall bladder bile) is reported. The bile sample used for detailed investigation was from a donor liver with mild fat infiltration, collected during organ retrieval prior to transplantation. In addition, to focus on the detection of bile acids in particular, a bile extract was analysed by 800 MHz 1H NMR spectroscopy, HPLC-NMR/MS and UPLC-MS. In the whole bile sample, 40 compounds have been assigned with the aid of two-dimensional 1H–1H TOCSY and 1H–13C HSQC spectra. These include phosphatidylcholine, 14 amino acids, 10 organic acids, 4 carbohydrates and polyols (glucose, glucuronate, glycerol and myo-inositol), choline, phosphocholine, betaine, trimethylamine-N-oxide and other small molecules. An initial NMR-based assessment of the concentration range of some key metabolites has been made. Some observed chemical shifts differ from expected database values, probably due to a difference in bulk diamagnetic susceptibility. The NMR spectra of the whole extract allowed identification of the major bile acids (cholic, deoxycholic and chenodeoxycholic), but the glycine and taurine conjugates of a given bile acid could not be distinguished. However, this was achieved by HPLC-NMR/MS, which enabled the separation and identification of ten conjugated bile acids with relative abundances varying from approximately 0.1% (taurolithocholic acid) to 34.0% (glycocholic acid), of which only the five most abundant could be detected by NMR, including the isomers glycodeoxycholic acid and glycochenodeoxycholic acid, which are difficult to distinguish by conventional LC-MS analysis. In a separate experiment, the use of UPLC-MS allowed the detection and identification of 13 bile acids. This work has shown the complementary potential of NMR spectroscopy, MS and hyphenated NMR/MS for elucidating the complex metabolic profile of human hepatic bile. This will be useful baseline information in ongoing studies of liver excretory function and organ transplantation.
Abstract:
A direct method is presented for determining the uncertainty in reservoir pressure, flow, and net present value (NPV) using the time-dependent, one phase, two- or three-dimensional equations of flow through a porous medium. The uncertainty in the solution is modelled as a probability distribution function and is computed from given statistical data for input parameters such as permeability. The method generates an expansion for the mean of the pressure about a deterministic solution to the system equations using a perturbation to the mean of the input parameters. Hierarchical equations that define approximations to the mean solution at each point and to the field covariance of the pressure are developed and solved numerically. The procedure is then used to find the statistics of the flow and the risked value of the field, defined by the NPV, for a given development scenario. This method involves only one (albeit complicated) solution of the equations and contrasts with the more usual Monte-Carlo approach where many such solutions are required. The procedure is applied easily to other physical systems modelled by linear or nonlinear partial differential equations with uncertain data.
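Schematically, and with notation assumed for illustration rather than taken from the paper, the idea is a moment expansion of the pressure about the deterministic solution at the mean parameter value. Writing the uncertain input (e.g. permeability) as k = k̄ + δk with E[δk] = 0 and variance σ_k², a second-order expansion gives:

```latex
% Notation assumed for illustration: k = \bar{k} + \delta k,
% E[\delta k] = 0, Var[\delta k] = \sigma_k^2.
\begin{align*}
p(k) &\approx p(\bar{k})
      + \left.\frac{\partial p}{\partial k}\right|_{\bar{k}} \delta k
      + \frac{1}{2}\left.\frac{\partial^2 p}{\partial k^2}\right|_{\bar{k}} \delta k^2,\\
\mathbb{E}[p] &\approx p(\bar{k})
      + \frac{1}{2}\left.\frac{\partial^2 p}{\partial k^2}\right|_{\bar{k}} \sigma_k^2,\\
\operatorname{Cov}[p] &\approx
      \left.\frac{\partial p}{\partial k}\right|_{\bar{k}} \sigma_k^2
      \left(\left.\frac{\partial p}{\partial k}\right|_{\bar{k}}\right)^{\top}.
\end{align*}
```

The hierarchical equations referred to above supply the sensitivity terms in such an expansion, so a single (if more elaborate) deterministic solve replaces the many realisations a Monte Carlo treatment would require.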
Abstract:
Individual-level constructs are seldom taken into consideration in construction management research relating to project performance. This is antithetical to the objectives of properly conceptualizing and contextualizing the research we do because many project performance outcomes, such as the extent of cooperation and level of communication or teamwork, are influenced and moderated by individuals’ perceptions, values and behaviour. A brief review of the literature in organizational studies centred on culture, identity, empowerment and trust is offered. These constructs are then explored in relation to project performance issues and outcomes, and it is noted that they are predominantly studied at the project and industry levels. We argue that focusing these constructs at the individual unit of analysis has significant implications for project performance and therefore their effects need to be systematically accounted for in explanations of the success and failure of projects. Far from being prescriptive, the aim is to generate interest in, and awareness of, more focused research at the individual level of analysis in order to add new insights and perspectives to critical performance questions in construction management. To this end, a research agenda is outlined, arguing that construction management research integrating individual-level constructs and broader, macro-contextual issues will help define and enhance the legitimacy of the field.
Abstract:
We report a clear transition through a reconnection layer at the low-latitude magnetopause which shows a complete traversal across all reconnected field lines during northwestward interplanetary magnetic field (IMF) conditions. The associated plasma populations confirm details of the electron and ion mixing and the time history and acceleration through the current layer. This case has low magnetic shear with a strong guide field and the reconnection layer contains a single density depletion layer on the magnetosheath side which we suggest results from nearly field-aligned magnetosheath flows. Within the reconnection boundary layer, there are two plasma boundaries, close to the inferred separatrices on the magnetosphere and magnetosheath sides (Ssp and Ssh) and two boundaries associated with the Alfvén waves (or Rotational Discontinuities, RDsp and RDsh). The data are consistent with these being launched from the reconnection site and the plasma distributions are well ordered and suggestive of the time elapsed since reconnection of the field lines observed. In each sub-layer between the boundaries the plasma distribution is different and is centered around the current sheet, responsible for magnetosheath acceleration. We show evidence for a velocity dispersion effect in the electron anisotropy that is consistent with the time elapsed since reconnection. In addition, new evidence is presented for the occurrence of partial reflection of magnetosheath electrons at the magnetopause current layer.
Abstract:
Slapton Ley, a freshwater lake, located in south Devon (National Grid Reference SX 825 439), has been the focus of a wide range of research studies since the foundation of the Field Studies Council Centre in Slapton village in 1959, and the creation of the Slapton Ley Nature Reserve. Early concerns over eutrophication of the Lower Ley led to a range of studies focused on the impacts of land use change in the catchment, on nutrient delivery to the Ley, and on interpreting the impact of long-term nutrient enrichment of the Ley from palaeolimnological studies. What has been missing to date, however, is a focused study of the impacts of nutrient enrichment on the chemical and ecological structure and function of the combined Lower and Higher Ley systems. This paper attempts to draw together the various areas of study on the Ley to date in order to provide a review of current understanding of the limnology of Slapton Ley and to identify gaps in our knowledge. The past, present and future trophic status of the Ley is re-interpreted in the light of current understanding of the eutrophication process in the wider scientific community. Recommendations for future research are then made, with a view to the monitoring and management of Slapton Ley and its catchment.
Abstract:
The paper reviews the leading diagramming methods employed in system dynamics to communicate the contents of models. The main ideas and historical development of the field are first outlined. Two diagramming methods—causal loop diagrams (CLDs) and stock/flow diagrams (SFDs)—are then described and their advantages and limitations discussed. A set of broad research directions is then outlined. These concern: the abilities of different diagrams to communicate different ideas, the role that diagrams have in group model building, and the question of whether diagrams can be an adequate substitute for simulation modelling. The paper closes by suggesting that although diagrams alone are insufficient, they have many benefits. However, since these benefits have emerged only as ‘craft wisdom’, a more rigorous programme of research into the diagrams' respective attributes is called for.
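To make concrete what an SFD encodes beyond a CLD, the sketch below (illustrative only, with invented parameter names) writes a one-stock structure as the difference equation it implies; the point that diagrams alone cannot substitute for simulation is simply that obtaining behaviour over time still requires integrating such equations numerically.

```python
# Illustrative only: a minimal stock/flow structure (one stock, a constant
# inflow and a stock-dependent drain) expressed as the Euler-integrated
# difference equation an SFD implies. Parameter names are invented.
def simulate_stock(initial=100.0, inflow=8.0, drain_fraction=0.1,
                   dt=0.25, horizon=40.0):
    stock, t, history = initial, 0.0, []
    while t <= horizon:
        history.append((t, stock))
        outflow = drain_fraction * stock          # flow depends on the stock
        stock += dt * (inflow - outflow)          # one integration step
        t += dt
    return history
```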
Abstract:
This paper has two aims. First, to present cases in which scientists developed a defensive system for their homeland: Blackett and the air defense of Britain in WWII, Forrester and the SAGE system for North America in the Cold War, and Archimedes’ work defending Syracuse during the Second Punic War. In each case the historical context and the individual’s other achievements are outlined, and a description of the contribution’s relationship to OR/MS is given. The second aim is to consider some of the features the cases share and examine them in terms of contemporary OR/MS methodology. Particular reference is made to a recent analysis of the field’s strengths and weaknesses. This allows both a critical appraisal of the field and a set of potential responses for strengthening it. Although a mixed set of lessons arise, the overall conclusion is that the cases are examples to build on and that OR/MS retains the ability to do high stakes work.
Abstract:
A set of backbone modified peptides of general formula Boc-Xx-m-ABA-Yy-OMe, where m-ABA is meta-aminobenzoic acid and Xx and Yy are natural amino acids such as Phe, Gly, Pro, Leu, Ile, Tyr and Trp, are found to self-assemble into soft nanovesicular structures in methanol-water solution (9:1 by v/v). At higher concentration the peptides generate larger vesicles, which are formed through fusion of smaller vesicles. The formation of vesicles is facilitated by the participation of various noncovalent interactions such as aromatic pi-stacking, hydrogen bonding and hydrophobic interactions. A model study indicates that the pi-stacking induced self-assembly, mediated by m-ABA, is essential for the formation of well-structured vesicles. The presence of conformationally rigid m-ABA in the backbone of the peptides also helps to form vesicular structures by restricting the conformational entropy. The vesicular structures are disrupted in the presence of various salts such as KCl, CaCl(2), N(n-Bu)(4)Br and (NH(4))(2)SO(4) in methanol-water solution. Fluorescence microscopy and UV studies reveal that the soft nanovesicles encapsulate organic dye molecules such as Rhodamine B and Acridine Orange, which can be released through salt-induced disruption of the vesicles.
Abstract:
This conference was an unusual and interesting event. Celebrating 25 years of Construction Management and Economics provides us with an opportunity to reflect on the research that has been reported over the years, to consider where we are now, and to think about the future of academic research in this area. Hence the sub-title of this conference: “past, present and future”. Looking through these papers, some things are clear. First, the range of topics considered interesting has expanded hugely since the journal was first published. Second, the research methods are also more diverse. Third, the involvement of wider groups of stakeholders is evident. There is a danger that this might lead to dilution of the field. But my instinct has always been to argue against the notion that Construction Management and Economics represents a discipline, as such. Granted, there are plenty of university departments around the world that would justify the idea of a discipline. But the vast majority of academic departments who contribute to the life of this journal carry different names to this. Indeed, the range and breadth of methodological approaches to the research reported in Construction Management and Economics indicates that there are several different academic disciplines being brought to bear on the construction sector. Some papers are based on economics, some on psychology and others on operational research, sociology, law, statistics, information technology, and so on. This is why I maintain that construction management is not an academic discipline, but a field of study to which a range of academic disciplines are applied. This may be why it is so interesting to be involved in this journal. The problems to which the papers are applied develop and grow. But the broad topics of the earliest papers in the journal are still relevant today. What has changed a lot is our interpretation of the problems that confront the construction sector all over the world, and the methodological approaches to resolving them. There is a constant difficulty in dealing with topics as inherently practical as these. While the demands of the academic world are driven by the need for the rigorous application of sound methods, the demands of the practical world are quite different. It can be difficult to meet the needs of both sets of stakeholders at the same time. However, increasing numbers of postgraduate courses in our area result in larger numbers of practitioners with a deeper appreciation of what research is all about, and how to interpret and apply the lessons from research. It also seems that there are contributions coming not just from construction-related university departments, but also from departments with identifiable methodological traditions of their own. I like to think that our authors can publish in journals beyond the construction-related areas, to disseminate their theoretical insights into other disciplines, and to contribute to the strength of this journal by citing our articles in more mono-disciplinary journals. This would contribute to the future of the journal in a very strong and developmental way. The greatest danger we face is in excessive self-citation, i.e. referring only to sources within the CM&E literature or, worse, referring only to other articles in the same journal. The only way to ensure a strong and influential position for journals and university departments like ours is to be sure that our work is informing other academic disciplines.
This is what I would see as the future, our logical next step. If, as a community of researchers, we are not producing papers that challenge and inform the fundamentals of research methods and analytical processes, then no matter how practically relevant our output is to the industry, it will remain derivative and secondary, based on the methodological insights of others. The balancing act between methodological rigour and practical relevance is a difficult one, but not, of course, a balance that has to be struck in every single paper.