892 results for EXPLOITING MULTICOMMUTATION
Abstract:
This unique book is the first of its kind to explore the diversity of interactions between insects and birds. A group of international experts enthusiastically agreed to contribute to the four sections of the book following the success of an Entomological Club Conference on Insect and Bird Interactions. The first section covers population management issues, discussing effects on birds highly relevant to the planting of large areas of GM crops, new opportunities for increasing biodiversity in farming landscapes, and the novel aspect of managing insects by exploiting birds as biological control agents. This is followed by a section discussing the effects of insecticides on bird populations, and includes a contribution from the RSPB, as well as a re-appraisal of the effects of DDT on raptors. Next, the foraging behaviour of birds on insects is discussed, with chapters also on 'warning' coloration in insects and learning by birds. The first chapter in this section is unusual in having been written by an ophthalmologist and covers colour vision in birds, more specifically ultraviolet vision in relation to insect coloration. Finally, the authors look at insects that are parasites of birds or feed on the detritus in nests, and review the ecology and evolution of the co-adaptation of insect ectoparasites with birds. Insect and Bird Interactions is unparalleled in scope and coverage and will be of interest to entomologists, ornithologists, and ecologists alike.
Abstract:
A cylinder forming poly(styrene-b-butadiene-b-styrene) triblock copolymer melt is cyclically processed through a capillary at a high shear rate in the Cambridge Multipass Rheometer (MPR). In situ X-ray diffraction experiments enable observation of the effect of the shear on the block copolymer (BCP) nanophase orientation, both during and after processing. Temporal resolution of the X-ray exposures is increased, whilst retaining intensity, by exploiting the cyclical nature of the shear and the material's response to it; short exposures from many cycles, individually having few counts, are added together to produce well resolved X-ray patterns. Orientation of the cylinders reduces during processing, then increases during pauses between processing. The loss of orientation is attributed to the high shear rate deforming the melt faster than the structure can respond, whilst it is believed that melt relaxation, linked to the compressibility of the material, produces much lower shear rates after mechanical processing has ceased, which induces strong orientation of the nanostructure.
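One way to picture the exposure-accumulation scheme described above is as phase-binned summation: short, low-count detector frames recorded at the same point in successive shear cycles are simply added together. The Python sketch below is illustrative only; the frame layout, the phase variable and the number of phase bins are assumptions rather than details of the MPR experiment.

import numpy as np

def accumulate_by_phase(frames, cycle_phase, n_bins=20):
    """Sum 2-D detector frames that fall into the same phase bin of the shear cycle.

    frames      : array (n_frames, ny, nx) of short, low-count exposures
    cycle_phase : array (n_frames,) of phases in [0, 1) within the cycle
    """
    bins = np.minimum((cycle_phase * n_bins).astype(int), n_bins - 1)
    summed = np.zeros((n_bins,) + frames.shape[1:])
    for b in range(n_bins):
        summed[b] = frames[bins == b].sum(axis=0)   # add frames from many cycles
    return summed   # one well-resolved pattern per phase bin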
Abstract:
[Ru(2,2'-bipyridine)2(Hdpa)](BF4)2·2H2O (1), [Ru(1,10-phenanthroline)2(Hdpa)](PF6)2·CH2Cl2 (2) and [Ru(4,4,4',4'-tetramethyl-2,2'-bisoxazoline)2(Hdpa)](PF6)2 (3) are synthesized where Hdpa is 2,2'-dipyridylamine. The X-ray crystal structures of 1 and 2 have been determined. Hdpa in 1 and 2 is found to bind the metal via the two pyridyl N ends. Comparing the NMR spectra in DMSO-d6, it is concluded that 3 has a similar structure. The pKa values (for the dissociation of the NH proton in Hdpa) of free Hdpa and its complexes are determined in acetonitrile by exploiting molar conductance. These correlate linearly with the chemical shift of the NH proton in the respective entities. (C) 2007 Elsevier B.V. All rights reserved.
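For orientation, the sketch below shows one standard route from molar conductance to a pKa, via Ostwald's dilution law (degree of dissociation alpha = Lambda_m/Lambda_m0, Ka = c·alpha²/(1 − alpha)). Whether the paper applies exactly this treatment to the NH dissociation in acetonitrile is not stated in the abstract, and the numbers used here are hypothetical.

import numpy as np

def pKa_from_conductance(molar_conductance, limiting_conductance, concentration):
    """pKa from the degree of dissociation alpha = Lambda_m / Lambda_m0."""
    alpha = molar_conductance / limiting_conductance       # degree of dissociation
    Ka = concentration * alpha**2 / (1.0 - alpha)          # Ostwald's dilution law
    return -np.log10(Ka)

# hypothetical values: Lambda_m = 12, Lambda_m0 = 180 S cm^2 mol^-1, c = 1e-3 mol L^-1
print(pKa_from_conductance(12.0, 180.0, 1.0e-3))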
Abstract:
We describe experiments designed to explore the possibility of using amyloid fibrils as new nanoscale biomaterials for promoting and exploiting cell adhesion, migration and differentiation in vitro. We created peptides that add the biological cell adhesion sequence (RGD) or a control sequence (RAD) to the C-terminus of an 11-residue peptide corresponding to residues 105-115 of the amyloidogenic protein transthyretin. These peptides readily self-assemble in aqueous solution to form amyloid fibrils, and X-ray fibre diffraction shows that they possess the same strand and sheet spacing in the characteristic cross-beta structure as do fibrils formed by the parent peptide. We report that the fibrils containing the RGD sequence are bioactive and that these fibrils interact specifically with cells via the RGD group displayed on the fibril surface. As the design of such functionalized fibrils can be systematically altered, these findings suggest that it will be possible to generate nanomaterials based on amyloid fibrils that are tailored to promote interactions with a wide variety of cell types. (c) 2007 Elsevier Ltd. All rights reserved.
Abstract:
This article considers how visual practices are used to manage knowledge in project-based work. It compares project-based work in a capital goods manufacturer and an architectural firm. Visual representations are used extensively in both cases, but the nature of visual practice differs significantly between the two. The research explores the kinds of knowledge that are (and aren't) developed and made visible in strategizing and planning activities. For example, whereas the emphasis of project-based work in the former firm is on exploitation of knowledge, and it visualizes its project context largely in commercial and processual terms, the emphasis in the latter is on exploration, and it uses a wide range of visual materials to understand physical interdependencies across the project boundary. We contend that particular kinds of visual tools can help project teams step between exploration and exploitation within a project, and we articulate the types of representations, foci of attention and patterns of interaction involved. The findings suggest that business managers can make more deliberate choices about how knowledge is made visible, and can change visual practice to align the project with exploring and exploiting opportunities. It raises the question: What don't you see within your organization? The work contributes to academic debates about managing through projects, strategizing and organizing, while the focus on visual representation disrupts the tacit-codified dichotomy in the broad debate on knowledge and learning, and highlights the craft skills central to strategizing and organizing.
Abstract:
We describe a FORTRAN-90 program that computes scattering t-matrices for a molecule. These can be used in a Low-Energy Electron Diffraction program to solve the molecular structural problem very efficiently. The intramolecular multiple scattering is computed within a Dyson-like approach, using free-space Green propagators in a basis of spherical waves. The advantage of this approach lies in exploiting the chemical identity of the molecule and in the ease with which these t-matrices can be translated and rotated without performing a new multiple-scattering calculation for each configuration. FORTRAN-90 routines for rotating the resulting t-matrices using Wigner matrices are also provided.
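The rotation step mentioned above amounts to transforming each angular-momentum block of the t-matrix with a Wigner D-matrix. The published routines are FORTRAN-90; the sketch below is an illustrative Python analogue (using sympy for the Wigner D elements), and the T' = D T D† convention and the toy diagonal block are assumptions rather than details of the actual code.

import numpy as np
from sympy import N
from sympy.physics.quantum.spin import Rotation

def wigner_D(l, alpha, beta, gamma):
    """(2l+1) x (2l+1) Wigner D-matrix D^l_{m m'}(alpha, beta, gamma)."""
    dim = 2 * l + 1
    D = np.empty((dim, dim), dtype=complex)
    for i, m in enumerate(range(-l, l + 1)):
        for j, mp in enumerate(range(-l, l + 1)):
            D[i, j] = complex(N(Rotation.D(l, m, mp, alpha, beta, gamma).doit()))
    return D

def rotate_t_block(t_block, l, alpha, beta, gamma):
    """Rotate one (l, l) block of a spherical-wave t-matrix: T' = D T D^dagger."""
    D = wigner_D(l, alpha, beta, gamma)
    return D @ t_block @ D.conj().T

# usage: rotate a toy l = 1 block by Euler angles (0.3, 0.5, 0.1) rad
t_l1 = np.diag([0.20 + 0.10j, 0.30 + 0.05j, 0.20 + 0.10j])
t_rot = rotate_t_block(t_l1, 1, 0.3, 0.5, 0.1)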
Abstract:
Many well-established statistical methods in genetics were developed in a climate of severe constraints on computational power. Recent advances in simulation methodology now bring modern, flexible statistical methods within the reach of scientists having access to a desktop workstation. We illustrate the potential advantages now available by considering the problem of assessing departures from Hardy-Weinberg (HW) equilibrium. Several hypothesis tests of HW have been established, as well as a variety of point estimation methods for the parameter which measures departures from HW under the inbreeding model. We propose a computational, Bayesian method for assessing departures from HW, which has a number of important advantages over existing approaches. The method incorporates the effects of uncertainty about the nuisance parameters (the allele frequencies) as well as the boundary constraints on f (which are functions of the nuisance parameters). Results are naturally presented visually, exploiting the graphics capabilities of modern computer environments to allow straightforward interpretation. Perhaps most importantly, the method is founded on a flexible, likelihood-based modelling framework, which can incorporate the inbreeding model if appropriate, but also allows the assumptions of the model to be investigated and, if necessary, relaxed. Under appropriate conditions, information can be shared across loci and, possibly, across populations, leading to more precise estimation. The advantages of the method are illustrated by application both to simulated data and to data analysed by alternative methods in the recent literature.
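As a rough illustration of the kind of computation involved (not the authors' implementation), the sketch below evaluates a grid-based posterior for the inbreeding coefficient f at a single two-allele locus, with flat priors, the boundary constraint on f enforced as a function of the nuisance allele frequency p, and p marginalized out. The genotype counts are made-up example data.

import numpy as np

n_AA, n_Aa, n_aa = 30, 43, 27                      # hypothetical genotype counts

p_grid = np.linspace(0.001, 0.999, 400)            # nuisance allele frequency
f_grid = np.linspace(-0.999, 0.999, 400)           # departure from HW (inbreeding model)

log_post = np.full((f_grid.size, p_grid.size), -np.inf)
for i, f in enumerate(f_grid):
    q = 1.0 - p_grid
    # boundary constraint: genotype probabilities must be non-negative,
    # i.e. f >= max(-p/q, -q/p), a function of the nuisance parameter p
    ok = f >= np.maximum(-p_grid / q, -q / p_grid)
    pAA = p_grid**2 + f * p_grid * q
    pAa = 2.0 * p_grid * q * (1.0 - f)
    paa = q**2 + f * p_grid * q
    with np.errstate(divide="ignore", invalid="ignore"):
        ll = n_AA * np.log(pAA) + n_Aa * np.log(pAa) + n_aa * np.log(paa)
    log_post[i, ok] = ll[ok]                       # flat priors on p and f assumed

post = np.exp(log_post - log_post.max())
post_f = post.sum(axis=1)                          # marginalize over p
post_f /= post_f.sum()
print("posterior mean of f:", float((f_grid * post_f).sum()))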
Abstract:
Purpose – Expectations of future market conditions are acknowledged to be crucial for the development decision and hence for shaping the built environment. The purpose of this paper is to study the central London office market from 1987 to 2009 and test for evidence of rational, adaptive and naive expectations. Design/methodology/approach – Two parallel approaches are applied to test for either rational or adaptive/naive expectations: a vector auto-regressive (VAR) approach with Granger causality tests and a recursive OLS regression with one-step forecasts. Findings – Applying VAR models and a recursive OLS regression with one-step forecasts, the authors do not find evidence of adaptive and naive expectations of developers. Although the magnitude of the errors and the length of time lags between market signal and construction starts vary over time and development cycles, the results confirm that developer decisions are explained, to a large extent, by contemporaneous and historic conditions in both the City and the West End, but this is more likely to stem from the lengthy design, financing and planning permission processes than from adaptive or naive expectations. Research limitations/implications – More generally, the results of this study suggest that real estate cycles are largely generated endogenously rather than being the result of large demand shocks and/or irrational behaviour. Practical implications – Developers may be able to generate excess profits by exploiting market inefficiencies, but this may be hindered in practice by the long periods necessary for planning and construction of the asset. Originality/value – This paper focuses the scholarly debate on real estate cycles on the role of expectations. It is also one of very few spatially disaggregated studies of the subject matter.
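A minimal sketch of the first of the two approaches (a VAR with Granger causality tests), using statsmodels, is given below; the file name, the variable names and the lag settings are placeholders rather than the paper's actual specification.

import pandas as pd
from statsmodels.tsa.api import VAR

df = pd.read_csv("central_london_office.csv", index_col=0, parse_dates=True)
data = df[["starts", "rents", "vacancy"]].dropna()    # hypothetical quarterly series

model = VAR(data)
res = model.fit(maxlags=8, ic="aic")                  # lag order chosen by AIC

# H0: rents and vacancy do not Granger-cause construction starts
gc = res.test_causality("starts", ["rents", "vacancy"], kind="f")
print(gc.summary())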
Abstract:
Almost all research fields in geosciences use numerical models and observations and combine these using data-assimilation techniques. With ever-increasing resolution and complexity, the numerical models tend to be highly nonlinear and also observations become more complicated and their relation to the models more nonlinear. Standard data-assimilation techniques like (ensemble) Kalman filters and variational methods like 4D-Var rely on linearizations and are likely to fail in one way or another. Nonlinear data-assimilation techniques are available, but are only efficient for small-dimensional problems, hampered by the so-called ‘curse of dimensionality’. Here we present a fully nonlinear particle filter that can be applied to higher dimensional problems by exploiting the freedom of the proposal density inherent in particle filtering. The method is illustrated for the three-dimensional Lorenz model using three particles and the much more complex 40-dimensional Lorenz model using 20 particles. By also applying the method to the 1000-dimensional Lorenz model, again using only 20 particles, we demonstrate the strong scale-invariance of the method, leading to the optimistic conjecture that the method is applicable to realistic geophysical problems. Copyright © 2010 Royal Meteorological Society
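For readers unfamiliar with the machinery, the sketch below is a standard bootstrap particle filter run on the three-variable Lorenz-63 model. It shows the generic propagate-weight-resample cycle only; the paper's key ingredient, a carefully chosen proposal density that keeps the filter efficient in high-dimensional problems, is not implemented here, and all settings are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def lorenz63_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx

n_particles, obs_err, model_err = 100, 1.0, 0.05
truth = np.array([1.0, 1.0, 20.0])
particles = truth + rng.normal(0.0, 1.0, size=(n_particles, 3))

for step in range(500):
    truth = lorenz63_step(truth)
    particles = np.array([lorenz63_step(p) for p in particles])
    particles += rng.normal(0.0, model_err, size=particles.shape)   # model error
    if step % 20 == 0:                                              # observe x only
        y = truth[0] + rng.normal(0.0, obs_err)
        logw = -0.5 * ((y - particles[:, 0]) / obs_err) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)        # resample
        particles = particles[idx]

print("truth:        ", truth)
print("ensemble mean:", particles.mean(axis=0))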
Abstract:
In this important article Richard Hoyle, one of the country’s leading historians of the early modern period, introduces new perspectives on the Land Tax and its use in the analysis of local communities in the late eighteenth and early nineteenth centuries. He uses as his case study the parish of Earls Colne in Essex, on which he has already written extensively with Professor Henry French. The article begins with an overview of the tax itself, explaining its history and the procedures for the collection of revenues – including the numerous changes which took place. The sizeable problems confronting any would-be analyst of the data are clearly identified, and Hoyle observes that because of these apparently insoluble difficulties the potential of the tax returns has never been fully realised. He then considers the surviving documentation in The National Archives, providing an accessible introduction to the sources and their arrangement, and describing the particularly important question of the redemption of the tax by payment of a lump sum. The extent of redemption (in the years around 1800-1804) is discussed. Hoyle draws attention to the potential for linking the tax returns themselves with the redemption certificates (which have never been subjected to historical analysis), and thereby proposes new ways of exploiting the evidence of the taxation as a whole. The article then discusses in detail the specific case of Earls Colne, with tabulated data showing the research potential. Topics analysed include the ownership of property ranked by size of payment, and calculations whereby the amount paid may be used to determine the worth of land and the structure of individual estates. The important question of absentee owners is investigated, and there is a very valuable consideration of the potential for looking at portfolio estate ownership, whereby owners held land in varying proportions in a number of parishes. It is suggested that such studies will allow us to be more aware of the entirety of property ownership, which a focus on a single community does not permit. In the concluding paragraph it is argued that using these sources we may see the rise and fall of estates, gain new information on landownership, landholding and farm size, and even approach the challenging topic of the distribution of wealth.
Abstract:
Ice cloud representation in general circulation models remains a challenging task, due to the lack of accurate observations and the complexity of microphysical processes. In this article, we evaluate the ice water content (IWC) and ice cloud fraction statistical distributions from the numerical weather prediction models of the European Centre for Medium-Range Weather Forecasts (ECMWF) and the UK Met Office, exploiting the synergy between the CloudSat radar and CALIPSO lidar. Using the last three weeks of July 2006, we analyse the global ice cloud occurrence as a function of temperature and latitude and show that the models capture the main geographical and temperature-dependent distributions, but overestimate the ice cloud occurrence in the Tropics in the temperature range from −60 °C to −20 °C and in the Antarctic for temperatures higher than −20 °C, while underestimating ice cloud occurrence at very low temperatures. A global statistical comparison of the occurrence of grid-box mean IWC at different temperatures shows that both the mean and the range of IWC increase with increasing temperature. Globally, the models capture most of the IWC variability in the temperature range between −60 °C and −5 °C, and also reproduce the observed latitudinal dependencies in the IWC distribution due to different meteorological regimes. Two versions of the ECMWF model are assessed. The recent operational version with a diagnostic representation of precipitating snow and mixed-phase ice cloud fails to represent the IWC distribution in the −20 °C to 0 °C range, but a new version with prognostic variables for liquid water, ice and snow is much closer to the observed distribution. The comparison of models and observations provides a much-needed analysis of the vertical distribution of IWC across the globe, highlighting the ability of the models to reproduce much of the observed variability as well as the deficiencies where further improvements are required.
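At its core, the model-observation comparison described above is a matter of compositing collocated IWC samples by temperature. A minimal sketch of that bookkeeping follows; the inputs (arrays of collocated temperature and in-cloud IWC) and the 5 °C bin width are assumptions for illustration, not the analysis choices of the paper.

import numpy as np

T_EDGES = np.arange(-60, 1, 5)   # temperature bin edges in deg C

def iwc_distribution_by_temperature(temp_c, iwc):
    """Median and interquartile range of in-cloud IWC in each temperature bin."""
    rows = []
    for lo, hi in zip(T_EDGES[:-1], T_EDGES[1:]):
        sel = (temp_c >= lo) & (temp_c < hi) & (iwc > 0)
        if sel.any():
            q25, q50, q75 = np.percentile(iwc[sel], [25, 50, 75])
            rows.append((lo, hi, int(sel.sum()), q25, q50, q75))
    return rows   # one row per bin: (T_low, T_high, n_samples, q25, median, q75)

# applied separately to model and observed samples, the rows can be compared bin by bin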
Abstract:
Expectations of future market conditions are generally acknowledged to be crucial for the development decision and hence for shaping the built environment. This empirical study of the Central London office market from 1987 to 2009 tests for evidence of adaptive and naive expectations. Applying VAR models and a recursive OLS regression with one-step forecasts, we find evidence of adaptive and naive, rather than rational, expectations of developers. Although the magnitude of the errors and the length of time lags vary over time and development cycles, the results confirm that developers’ decisions are explained to a large extent by contemporaneous and past conditions in both London submarkets. The corollary of this finding is that developers may be able to generate excess profits by exploiting market inefficiencies, but this may be hindered in practice by the long periods necessary for planning and construction of the asset. More generally, the results of this study suggest that real estate cycles are largely generated endogenously rather than being the result of unexpected exogenous shocks.
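For the second strand of the methodology (recursive OLS with one-step-ahead forecasts), a minimal sketch is given below; the column names, the one-period lag structure and the expanding-window length are illustrative assumptions, not the paper's specification.

import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("central_london_office.csv", index_col=0, parse_dates=True)
y = df["starts"]
X = sm.add_constant(df[["rents", "vacancy"]].shift(1)).dropna()   # lagged market signals
y = y.loc[X.index]

errors = []
for t in range(40, len(y)):                        # expanding estimation window
    fit = sm.OLS(y.iloc[:t], X.iloc[:t]).fit()
    forecast = np.asarray(fit.predict(X.iloc[t : t + 1]))[0]   # one-step-ahead forecast
    errors.append(y.iloc[t] - forecast)

errors = pd.Series(errors)
print("mean forecast error:        ", errors.mean())
print("first-order autocorrelation:", errors.autocorr())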
Abstract:
We present an outlook on the climate system thermodynamics. First, we construct an equivalent Carnot engine with efficiency η and frame the Lorenz energy cycle in a macroscale thermodynamic context. Then, by exploiting the second law, we prove that the lower bound to the entropy production is η times the integrated absolute value of the internal entropy fluctuations. An exergetic interpretation is also proposed. Finally, the controversial maximum entropy production principle is reinterpreted as requiring the joint optimization of heat transport and mechanical work production. These results provide tools for climate change analysis and for climate models’ validation.
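For reference, the textbook Carnot efficiency that the equivalent-engine construction alludes to is written below in LaTeX; the climate-specific definitions of the effective warm and cold reservoir temperatures (denoted here T^{+} and T^{-}) are a notational assumption and are not reproduced from the article.

\eta \;=\; \frac{W}{Q^{+}} \;=\; 1 - \frac{T^{-}}{T^{+}}, \qquad T^{+} > T^{-},

where W is the work output of the engine and Q^{+} is the heat absorbed from the warm reservoir.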
Abstract:
A low-cost, disposable instrument for measuring solar radiation during meteorological balloon flights through cloud layers is described. Using a photodiode detector and low thermal drift signal conditioning circuitry, the device showed less than 1% drift over temperatures ranging from +20 °C to −35 °C. The angular response to radiation, which declined less rapidly than the cosine of the angle between the incident radiation and normal incidence, is used for cloud detection by exploiting the motion of the platform. With the instrument oriented upwards, the natural motion imposed by the balloon allows cloud and clear air to be distinguished by the absence of radiation variability within cloud, where the diffuse radiation present is isotropic. The optical method employed by the solar radiation instrument has also been demonstrated to provide higher resolution measurements of cloud boundaries than relative humidity measurements alone.
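A minimal sketch of the variability-based cloud flag described above: with the photodiode pointing upwards and the balloon swinging, clear-air samples show large signal variability, while in-cloud samples, lit by diffuse isotropic radiation, do not. The window length and threshold are illustrative assumptions, not values from the paper.

import pandas as pd

def cloud_flag(signal, window=30, threshold=0.02):
    """True where the normalized rolling variability of the signal is low (in cloud)."""
    s = pd.Series(signal)
    rolling = s.rolling(window, center=True)
    variability = rolling.std() / rolling.mean()   # coefficient of variation
    return (variability < threshold).to_numpy()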
Abstract:
Current methods for estimating vegetation parameters are generally sub-optimal in the way they exploit information and do not generally consider uncertainties. We look forward to a future where operational data assimilation schemes improve estimates by tracking land surface processes and exploiting multiple types of observations. Data assimilation schemes seek to combine observations and models in a statistically optimal way, taking into account uncertainty in both, but have not yet been much exploited in this area. The EO-LDAS scheme and prototype, developed under ESA funding, is designed to exploit the anticipated wealth of data that will be available under GMES missions, such as the Sentinel family of satellites, to provide improved mapping of land surface biophysical parameters. This paper describes the EO-LDAS implementation and explores some of its core functionality. EO-LDAS is a weak-constraint variational data assimilation system. The prototype provides a mechanism for constraint based on a prior estimate of the state vector, a linear dynamic model, and Earth Observation data (top-of-canopy reflectance here). The observation operator is a non-linear optical radiative transfer model for a vegetation canopy with a soil lower boundary, operating over the range 400 to 2500 nm. Adjoint codes for all model and operator components are provided in the prototype by automatic differentiation of the computer codes. In this paper, EO-LDAS is applied to the problem of daily estimation of six of the parameters controlling the radiative transfer operator over the course of a year (> 2000 state vector elements). Zero- and first-order process model constraints are implemented and explored as the dynamic model. The assimilation estimates all state vector elements simultaneously. This is performed in the context of a typical Sentinel-2 MSI operating scenario, using synthetic MSI observations simulated with the observation operator, with uncertainties typical of those achieved by the optical sensors assumed for the data. The experiments consider a baseline state vector estimation case with no dynamic constraints applied, and assess the impact of adding dynamic constraints on the a posteriori uncertainties. The results demonstrate that reductions in uncertainty by a factor of up to two might be obtained by applying the sorts of dynamic constraints used here. The hyperparameter (dynamic model uncertainty) required to control the assimilation is estimated by a cross-validation exercise. The result of the assimilation is seen to be robust to missing observations and quite large data gaps.
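As a rough illustration of the weak-constraint variational idea (not the EO-LDAS code itself, which uses a full radiative transfer observation operator and adjoints generated by automatic differentiation), the sketch below minimizes a cost function with a prior term, an observation term and a zero-order process-model (smoothness) term for a single parameter trajectory over one year. The toy linear observation operator and all names and numbers are assumptions.

import numpy as np
from scipy.optimize import minimize

n_days = 365
x_prior = np.full(n_days, 0.5)                    # prior estimate of the trajectory
sigma_prior, sigma_obs, sigma_model = 0.5, 0.05, 0.02

def H(x):                                         # toy linear "observation operator"
    return 0.8 * x + 0.1

rng = np.random.default_rng(1)
obs_days = np.arange(0, n_days, 10)               # sparse observation days
truth = 0.5 + 0.3 * np.sin(2.0 * np.pi * np.arange(n_days) / 365.0)
y_obs = H(truth[obs_days]) + rng.normal(0.0, sigma_obs, obs_days.size)

def cost(x):
    j_prior = 0.5 * np.sum(((x - x_prior) / sigma_prior) ** 2)          # prior term
    j_obs = 0.5 * np.sum(((y_obs - H(x[obs_days])) / sigma_obs) ** 2)   # observation term
    j_model = 0.5 * np.sum((np.diff(x) / sigma_model) ** 2)             # zero-order model: x(t+1) ~ x(t)
    return j_prior + j_obs + j_model

result = minimize(cost, x_prior, method="L-BFGS-B")
x_analysis = result.x                             # a posteriori (analysis) trajectory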