Abstract:
A mononuclear complex [CuL] (1), a binuclear complex [Cu2LCl2(H2O)] (2) and a trinuclear complex [Cu3L2](ClO4)2 (3) involving o-phenylenediamine and salicylaldehyde, together with another binuclear complex [Cu2(L1)2](CH3COO)2 (4) of a tridentate ligand (H2L1) involving o-phenylenediamine and diacetylmonoxime, have been synthesized, where H2L = N,N'-o-phenylenebis(salicylideneimine) and H2L1 = 3-(2-aminophenylimino)butan-2-one oxime. All the complexes have been characterized by elemental analyses and by spectral and magnetic studies. The binuclear complex (2) was characterized structurally; its two Cu(II) centers are connected via an oxygen-bridged arrangement.
Abstract:
A near real-time flood detection algorithm giving a synoptic overview of the extent of flooding in both urban and rural areas, and capable of working during night-time and day-time even if cloud is present, could be a useful tool for operational flood relief management and flood forecasting. The paper describes an automatic algorithm using high-resolution Synthetic Aperture Radar (SAR) satellite data that assumes that high-resolution topographic height data are available for at least the urban areas of the scene, so that a SAR simulator can be used to estimate areas of radar shadow and layover. The algorithm proved capable of detecting flooding in rural areas using TerraSAR-X with good accuracy, and in urban areas with reasonable accuracy.
Abstract:
A near real-time flood detection algorithm giving a synoptic overview of the extent of flooding in both urban and rural areas, and capable of working during night-time and day-time even if cloud is present, could be a useful tool for operational flood relief management. The paper describes an automatic algorithm using high-resolution Synthetic Aperture Radar (SAR) satellite data that builds on existing approaches, including the use of image segmentation techniques prior to object classification to cope with the very large number of pixels in these scenes. Flood detection in urban areas is guided by the flood extent derived in adjacent rural areas. The algorithm assumes that high-resolution topographic height data are available for at least the urban areas of the scene, so that a SAR simulator can be used to estimate areas of radar shadow and layover. The algorithm proved capable of detecting flooding in rural areas using TerraSAR-X with good accuracy, classifying 89% of flooded pixels correctly, with an associated false positive rate of 6%. Of the urban water pixels visible to TerraSAR-X, 75% were correctly detected, with a false positive rate of 24%. If all urban water pixels were considered, including those in shadow and layover regions, these figures fell to 57% and 18%, respectively.
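The urban step can be made concrete with a minimal sketch: mask out the shadow and layover areas flagged by a SAR simulator, then flag the remaining low-backscatter pixels using a threshold taken from the backscatter of the adjacent rural flood extent. The function name, the percentile rule and the array layout below are illustrative assumptions, not the published algorithm.

```python
import numpy as np

def detect_urban_flood(backscatter_db, shadow_layover_mask, rural_flood_backscatter_db):
    """Flag likely flooded urban pixels (illustrative sketch).

    backscatter_db             : 2-D array of urban SAR backscatter (dB)
    shadow_layover_mask        : boolean array, True where the SAR simulator
                                 predicts radar shadow or layover
    rural_flood_backscatter_db : 1-D sample of backscatter over the flood
                                 extent already detected in adjacent rural areas
    """
    # Smooth open water appears dark in SAR; take an upper bound on "water-like"
    # backscatter from the rural flood pixels (illustrative 95th-percentile rule).
    threshold = np.percentile(rural_flood_backscatter_db, 95)
    candidate = backscatter_db < threshold
    # Pixels in shadow or layover are not observable by the sensor, so exclude them.
    return candidate & ~shadow_layover_mask
```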
Abstract:
xero, kline & coma is an artist-run project space at 258 Hackney Road, London. Its program, curated by Pil and Galia Kollectiv, focuses primarily on solo exhibitions by internationally established as well as emerging artists. Work by recent graduates King Conny Wobble and David Steans is being shown alongside projects like the Museum of American Art – Berlin, previously included in the Venice and Istanbul Biennales, Jeffrey Vallance, whose recent solo exhibition was at the Warhol Museum, and Plastique Fantastique, whose work has been shown at Tate Britain and the Pratt Manhattan Gallery, New York, with the aim of raising the profile of lesser-known artists and allowing others to experiment with work that more institutional contexts don’t always permit. Some of the themes this program has explored have included fictional identities, a-chronological art histories and the mediation of ritual in time-based media. A commitment to critically engaged art is also central to the ethos of the space, and future shows include an exploration of unionism in art by Sophie Carapetian. As well as displaying new work, the gallery hosts events, talks and screenings. Most recently these have included meetings of the Political Currency of Art research group, a discussion and film screening dealing with the theme of ‘hostile objects’ led by Evan Calder Williams and Marina Vishmidt, a book launch for New Lines of Alliance, New Spaces of Liberty by Antonio Negri and Felix Guattari, and an event dedicated to the theatre work of Slovenian art collective NSK, featuring a screening of unreleased documentation and a discussion about the future of total performance.
Abstract:
Traditionally, siting and sizing decisions for parks and reserves reflected ecological characteristics but typically failed to consider the ecological costs created by displaced resource collection, the welfare costs imposed on nearby rural people, and enforcement costs. Using a spatial game-theoretic model that incorporates the interaction of socioeconomic and ecological settings, we show how incorporating more recent mandates that include rural welfare and surrounding landscapes can result in very different optimal sizing decisions. The model informs our discussion of recent forest management in Tanzania, reserve sizing and siting decisions, estimating reserve effectiveness, and determining patterns of avoided forest degradation in Reducing Emissions from Deforestation and Forest Degradation (REDD) programs.
Abstract:
This paper examines the interaction of spatial and dynamic aspects of resource extraction from forests by local people. Highly cyclical and varied across space and time, the patterns of resource extraction resulting from the spatial–temporal model bear little resemblance to the patterns drawn from focusing either on spatial or temporal aspects of extraction alone. Ignoring this variability inaccurately depicts villagers’ dependence on different parts of the forest and could result in inappropriate policies. Similarly, the spatial links in extraction decisions imply that policies imposed in one area can have unintended consequences in other areas. Combining the spatial–temporal model with a measure of success in community forest management—the ability to avoid open-access resource degradation—characterizes the impact of incomplete property rights on patterns of resource extraction and stocks.
Abstract:
A common procedure for studying the effects on cognition of repetitive transcranial magnetic stimulation (rTMS) is to deliver rTMS concurrent with task performance, and to compare task performance on these trials versus on trials without rTMS. Recent evidence that TMS can have effects on neural activity that persist longer than the experimental session itself, however, raises questions about the assumption of the transient nature of rTMS that underlies many concurrent (or "online") rTMS designs. To our knowledge, there have been no studies in the cognitive domain examining whether the application of brief trains of rTMS during specific epochs of a complex task may have effects that spill over into subsequent task epochs, and perhaps into subsequent trials. We looked for possible immediate spill-over and longer-term cumulative effects of rTMS in data from two studies of visual short-term delayed recognition. In 54 subjects, 10-Hz rTMS trains were applied to five different brain regions during the 3-s delay period of a spatial task, and in a second group of 15 subjects, electroencephalography (EEG) was recorded while 10-Hz rTMS was applied to two brain areas during the 3-s delay period of both spatial and object tasks. No evidence for immediate effects was found in the comparison of the memory probe-evoked response on trials that were vs. were not preceded by delay-period rTMS. No evidence for cumulative effects was found in analyses of behavioral performance and of the EEG signal as a function of task block. The implications of these findings, and their relation to the broader literature on acute vs. long-lasting effects of rTMS, are considered.
Abstract:
Current methods for estimating vegetation parameters are generally sub-optimal in the way they exploit information and do not generally consider uncertainties. We look forward to a future where operational data assimilation schemes improve estimates by tracking land surface processes and exploiting multiple types of observations. Data assimilation schemes seek to combine observations and models in a statistically optimal way, taking into account uncertainty in both, but have not yet been much exploited in this area. The EO-LDAS scheme and prototype, developed under ESA funding, is designed to exploit the anticipated wealth of data that will be available under GMES missions, such as the Sentinel family of satellites, to provide improved mapping of land surface biophysical parameters. This paper describes the EO-LDAS implementation and explores some of its core functionality. EO-LDAS is a weak constraint variational data assimilation system. The prototype provides a mechanism for constraint based on a prior estimate of the state vector, a linear dynamic model, and Earth Observation data (top-of-canopy reflectance here). The observation operator is a non-linear optical radiative transfer model for a vegetation canopy with a soil lower boundary, operating over the range 400 to 2500 nm. Adjoint codes for all model and operator components are provided in the prototype by automatic differentiation of the computer codes. In this paper, EO-LDAS is applied to the problem of daily estimation of six of the parameters controlling the radiative transfer operator over the course of a year (> 2000 state vector elements). Zero- and first-order process model constraints are implemented and explored as the dynamic model. The assimilation estimates all state vector elements simultaneously. This is performed in the context of a typical Sentinel-2 MSI operating scenario, using synthetic MSI observations simulated with the observation operator and with uncertainties typical of those achieved by optical sensors assumed for the data. The experiments consider a baseline state vector estimation case and assess the impact of applying dynamic constraints on the a posteriori uncertainties. The results demonstrate that reductions in uncertainty by a factor of up to two might be obtained by applying the sorts of dynamic constraints used here. The hyperparameters (dynamic model uncertainty) required to control the assimilation are estimated by a cross-validation exercise. The result of the assimilation is seen to be robust to missing observations, even with quite large data gaps.
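As a rough illustration of what a weak constraint variational scheme of this kind minimises, the sketch below combines a prior term, a first-order dynamic-model (smoothness) term and an observation term for a single daily parameter. The toy observation operator, the uncertainties and the dimensions are assumptions for illustration only, not the EO-LDAS radiative transfer operator or its settings.

```python
import numpy as np
from scipy.optimize import minimize

# Toy setup: one parameter estimated daily over a year, observed every 10 days.
n_days = 365
x_prior = np.full(n_days, 0.5)     # prior estimate of the state vector
sigma_prior = 0.2                  # prior uncertainty
sigma_model = 0.05                 # dynamic-model (smoothness) uncertainty
sigma_obs = 0.01                   # observation uncertainty

obs_days = np.arange(0, n_days, 10)
truth = 0.5 + 0.3 * np.sin(2 * np.pi * np.arange(n_days) / 365)
y_obs = truth[obs_days] + np.random.normal(0, sigma_obs, obs_days.size)

def observation_operator(x):
    # Stand-in for the non-linear radiative transfer operator: here it simply
    # samples the state on observation days.
    return x[obs_days]

def cost(x):
    j_prior = 0.5 * np.sum(((x - x_prior) / sigma_prior) ** 2)                 # prior constraint
    j_model = 0.5 * np.sum((np.diff(x) / sigma_model) ** 2)                    # first-order dynamic model
    j_obs = 0.5 * np.sum(((observation_operator(x) - y_obs) / sigma_obs) ** 2) # observation term
    return j_prior + j_model + j_obs

result = minimize(cost, x_prior, method="L-BFGS-B")
x_analysis = result.x              # daily a posteriori estimate of the parameter
```

Larger values of sigma_model weaken the dynamic constraint (approaching the zero-order, independent-day case), while smaller values enforce smoother trajectories and propagate information into data gaps.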
Abstract:
We perform a multimodel detection and attribution study with climate model simulation output and satellite-based measurements of tropospheric and stratospheric temperature change. We use simulation output from 20 climate models participating in phase 5 of the Coupled Model Intercomparison Project. This multimodel archive provides estimates of the signal pattern in response to combined anthropogenic and natural external forcing (the fingerprint) and the noise of internally generated variability. Using these estimates, we calculate signal-to-noise (S/N) ratios to quantify the strength of the fingerprint in the observations relative to fingerprint strength in natural climate noise. For changes in lower stratospheric temperature between 1979 and 2011, S/N ratios vary from 26 to 36, depending on the choice of observational dataset. In the lower troposphere, the fingerprint strength in observations is smaller, but S/N ratios are still significant at the 1% level or better, and range from three to eight. We find no evidence that these ratios are spuriously inflated by model variability errors. After removing all global mean signals, model fingerprints remain identifiable in 70% of the tests involving tropospheric temperature changes. Despite such agreement in the large-scale features of model and observed geographical patterns of atmospheric temperature change, most models do not replicate the size of the observed changes. On average, the models analyzed underestimate the observed cooling of the lower stratosphere and overestimate the warming of the troposphere. Although the precise causes of such differences are unclear, model biases in lower stratospheric temperature trends are likely to be reduced by more realistic treatment of stratospheric ozone depletion and volcanic aerosol forcing.
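A bare-bones sketch of the kind of S/N calculation described here: project the observed temperature-change pattern and a set of unforced control-run patterns onto the multimodel fingerprint, and take the ratio of the observed projection to the standard deviation of the control-run projections. The grid size, the projection choice and the synthetic data below are assumptions for illustration, not the study's actual datasets or processing.

```python
import numpy as np

def project(field_2d, fingerprint_2d):
    """Spatial projection (pattern covariance) of a field onto the fingerprint."""
    return np.sum(field_2d * fingerprint_2d)

def signal_to_noise(obs_change, fingerprint, control_changes):
    """S/N = observed projection onto the fingerprint divided by the standard
    deviation of the same projection across unforced control-run samples."""
    signal = project(obs_change, fingerprint)
    noise = np.std([project(c, fingerprint) for c in control_changes])
    return signal / noise

# Toy example on a 10 x 20 latitude-longitude grid.
rng = np.random.default_rng(0)
fingerprint = rng.normal(size=(10, 20))
obs_change = 3.0 * fingerprint + rng.normal(size=(10, 20))   # forced signal plus noise
control_changes = rng.normal(size=(50, 10, 20))              # 50 unforced samples
print(signal_to_noise(obs_change, fingerprint, control_changes))
```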
Abstract:
A new tetranuclear complex, [Cu4L4](ClO4)4·2H2O (1), has been synthesized from the self-assembly of copper(II) perchlorate and the tridentate Schiff base ligand (2E,3E)-3-(2-aminopropylimino)butan-2-one oxime (HL). Single-crystal X-ray diffraction studies reveal that complex 1 consists of a Cu4(NO)4 core in which the four copper(II) centers, each in a square pyramidal environment, are arranged in a distorted tetrahedral geometry. They are linked together by a rare bridging mode (μ3-η1,η2,η1) of the oximato ligands. Analysis of magnetic susceptibility data indicates moderate antiferromagnetic exchange interactions (J1 = −48 cm−1, J2 = −40 cm−1 and J3 = −52 cm−1) through σ-superexchange pathways (in-plane bridging) of the oxime group. Theoretical calculations based on the DFT technique have been used to obtain the energy states of the different spin configurations, to estimate the coupling constants, and to understand the exact magnetic exchange pathways.
Abstract:
When villagers extract resources, such as fuelwood, fodder, or medicinal plants from forests, their decisions over where and how much to extract are influenced by market conditions, their particular opportunity costs of time, minimum consumption needs, and access to markets. This paper develops an optimization model of villagers’ extraction behavior that clarifies how, and under what conditions, policies that create incentives such as improved returns to extraction in a buffer zone might be used instead of adversarial enforcement efforts to protect a forest’s pristine "inner core."
Abstract:
Full-waveform laser scanning data acquired with a Riegl LMS-Q560 instrument were used to classify an orange orchard into orange trees, grass and ground using waveform parameters alone. Gaussian decomposition was performed on these data, captured during the National Airborne Field Experiment in November 2006, using a custom peak-detection procedure and a trust-region-reflective algorithm for fitting Gaussian functions. Calibration was carried out using waveforms returned from a road surface, and the backscattering coefficient γ was derived for every waveform peak. The processed data were then analysed according to the number of returns detected within each waveform and classified into three classes based on pulse width and γ. For single-peak waveforms the scatterplot of γ versus pulse width was used to distinguish between ground, grass and orange trees. In the case of multiple returns, the relationship between first (or first plus middle) and last return γ values was used to separate ground from other targets. Refinement of this classification, and further sub-classification into grass and orange trees, was performed using the γ versus pulse width scatterplots of last returns. In all cases the separation was carried out using a decision tree with empirical relationships between the waveform parameters. Ground points were successfully separated from orange tree points. The most difficult class to separate and verify was grass, but those points generally corresponded well with the grass areas identified in the aerial photography. The overall accuracy reached 91%, using photography and relative elevation as ground truth. The overall accuracy for two classes, orange tree and a combined class of grass and ground, was 95%. Finally, the backscattering coefficient γ of single-peak waveforms was also used to derive reflectance values for the three classes. The reflectances of the orange tree class (0.31) and the ground class (0.60) are consistent with published values at the wavelength of the Riegl scanner (1550 nm). The grass class reflectance (0.46) falls between the other two, as might be expected since this class mixes the contributions of vegetation and ground reflectance properties.
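A minimal sketch of the Gaussian decomposition step, assuming a simple peak detector and SciPy's trust-region-reflective least-squares solver; the synthetic waveform, the noise floor and the initial width guess are illustrative stand-ins for the study's custom procedure and calibration.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.signal import find_peaks

def gaussian_sum(t, params):
    """Sum of Gaussians; params = [amp1, centre1, width1, amp2, centre2, width2, ...]."""
    model = np.zeros_like(t)
    for amp, centre, width in params.reshape(-1, 3):
        model += amp * np.exp(-0.5 * ((t - centre) / width) ** 2)
    return model

def decompose(t, waveform, noise_floor=0.05):
    """Detect candidate peaks and fit a sum of Gaussians to the waveform."""
    peaks, _ = find_peaks(waveform, height=noise_floor)       # crude peak detection
    p0 = np.ravel([[waveform[p], t[p], 1.0] for p in peaks])  # initial (amp, centre, width) guesses
    fit = least_squares(lambda p: gaussian_sum(t, p) - waveform,
                        p0, method="trf")                     # trust-region-reflective solver
    return fit.x.reshape(-1, 3)                               # one (amp, centre, width) per return

# Synthetic two-return waveform: a canopy echo followed by a ground echo.
t = np.linspace(0, 30, 300)
wave = gaussian_sum(t, np.array([0.6, 10.0, 1.2, 0.9, 18.0, 0.8]))
wave += np.random.normal(0, 0.01, t.size)
print(decompose(t, wave))
```

The fitted amplitudes and widths are the waveform parameters that, after calibration against a reference surface, feed the pulse-width versus backscatter decision rules described above.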