914 results for Distorted probabilities
Abstract:
Two pentaaza macrocycles containing pyridine in the backbone, namely 3,6,9,12,18-pentaazabicyclo[12.3.1]octadeca-1(18),14,16-triene ([15]pyN(5)) and 3,6,10,13,19-pentaazabicyclo[13.3.1]nonadeca-1(19),15,17-triene ([16]pyN(5)), were synthesized in good yields. The acid-base behaviour of these compounds was studied by potentiometry at 298.2 K in aqueous solution at an ionic strength of 0.10 M in KNO3. The protonation sequence of [15]pyN(5) was investigated by 1H NMR titration, which also allowed the determination of protonation constants in D2O. Binding studies of the two ligands with Ca2+, Ni2+, Cu2+, Zn2+, Cd2+, and Pb2+ metal ions were performed under the same experimental conditions. The results showed that all the complexes formed with the 15-membered ligand, particularly those of Cu2+ and especially Ni2+, are thermodynamically more stable than those with the larger macrocycle. Cyclic voltammetric data showed that the copper(II) complexes of the two macrocycles exhibit analogous behaviour, with a single quasi-reversible one-electron reduction process assigned to the Cu(II)/Cu(I) couple. UV-visible-near-IR spectroscopic and magnetic moment data for the nickel(II) complexes in solution indicated a tetragonally distorted coordination geometry for the metal centre. X-band EPR spectra of the copper(II) complexes are consistent with distorted square-pyramidal geometries. The crystal structure of [Cu([15]pyN(5))]2+ determined by X-ray diffraction showed the copper(II) centre coordinated to all five macrocyclic nitrogen donors in a distorted square-pyramidal environment.
Abstract:
Reaction of Cu(ClO4)2·6H2O with the 1:2 condensate of benzil dihydrazone and 2-acetylpyridine, in methanol in equimolar ratio, yields a green compound which upon recrystallisation from a 1:1 CH2Cl2-C6H6 mixture affords [CuL(H2O)](ClO4)2·1/2C6H6. The complex crystallises in the space group P-1 with a = 8.028(11) Å, b = 12.316(17) Å, c = 18.14(3) Å, α = 97.191(10)°, β = 94.657(10)° and γ = 108.039(10)°. It is single helical, with the metal having a distorted trigonal bipyramidal N4O coordination sphere. The acid dissociation constant of the Cu(I) complex in CH3CN is 3.34 +/- 0.19. The X-band EPR spectrum of the compound is rhombic, with g1 = 2.43, g2 = 2.10, g3 = 2.02 and A1 = 79.3 x 10^-4 cm^-1. The Cu(II/I) potential of the complex in CH2Cl2 at a glassy carbon electrode is 0.43 V vs SCE. It is argued that the copper-water bond persists in the corresponding copper(I) species. The implications of this for the single helix-double helix interconversion in copper helicates are discussed. DFT calculations at the B3LYP/6-311G** level show that the binding energy of water in the single helical five-coordinate copper(I) species [CuL(H2O)]+ is ~40 kJ mol^-1.
Abstract:
The synthesis of two new sodium perchlorate adducts (1:2 and 1:3) with copper(II) "ligand-complexes" is reported. One adduct is trinuclear, [(CuL1)2NaClO4] (1), and the other is tetranuclear, [(CuL2)3Na]ClO4·EtOH (2). The ligands are the tetradentate di-Schiff bases of 1,3-propanediamine and salicylaldehyde (H2L1) or 2-hydroxyacetophenone (H2L2). Both complexes have been characterized by single-crystal X-ray structure analysis. In both structures the sodium cation has a six-coordinate distorted octahedral environment, being bonded to four oxygen atoms from two Schiff-base complexes plus a chelated perchlorate anion in 1, and to six oxygen atoms from three Schiff-base complexes in 2. We have carried out a DFT theoretical study (RI-B97-D/def2-SVP level of theory) to compute and compare the formation energies of the 1:2 and 1:3 adducts. The DFT study reveals that the latter is more stabilized than the former. The X-ray crystal structure of 1 shows that the packing of the trinuclear unit is controlled by unconventional C-H···O hydrogen bonds and Cu2+-π non-covalent interactions. These interactions explain the formation of 1, which is a priori disfavoured with respect to 2.
Abstract:
Four new nickel(II) complexes, [Ni2L2(NO2)2]·CH2Cl2·C2H5OH·2H2O (1), [Ni2L2(DMF)2(μ-NO2)]ClO4·DMF (2a), [Ni2L2(DMF)2(μ-NO2)]ClO4 (2b) and [Ni3L′2(μ3-NO2)2(CH2Cl2)]n·1.5H2O (3), where HL = 2-[(3-aminopropylimino)methyl]phenol, H2L′ = 2-({3-[(2-hydroxybenzylidene)amino]propylimino}methyl)phenol and DMF = N,N-dimethylformamide, have been synthesized starting from the precursor complex [NiL2]·2H2O, nickel(II) perchlorate and sodium nitrite, and characterized structurally and magnetically. The structural analyses reveal that in all the complexes the NiII ions possess a distorted octahedral geometry. Complex 1 is a dinuclear di-μ2-phenoxo-bridged species in which the nitrite ion acts as a chelating co-ligand. Complexes 2a and 2b also consist of dinuclear entities, but in these two compounds a cis-(μ-nitrito-1κO:2κN) bridge is present in addition to the di-μ2-phenoxo bridge. The molecular structures of 2a and 2b are equivalent; they differ only in that 2a contains an additional solvated DMF molecule. Complex 3 is formed by ligand rearrangement and is a one-dimensional polymer in which double phenoxo- as well as μ-nitrito-1κO:2κN-bridged trinuclear units are linked through a very rare μ3-nitrito-1κO:2κN:3κO′ bridge. Analysis of variable-temperature magnetic susceptibility data indicates a global weak antiferromagnetic interaction between the nickel(II) ions in all four complexes, with exchange parameters J of -5.26, -11.45, -10.66 and -5.99 cm-1 for 1, 2a, 2b and 3, respectively.
Abstract:
Glacier fluctuations exclusively due to internal variations in the climate system are simulated using downscaled integrations of the ECHAM4/OPYC coupled general circulation model (GCM). A process-based modeling approach using a mass balance model of intermediate complexity and a dynamic ice flow model considering simple shearing flow and sliding are applied. Multimillennia records of glacier length fluctuations for Nigardsbreen (Norway) and Rhonegletscher (Switzerland) are simulated using autoregressive processes determined by statistically downscaled GCM experiments. Return periods and probabilities of specific glacier length changes using GCM integrations excluding external forcings such as solar irradiation changes, volcanic, or anthropogenic effects are analyzed and compared to historical glacier length records. Preindustrial fluctuations of the glaciers as far as observed or reconstructed, including their advance during the “Little Ice Age,” can be explained by internal variability in the climate system as represented by a GCM. However, fluctuations comparable to the present-day glacier retreat exceed any variation simulated by the GCM control experiments and must be caused by external forcing, with anthropogenic forcing being a likely candidate.
Abstract:
A necessary condition for a good probabilistic forecast is that the forecast system is shown to be reliable: forecast probabilities should equal observed probabilities verified over a large number of cases. As climate change trends are now emerging from the natural variability, we can apply this concept to climate predictions and compute the reliability of simulated local and regional temperature and precipitation trends (1950–2011) in a recent multi-model ensemble of climate model simulations prepared for the Intergovernmental Panel on Climate Change (IPCC) fifth assessment report (AR5). With only a single verification time, the verification is over the spatial dimension. The local temperature trends appear to be reliable. However, when the global mean climate response is factored out, the ensemble is overconfident: the observed trend is outside the range of modelled trends in many more regions than would be expected by the model estimate of natural variability and model spread. Precipitation trends are overconfident for all trend definitions. This implies that for near-term local climate forecasts the CMIP5 ensemble cannot simply be used as a reliable probabilistic forecast.
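The ensemble-range check underlying this kind of spatial verification can be sketched as follows (my own minimal illustration, not the authors' code; the region layout and trend values are invented):

```python
def overconfidence_fraction(model_trends, obs_trends):
    """Fraction of regions where the observed trend lies outside the full
    ensemble range. For a reliable M-member ensemble roughly 2/(M+1) of
    regions are expected to fall outside, so a much larger fraction
    signals overconfidence."""
    outside = 0
    for members, obs in zip(model_trends, obs_trends):
        if not min(members) <= obs <= max(members):
            outside += 1
    return outside / len(obs_trends)

# four hypothetical regions, three ensemble members each: two observed
# trends fall outside the modelled range
frac = overconfidence_fraction([[0.0, 1.0, 2.0]] * 4, [0.5, 1.5, 3.0, -1.0])
```

With only one verification time, the averaging is over regions rather than over forecast cases, exactly as the abstract describes.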
Abstract:
In this paper, we develop a method, termed the Interaction Distribution (ID) method, for analysis of quantitative ecological network data. In many cases, quantitative network data sets are under-sampled, i.e. many interactions are poorly sampled or remain unobserved. Hence, the output of statistical analyses may fail to differentiate between patterns that are statistical artefacts and those which are real characteristics of ecological networks. The ID method can support assessment and inference of under-sampled ecological network data. In the current paper, we illustrate and discuss the ID method based on the properties of plant-animal pollination data sets of flower visitation frequencies. However, the ID method may be applied to other types of ecological networks. The method can supplement existing network analyses based on two definitions of the underlying probabilities for each combination of pollinator and plant species: (1) p_{i,j}, the probability for a visit made by the i-th pollinator species to take place on the j-th plant species; (2) q_{i,j}, the probability for a visit received by the j-th plant species to be made by the i-th pollinator. The method applies the Dirichlet distribution to estimate these two probabilities, based on a given empirical data set. The estimated mean values for p_{i,j} and q_{i,j} reflect the relative differences between recorded numbers of visits for different pollinator and plant species, and the estimated uncertainty of p_{i,j} and q_{i,j} decreases with higher numbers of recorded visits.
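The Dirichlet estimation the method relies on can be sketched for one row of the visitation matrix (a minimal illustration of the standard Dirichlet posterior, assuming a symmetric prior with concentration `alpha`; the function name and toy counts are my own, not the authors' implementation):

```python
def dirichlet_posterior(counts, alpha=1.0):
    """Posterior mean and standard deviation of the visitation probabilities
    for one pollinator species across plant species, under a symmetric
    Dirichlet prior. counts[j] = recorded visits by pollinator i to plant j."""
    a = [c + alpha for c in counts]              # posterior concentration parameters
    a0 = sum(a)
    means = [aj / a0 for aj in a]
    # Dirichlet marginal variance: a_j (a0 - a_j) / (a0^2 (a0 + 1))
    sds = [(aj * (a0 - aj) / (a0 ** 2 * (a0 + 1))) ** 0.5 for aj in a]
    return means, sds

# toy counts: 30, 10 and 0 recorded visits to three plant species
means, sds = dirichlet_posterior([30, 10, 0])
```

The means reproduce the relative differences between recorded visit numbers, and scaling all counts up (more sampling) shrinks the standard deviations, matching the behaviour described in the abstract.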
Abstract:
Wine production is largely governed by atmospheric conditions, such as air temperature and precipitation, together with soil management and viticultural/enological practices. Therefore, anthropogenic climate change is likely to have important impacts on the winemaking sector worldwide. An important winemaking region is the Portuguese Douro Valley, known for its world-famous Port Wine. The identification of robust relationships between atmospheric factors and wine parameters is of great relevance for the region. A multivariate linear regression analysis of a long wine production series (1932–2010) reveals that high rainfall and cool temperatures during budburst, shoot and inflorescence development (February–March), and warm temperatures during flowering and berry development (May), are generally favourable to high production. The probabilities of occurrence of three production categories (low, normal and high) are also modelled, using multinomial logistic regression. Results show that both statistical models are valuable tools for predicting the production in a given year with a lead time of 3–4 months prior to harvest. These statistical models are applied to an ensemble of 16 regional climate model experiments following the SRES A1B scenario to estimate possible future changes. Wine production is projected to increase by about 10 % by the end of the 21st century, while the occurrence of high-production years is expected to increase from 25 % to over 60 %. Nevertheless, further model development will be needed to include other aspects that may shape production in the future. In particular, rising heat stress and/or changes in ripening conditions could limit the projected production increase in future decades.
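A multinomial logistic model of this kind assigns each year a probability for the low/normal/high categories via a softmax over linear scores. A generic sketch (all coefficients and predictor values below are purely illustrative assumptions, not the paper's fitted model):

```python
import math

def category_probabilities(x, betas):
    """Multinomial-logit (softmax) probabilities over production categories.
    x = predictor values; betas = one coefficient vector per category,
    laid out as [intercept, slope_1, ...]."""
    scores = [sum(b * xi for b, xi in zip(beta, [1.0] + x)) for beta in betas]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# hypothetical single predictor (e.g. a standardized February-March rainfall)
# and made-up coefficients for the low/normal/high categories
probs = category_probabilities([2.0], [[0.0, 0.0], [0.5, 1.0], [1.0, -1.0]])
```

The probabilities sum to one by construction, so thresholding or taking the most probable category gives the kind of 3–4 month-ahead categorical forecast the abstract describes.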
Abstract:
Backtracks aimed to investigate critical relationships between audio-visual technologies and live performance, emphasising technologies producing sound, contrasted with non-amplified bodily sound. Drawing on methodologies for studying avant-garde theatre, live performance and the performing body, it was informed by work in critical and cultural theory by, for example, Steven Connor and Jonathan Rée, on the body's experience and interpretation of sound. The performance explored how shifting national boundaries, mobile workforces, complex family relationships, cultural pluralities and possibilities for bodily transformation have compelled a re-evaluation of what it means to feel 'at home' in modernity. Using montages of live and mediated images, disrupted narratives and sound, it evoked the destabilised identities which characterise contemporary lived experience, and enacted the displacement of certainties provided by family and nation, community and locality, body and selfhood. Homer's Odyssey framed the performance: elements could be traced in the mise-en-scène; in the physical presence of Athene, the narrator and Penelope weaving mementoes from the past into her loom; and in voice-overs from Homer's work. The performance drew on personal experiences and improvisations, structured around notions of journey. It presented incomplete narratives, memories, repressed anxieties and dreams through different combinations of sounds, music, mediated images, movement, voice and bodily sound. The theme of travel was intensified by performers carrying suitcases and umbrellas, by soundtracks incorporating travel effects, and by the distorted video images of forms of transport playing across 'screens' which proliferated across the space (sails, umbrellas, the loom, actors' bodies). The performance experimented with giving sound and silence performative dimensions, including presenting sound in visual and imagistic ways, for example by using signs from deaf sign language.
Through-composed soundtracks of live and recorded song, music, voice-over, and noise exploited the viscerality of sound and disrupted cognitive interpretation by phenomenological, somatic experience, thereby displacing the impulse for closure/destination/home.
Abstract:
The nonlinearity of high-power amplifiers (HPAs) has a crucial effect on the performance of multiple-input multiple-output (MIMO) systems. In this paper, we investigate the performance of MIMO orthogonal space-time block coding (OSTBC) systems in the presence of nonlinear HPAs. Specifically, we propose a constellation-based compensation method for HPA nonlinearity for the case where the HPA parameters are known at the transmitter and receiver, in which the constellation and decision regions of the distorted transmitted signal are derived in advance. Furthermore, for the scenario without knowledge of the HPA parameters, a sequential Monte Carlo (SMC)-based compensation method for the HPA nonlinearity is proposed, which first estimates the channel-gain matrix by means of the SMC method and then uses the SMC-based algorithm to detect the desired signal. The performance of the MIMO-OSTBC system under study is evaluated in terms of average symbol error probability (SEP), total degradation (TD) and system capacity, in uncorrelated Nakagami-m fading channels. Numerical and simulation results are provided and show the effects on performance of several system parameters, such as the parameters of the HPA model, the output back-off (OBO) of the nonlinear HPA, the numbers of transmit and receive antennas, the modulation order of the quadrature amplitude modulation (QAM), and the number of SMC samples. In particular, it is shown that the constellation-based compensation method can efficiently mitigate the effect of HPA nonlinearity with low complexity, and that the SMC-based detection scheme efficiently compensates for HPA nonlinearity when the HPA parameters are unknown.
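The constellation-based idea, precomputing the distorted constellation and detecting against it, can be sketched with the widely used Saleh HPA model (the model choice, its textbook parameter values, and the QPSK back-off level are my assumptions for illustration, not the paper's fitted setup):

```python
import cmath

def saleh(x, aa=2.1587, ba=1.1517, ap=4.0033, bp=9.1040):
    """Memoryless Saleh AM/AM-AM/PM model of a nonlinear HPA; parameter
    values are the commonly quoted textbook fit."""
    r, phi = abs(x), cmath.phase(x)
    gain = aa * r / (1 + ba * r * r)          # AM/AM compression
    shift = ap * r * r / (1 + bp * r * r)     # AM/PM phase rotation
    return gain * cmath.exp(1j * (phi + shift))

# QPSK points driven into the amplifier's nonlinear region
qpsk = [0.5 * cmath.exp(1j * (cmath.pi / 4 + k * cmath.pi / 2)) for k in range(4)]
distorted = [saleh(s) for s in qpsk]          # precomputed distorted constellation

def detect(y):
    # minimum-distance detection against the *distorted* constellation,
    # rather than the ideal one
    return min(range(len(distorted)), key=lambda k: abs(y - distorted[k]))
```

Because the receiver's decision regions are built from the distorted points, the deterministic part of the HPA distortion costs nothing at detection time; only the noise around each distorted point matters.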
Abstract:
A procedure is described in which patients are randomized between two experimental treatments and a control. At a series of interim analyses, each experimental treatment is compared with control. One of the experimental treatments might then be found sufficiently superior to the control for it to be declared the best treatment, and the trial stopped. Alternatively, experimental treatments might be eliminated from further consideration at any stage. It is shown how the procedure can be conducted while controlling overall error probabilities. Data concerning evaluation of different doses of riluzole in the treatment of motor neurone disease are used for illustration.
Abstract:
Numerical Weather Prediction (NWP) fields are used to assist the detection of cloud in satellite imagery. Simulated observations based on NWP are used within a framework based on Bayes' theorem to calculate a physically-based probability of each pixel within an imaged scene being clear or cloudy. Different thresholds can be set on the probabilities to create application-specific cloud masks. Here, this is done over both land and ocean using night-time (infrared) imagery. We use a validation dataset of difficult cloud detection targets for the Spinning Enhanced Visible and Infrared Imager (SEVIRI), achieving true skill scores of 87% and 48% for ocean and land, respectively, using the Bayesian technique, compared to 74% and 39%, respectively, for the threshold-based techniques associated with the validation dataset.
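The Bayesian calculation can be illustrated for a single infrared channel (a toy sketch only: the Gaussian likelihoods, their parameters, and the prior are invented for illustration and are not the scheme's actual values):

```python
import math

def clear_probability(obs_bt, sim_bt, sigma_clear=1.0, sigma_cloud=10.0,
                      cloud_offset=-20.0, prior_clear=0.7):
    """P(clear | y) via Bayes' theorem for one pixel, where
    y = observed minus NWP-simulated clear-sky brightness temperature (K).
    Clear pixels should match the simulation (narrow Gaussian at 0);
    cloud tops are typically colder (broad Gaussian offset below 0)."""
    y = obs_bt - sim_bt

    def gauss(v, mu, s):
        return math.exp(-0.5 * ((v - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))

    num = gauss(y, 0.0, sigma_clear) * prior_clear
    den = num + gauss(y, cloud_offset, sigma_cloud) * (1 - prior_clear)
    return num / den

# thresholding the probability yields an application-specific cloud mask
is_cloudy = clear_probability(260.0, 285.0) < 0.5
```

Different applications simply pick different thresholds on the same probability field, which is the flexibility the abstract highlights.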
Abstract:
Numerical Weather Prediction (NWP) fields are used to assist the detection of cloud in satellite imagery. Simulated observations based on NWP are used within a framework based on Bayes' theorem to calculate a physically-based probability of each pixel within an imaged scene being clear or cloudy. Different thresholds can be set on the probabilities to create application-specific cloud masks. Here, the technique is shown to be suitable for daytime applications over land and sea, using visible and near-infrared imagery in addition to thermal infrared. We use a validation dataset of difficult cloud detection targets for the Spinning Enhanced Visible and Infrared Imager (SEVIRI), achieving true skill scores of 89% and 73% for ocean and land, respectively, using the Bayesian technique, compared to 90% and 70%, respectively, for the threshold-based techniques associated with the validation dataset.
Abstract:
Neutron diffraction at 11.4 and 295 K and solid-state 67Zn NMR are used to determine both the local and average structures in the disordered, negative thermal expansion (NTE) material, Zn(CN)2. Solid-state NMR not only confirms that there is head-to-tail disorder of the C≡N groups present in the solid, but yields information about the relative abundances of the different Zn(CN)4-n(NC)n tetrahedral species, which do not follow a simple binomial distribution. The Zn(CN)4 and Zn(NC)4 species occur with much lower probabilities than are predicted by binomial theory, supporting the conclusion that they are of higher energy than the other local arrangements. The lowest energy arrangement is Zn(CN)2(NC)2. The use of total neutron diffraction at 11.4 K, with analysis of both the Bragg diffraction and the derived total correlation function, yields the first experimental determination of the individual Zn−N and Zn−C bond lengths as 1.969(2) and 2.030(2) Å, respectively. The very small difference in bond lengths, of ~0.06 Å, means that it is impossible to obtain these bond lengths using Bragg diffraction in isolation. Total neutron diffraction also provides information on both the average and local atomic displacements responsible for NTE in Zn(CN)2. The principal motions giving rise to NTE are shown to be those in which the carbon and nitrogen atoms within individual Zn−C≡N−Zn linkages are displaced to the same side of the Zn···Zn axis. Displacements of the carbon and nitrogen atoms to opposite sides of the Zn···Zn axis, suggested previously in X-ray studies as being responsible for NTE behavior, in fact make negligible contribution at temperatures up to 295 K.
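The binomial reference distribution against which the measured abundances are compared is easy to reproduce: under random head-to-tail disorder each of the four cyanides points either way with probability 1/2, so the Zn(CN)4-n(NC)n species would follow a binomial law (a quick check of the statistical baseline, not the paper's NMR analysis):

```python
from math import comb

# Binomial expectation for Zn(CN)_{4-n}(NC)_n species, n = 0..4, if each
# C≡N orientation were independent with p = 1/2. The abstract reports that
# the observed abundances deviate from this: the all-C-bound and all-N-bound
# species (n = 0 and n = 4) are rarer than this baseline predicts.
p = 0.5
probs = {n: comb(4, n) * p**n * (1 - p)**(4 - n) for n in range(5)}
# n=0: 1/16, n=1: 4/16, n=2: 6/16, n=3: 4/16, n=4: 1/16
```

Even in the purely statistical baseline the mixed Zn(CN)2(NC)2 arrangement is the most common; the NMR result says it is even more favoured than 6/16.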
Abstract:
Useful probabilistic climate forecasts on decadal timescales should be reliable (i.e. forecast probabilities match the observed relative frequencies), but this is seldom examined. This paper assesses a necessary condition for reliability, namely that the ratio of ensemble spread to forecast error be close to one, for seasonal to decadal sea surface temperature retrospective forecasts from the Met Office Decadal Prediction System (DePreSys). Factors which may affect reliability are diagnosed by comparing this spread-error ratio for an initial condition ensemble and two perturbed physics ensembles, for initialized and uninitialized predictions. At lead times of less than 2 years, the initialized ensembles tend to be under-dispersed, and hence produce overconfident and therefore unreliable forecasts. For longer lead times, all three ensembles are predominantly over-dispersed. Such over-dispersion is primarily related to excessive inter-annual variability in the climate model. These findings highlight the need to carefully evaluate simulated variability in seasonal and decadal prediction systems.
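The spread-error diagnostic can be sketched as follows (one common definition, root-mean ensemble variance over RMS error of the ensemble mean; the toy forecast numbers are invented, and this is not DePreSys code):

```python
def spread_error_ratio(ensembles, observations):
    """Spread-error ratio: sqrt(mean ensemble variance) divided by the RMS
    error of the ensemble mean, averaged over forecast cases.
    ~1: well dispersed; <1: under-dispersed (overconfident);
    >1: over-dispersed."""
    variances, sq_errors = [], []
    for members, obs in zip(ensembles, observations):
        m = sum(members) / len(members)
        variances.append(sum((x - m) ** 2 for x in members) / (len(members) - 1))
        sq_errors.append((m - obs) ** 2)
    spread = (sum(variances) / len(variances)) ** 0.5
    rmse = (sum(sq_errors) / len(sq_errors)) ** 0.5
    return spread / rmse
```

A ratio near one is only a necessary condition for reliability, as the abstract notes; a well-dispersed ensemble can still be biased or miscalibrated in other ways.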