130 results for 3-DIMENSIONAL FRAMEWORK
Abstract:
We perturb the SC, BCC, and FCC crystal structures with a spatial Gaussian noise whose dimensionless strength is controlled by the parameter a, and analyze the topological and metrical properties of the resulting Voronoi tessellations (VT). The topological properties of the VT of the SC and FCC crystals are unstable with respect to the introduction of noise, because the corresponding polyhedra are geometrically degenerate, whereas the tessellation of the BCC crystal is topologically stable even against noise of small but finite intensity. For weak noise, the mean area of the cells of the perturbed BCC and FCC VT increases quadratically with a. In the case of the perturbed SC crystals, there is an optimal amount of noise that minimizes the mean area of the cells. Already for moderate noise (a > 0.5), the properties of the three perturbed VT are indistinguishable, and for intense noise (a > 2), the results converge to the Poisson-VT limit. Notably, 2-parameter gamma distributions are an excellent model for the empirical distributions of all considered properties. The VT of the perturbed BCC and FCC structures are local maxima of the isoperimetric quotient, which measures the degree of sphericity of the cells, among space-filling VT. In the BCC case, this suggests a weaker form of the recently disproved Kelvin conjecture. Due to the fluctuations of the shape of the cells, an anomalous scaling with exponent > 3/2 is observed between the areas and the volumes of the cells, and, except for the FCC case, this holds also for a -> 0. In the Poisson-VT limit, the exponent is about 1.67. As the number of faces is positively correlated with the sphericity of the cells, the anomalous scaling is greatly reduced when we perform power-law fits separately on cells with a specific number of faces.
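A minimal sketch of how such an experiment can be set up numerically (not the authors' code): a BCC point set is perturbed by isotropic Gaussian displacements with standard deviation proportional to the parameter a (here taken in units of the lattice constant, an assumption), and the bounded Voronoi cells are measured with SciPy.

    import numpy as np
    from scipy.spatial import ConvexHull, Voronoi

    def bcc_points(n, lattice_const=1.0):
        """n x n x n BCC lattice: cube-corner sites plus body-centre sites."""
        grid = np.array([(i, j, k) for i in range(n)
                         for j in range(n) for k in range(n)], dtype=float)
        return np.vstack([grid, grid + 0.5]) * lattice_const

    def perturbed_cell_volumes(points, a, lattice_const=1.0, seed=0):
        """Displace each site by isotropic Gaussian noise of standard deviation
        a * lattice_const and return the volumes of the bounded Voronoi cells."""
        rng = np.random.default_rng(seed)
        pts = points + rng.normal(scale=a * lattice_const, size=points.shape)
        vor = Voronoi(pts)
        vols = []
        for region_idx in vor.point_region:
            region = vor.regions[region_idx]
            if -1 in region or len(region) == 0:     # discard unbounded boundary cells
                continue
            vols.append(ConvexHull(vor.vertices[region]).volume)
        return np.array(vols)

    vols = perturbed_cell_volumes(bcc_points(8), a=0.5)
    print(vols.mean(), vols.std())

A production calculation would use periodic boundary conditions rather than simply discarding the unbounded boundary cells, as done in this sketch.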
Abstract:
The understanding of the statistical properties and of the dynamics of multistable systems is gaining more and more importance in a vast variety of scientific fields. This is especially relevant for the investigation of the tipping points of complex systems. Sometimes, in order to understand the time series of given observables exhibiting bimodal distributions, simple one-dimensional Langevin models are fitted to reproduce the observed statistical properties and used to investigate the projected dynamics of the observable. This is of great relevance for studying potential catastrophic changes in the properties of the underlying system or resonant behaviours such as those related to stochastic resonance-like mechanisms. In this paper, we propose a framework for encasing this kind of study, using simple box models of the oceanic circulation and choosing as observable the strength of the thermohaline circulation. We study the statistical properties of the transitions between the two modes of operation of the thermohaline circulation under symmetric boundary forcings and test their agreement with simplified one-dimensional phenomenological theories. We extend our analysis to include stochastic resonance-like amplification processes. We conclude that fitted one-dimensional Langevin models, when closely scrutinised, may turn out to be more ad hoc than they seem, lacking robustness and/or well-posedness. They should be treated with care, more as an empirical descriptive tool than as a methodology with predictive power.
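A minimal sketch of the kind of effective one-dimensional Langevin model referred to above: Euler-Maruyama integration of dx = -V'(x) dt + sigma dW with an illustrative double-well potential (not the box model of the paper), producing a bimodal time series with noise-induced transitions between two modes.

    import numpy as np

    def simulate_langevin(v_prime, sigma, x0=0.0, dt=1e-3, n_steps=200_000, seed=0):
        """Euler-Maruyama integration of the Langevin equation dx = -V'(x) dt + sigma dW."""
        rng = np.random.default_rng(seed)
        x = np.empty(n_steps)
        x[0] = x0
        sqrt_dt = np.sqrt(dt)
        for i in range(1, n_steps):
            x[i] = x[i - 1] - v_prime(x[i - 1]) * dt + sigma * sqrt_dt * rng.standard_normal()
        return x

    # Illustrative double-well potential V(x) = x^4/4 - x^2/2 (wells at x = +/-1);
    # the resulting series is bimodal, with noise-induced transitions between the wells.
    series = simulate_langevin(lambda x: x**3 - x, sigma=0.4)
    pdf, edges = np.histogram(series, bins=100, density=True)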
Abstract:
Decadal predictions have a high profile in the climate science community and beyond, yet very little is known about their skill, nor is there any agreed protocol for estimating it. This paper proposes a sound and coordinated framework for verification of decadal hindcast experiments. The framework is illustrated for decadal hindcasts tailored to meet the requirements and specifications of CMIP5 (Coupled Model Intercomparison Project phase 5). The chosen metrics address key questions about the information content in initialized decadal hindcasts. These questions are: (1) Do the initial conditions in the hindcasts lead to more accurate predictions of the climate, compared to uninitialized climate change projections? and (2) Is the prediction model's ensemble spread an appropriate representation of forecast uncertainty on average? The first question is addressed through deterministic metrics that compare the initialized and uninitialized hindcasts. The second question is addressed through a probabilistic metric applied to the initialized hindcasts and comparing different ways to ascribe forecast uncertainty. Verification is advocated at smoothed regional scales that can illuminate broad areas of predictability, as well as at the grid scale, since many users of the decadal prediction experiments who feed the climate data into applications or decision models will use the data at grid scale, or downscale it to even higher resolution. An overall statement on skill of CMIP5 decadal hindcasts is not the aim of this paper. The results presented are only illustrative of the framework, which would enable such studies. However, broad conclusions that are beginning to emerge from the CMIP5 results include: (1) most predictability at the interannual-to-decadal scale, relative to climatological averages, comes from external forcing, particularly for temperature; (2) though moderate, additional skill is added by the initial conditions over what is imparted by external forcing alone; however, the impact of initialization may result in overall worse predictions in some regions than provided by uninitialized climate change projections; (3) limited hindcast records and the dearth of climate-quality observational data impede our ability to quantify expected skill as well as model biases; and (4) as is common to seasonal-to-interannual model predictions, the spread of the ensemble members is not necessarily a good representation of forecast uncertainty. The authors recommend that this framework be adopted to serve as a starting point to compare prediction quality across prediction systems. The framework can provide a baseline against which future improvements can be quantified. The framework also provides guidance on the use of these model predictions, which differ in fundamental ways from the climate change projections that much of the community has become familiar with, including adjustment of mean and conditional biases, and consideration of how to best approach forecast uncertainty.
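As one illustration of the deterministic comparison in question (1), a minimal sketch of a mean squared error skill score of the initialized hindcasts measured against the uninitialized projections; the array names and shapes are assumptions for illustration, not part of the protocol described in the paper.

    import numpy as np

    def msss(forecast, reference, obs):
        """Mean squared (error) skill score: 1 - MSE(forecast, obs) / MSE(reference, obs).
        Values above zero mean the forecast improves on the reference."""
        mse_f = np.mean((forecast - obs) ** 2, axis=0)
        mse_r = np.mean((reference - obs) ** 2, axis=0)
        return 1.0 - mse_f / mse_r

    # Hypothetical arrays of shape (n_start_dates, n_lat, n_lon): ensemble-mean
    # initialized hindcasts, uninitialized projections and verifying observations.
    # skill_map = msss(initialized, uninitialized, observations)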
Abstract:
The task of this paper is to develop a Time-Domain Probe Method for the reconstruction of impenetrable scatterers. The basic idea of the method is to use pulses in the time domain and the time-dependent response of the scatterer to reconstruct its location and shape. The method is based on the basic causality principle of time-dependent scattering. The method is independent of the boundary condition and is applicable to limited-aperture scattering data. In particular, we discuss the reconstruction of the shape of a rough surface in three dimensions from time-domain measurements of the scattered field. In practice, measurement data is collected where the incident field is given by a pulse. We formulate the time-domain field reconstruction problem equivalently via frequency-domain integral equations or via a retarded boundary integral equation based on results of Bamberger, Ha-Duong and Lubich. In contrast to pure frequency-domain methods, here we use a time-domain characterization of the unknown shape for its reconstruction. Our paper will describe the Time-Domain Probe Method and relate it to previous frequency-domain approaches on sampling and probe methods by Colton, Kirsch, Ikehata, Potthast, Luke, Sylvester et al. The approach significantly extends recent work of Chandler-Wilde and Lines (2005) and Luke and Potthast (2006) on the time-domain point source method. We provide a complete convergence analysis for the method in the rough surface scattering case and provide numerical simulations and examples.
Abstract:
Data assimilation algorithms are a crucial part of operational systems in numerical weather prediction, hydrology and climate science, but are also important for dynamical reconstruction in medical applications and quality control for manufacturing processes. Usually, a variety of diverse measurement data are employed to determine the state of the atmosphere or of a wider system including land and oceans. Modern data assimilation systems use more and more remote sensing data, in particular radiances measured by satellites, radar data and integrated water vapor measurements via GPS/GNSS signals. The inversion of some of these measurements is ill-posed in the classical sense, i.e. the inverse of the operator H which maps the state onto the data is unbounded. In this case, the use of such data can lead to significant instabilities of data assimilation algorithms. The goal of this work is to provide a rigorous mathematical analysis of the instability of well-known data assimilation methods. Here, we restrict our attention to particular linear systems in which the instability can be explicitly analyzed. We investigate three-dimensional variational assimilation and four-dimensional variational assimilation. A theory for the instability is developed using the classical theory of ill-posed problems in a Banach space framework. Further, we demonstrate by numerical examples that instabilities can and will occur, including an example from dynamic magnetic tomography.
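A minimal sketch of the linear three-dimensional variational analysis step and of the kind of noise amplification at issue; the toy observation operator and error covariances are illustrative assumptions, not the paper's examples.

    import numpy as np

    def var3d_analysis(x_b, y, H, B, R):
        """Linear 3D-Var / optimal interpolation analysis:
        x_a = x_b + B H^T (H B H^T + R)^{-1} (y - H x_b)."""
        S = H @ B @ H.T + R
        return x_b + B @ H.T @ np.linalg.solve(S, y - H @ x_b)

    # Toy ill-conditioned observation operator: the second state component is
    # observed with a tiny weight s, so the effective gain in that direction is
    # of order 1/s and small observation errors are strongly amplified.
    s = 1e-3
    H = np.diag([1.0, s])
    B = np.eye(2)
    R = (1e-3) ** 2 * np.eye(2)          # observation-error covariance
    x_true = np.array([1.0, 1.0])
    y = H @ x_true + 1e-3 * np.random.default_rng(0).standard_normal(2)
    x_a = var3d_analysis(np.zeros(2), y, H, B, R)   # second component ends up far from 1.0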
Abstract:
Multi-gas approaches to climate change policies require a metric establishing ‘equivalences’ among emissions of various species. Climate scientists and economists have proposed four kinds of such metrics and debated their relative merits. We present a unifying framework that clarifies the relationships among them. We show, as have previous authors, that the global warming potential (GWP), used in international law to compare emissions of greenhouse gases, is a special case of the global damage potential (GDP), assuming (1) a finite time horizon, (2) a zero discount rate, (3) constant atmospheric concentrations, and (4) impacts that are proportional to radiative forcing. Both the GWP and GDP follow naturally from a cost–benefit framing of the climate change issue. We show that the global temperature change potential (GTP) is a special case of the global cost potential (GCP), assuming a (slight) fall in the global temperature after the target is reached. We show how the four metrics should be generalized if there are intertemporal spillovers in abatement costs, distinguishing between private (e.g., capital stock turnover) and public (e.g., induced technological change) spillovers. Both the GTP and GCP follow naturally from a cost-effectiveness framing of the climate change issue. We also argue that if (1) damages are zero below a threshold and (2) infinitely large above a threshold, then cost-effectiveness analysis and cost–benefit analysis lead to identical results. Therefore, the GCP is a special case of the GDP. The UN Framework Convention on Climate Change uses the GWP, a simplified cost–benefit concept. The UNFCCC is framed around the ultimate goal of stabilizing greenhouse gas concentrations. Once a stabilization target has been agreed under the convention, implementation is clearly a cost-effectiveness problem. It would therefore be more consistent to use the GCP or its simplification, the GTP.
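For reference, the GWP over a horizon H is the time-integrated radiative forcing of a pulse emission of the gas divided by that of CO2. A minimal numerical sketch of this definition; the impulse-response form and any parameter values are placeholders to be supplied by the user, not values from the paper.

    import numpy as np

    def agwp(radiative_efficiency, irf, horizon_years, dt=0.1):
        """Absolute GWP: integral over the horizon of the radiative efficiency times the
        fraction of a pulse emission remaining at time t (impulse-response function irf)."""
        t = np.arange(0.0, horizon_years, dt)
        return np.trapz(radiative_efficiency * irf(t), t)

    def gwp(re_gas, irf_gas, re_co2, irf_co2, horizon_years=100):
        """GWP over a horizon H: AGWP of the gas divided by the AGWP of CO2."""
        return agwp(re_gas, irf_gas, horizon_years) / agwp(re_co2, irf_co2, horizon_years)

    # Placeholder impulse response: single-exponential decay with lifetime tau (years).
    # A realistic CO2 response requires a multi-exponential carbon-cycle model.
    exp_decay = lambda tau: (lambda t: np.exp(-t / tau))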
Abstract:
Sampling strategies for monitoring the status and trends in wildlife populations are often determined before the first survey is undertaken. However, there may be little information about the distribution of the population, and so the sample design may be inefficient. Through time, as data are collected, more information about the distribution of animals in the survey region is obtained, but it can be difficult to incorporate this information into the survey design. This paper introduces a framework for monitoring motile wildlife populations within which the design of future surveys can be adapted using data from past surveys, whilst ensuring consistency in design-based estimates of status and trends through time. In each survey, part of the sample is selected from the previous survey sample using simple random sampling. The rest is selected with inclusion probability proportional to predicted abundance. Abundance is predicted using a model constructed from previous survey data and covariates for the whole survey region. Unbiased design-based estimators of status and trends and their variances are derived from two-phase sampling theory. Simulations over the short and long term indicate that, in general, more precise estimates of status and trends are obtained using this mixed strategy than a strategy in which all of the sample is retained or all is selected with probability proportional to predicted abundance. Furthermore, the mixed strategy is robust to poor predictions of abundance. Estimates of status are more precise than those obtained from a rotating panel design.
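A minimal sketch of the mixed selection step described above, with hypothetical unit labels and model predictions; a simple weighted draw without replacement stands in for a formal unequal-probability design.

    import numpy as np

    def mixed_sample(prev_sample, all_units, predicted_abundance, n_retain, n_new, rng):
        """Select n_retain units by simple random sampling from the previous sample and
        n_new further units with probability proportional to predicted abundance
        (a simple weighted draw standing in for a formal unequal-probability design)."""
        retained = rng.choice(prev_sample, size=n_retain, replace=False)
        remaining = np.setdiff1d(all_units, retained)
        p = predicted_abundance[remaining]
        new = rng.choice(remaining, size=n_new, replace=False, p=p / p.sum())
        return np.concatenate([retained, new])

    rng = np.random.default_rng(1)
    units = np.arange(200)                     # hypothetical survey units
    predicted = rng.gamma(2.0, 5.0, size=200)  # hypothetical model-predicted abundance
    previous = rng.choice(units, size=40, replace=False)
    sample = mixed_sample(previous, units, predicted, n_retain=20, n_new=20, rng=rng)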
Abstract:
To retain competitiveness, succeed and flourish, organizations are forced to continuously innovate. This drive for innovation is not solely limited to product/process innovation but more profoundly relates to a continuous process of improving how organizations work internally, requiring a constant stream of ideas and suggestions from motivated employees. In this chapter we investigate some recent developments and propose a conceptual framework for creative participation as a personality-driven interface between creativity and innovation. Under the assumption that employees’ intrinsic willingness to contribute novel ideas and solutions requires a set of personal characteristics and necessary skills that might well be unique to each organizational unit, the chapter then explores personal characteristics associated with creativity, innovation and innovative behavior. Various studies on the correlation between creativity and personality types are also reviewed. The chapter provides a discussion of solutions and future developments, together with recommendations for future research.
Abstract:
This article considers the evolution and impact on schools in England of the "Framework for English" since its introduction in 2001, a national initiative that follows on from the National Literacy Strategy, which focused on primary schools. Whilst acknowledging that the Framework is part of a whole-school policy, "The Key Stage Three Strategy", I concentrate on its direct impact on the school subject "English" and on standards within that subject. Such a discussion must incorporate some consideration of the rise of "Literacy" as a dominant term and theme in England (and globally) and its challenge to a politically controversial and much contested curriculum area, i.e. "English". If the Framework is considered within the context of the Literacy drive since the mid-1990s, then it can be seen to be evolving within a much changed policy context and is therefore likely to change substantially in the next few years. In a global context, England has been regarded for some time as being at the extreme edge of standards-driven policy and practice. It is hoped that the story of "English" in England may be salutary to educators from other countries.
Abstract:
The proteome of Salmonella enterica serovar Typhimurium was characterized by 2-dimensional HPLC mass spectrometry to provide a platform for subsequent proteomic investigations of low-level multiple antibiotic resistance (MAR). Bacteria (2.15 ± 0.23 × 10^10 cfu; mean ± s.d.) were harvested from liquid culture and proteins differentially fractionated, on the basis of solubility, into preparations representative of the cytosol, cell envelope and outer membrane proteins (OMPs). These preparations were digested by treatment with trypsin and peptides separated into fractions (n = 20) by strong cation exchange chromatography (SCX). Tryptic peptides in each SCX fraction were further separated by reversed-phase chromatography and detected by mass spectrometry. Peptides were assigned to proteins and consensus rank listings compiled using SEQUEST. A total of 816 ± 11 individual proteins were identified, which included 371 ± 33, 565 ± 15 and 262 ± 5 from the cytosolic, cell envelope and OMP preparations, respectively. A significant correlation was observed (r^2 = 0.62 ± 0.10; P < 0.0001) between consensus rank position for duplicate cell preparations, and an average of 74 ± 5% of proteins were common to both replicates. A total of 34 outer membrane proteins were detected, 20 of these from the OMP preparation. A range of proteins (n = 20) previously associated with the mar locus in E. coli were also found, including the key MAR effectors AcrA, TolC and OmpF.
Abstract:
In a world where massive amounts of data are recorded on a large scale, we need data mining technologies to gain knowledge from the data in a reasonable time. The Top Down Induction of Decision Trees (TDIDT) algorithm is a very widely used technology to predict the classification of newly recorded data. However, alternative technologies have been derived that often produce better rules but do not scale well on large datasets. Such an alternative to TDIDT is the PrismTCS algorithm. PrismTCS performs particularly well on noisy data but does not scale well on large datasets. In this paper we introduce Prism and investigate its scaling behaviour. We describe how we improved the scalability of the serial version of Prism and investigate its limitations. We then describe our work to overcome these limitations by developing a framework to parallelise algorithms of the Prism family and similar algorithms. We also present the scale-up results of a first prototype implementation.
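For orientation, a minimal sketch of the basic Prism-style separate-and-conquer rule induction idea that this family of algorithms builds on (greedily adding the attribute-value term with the highest precision for the target class); this illustrates the general approach only, not the paper's PrismTCS variant or its parallel framework.

    def prism_rules(instances, target_class):
        """Minimal sketch of Prism-style rule induction for one target class.
        Each instance is (attribute_dict, class_label); a rule maps attribute -> value."""
        remaining = list(instances)
        rules = []
        while any(label == target_class for _, label in remaining):
            covered, rule = remaining, {}
            # Specialise the rule until it covers only the target class
            # (or no unused attribute-value terms remain).
            while any(label != target_class for _, label in covered):
                candidates = {(a, v) for attrs, _ in covered
                              for a, v in attrs.items() if a not in rule}
                if not candidates:
                    break
                def precision(term):
                    a, v = term
                    sub = [c for attrs, c in covered if attrs.get(a) == v]
                    return sub.count(target_class) / len(sub) if sub else 0.0
                a, v = max(candidates, key=precision)
                rule[a] = v
                covered = [(attrs, c) for attrs, c in covered if attrs.get(a) == v]
            rules.append(rule)
            # Separate: remove the instances covered by the finished rule.
            remaining = [(attrs, c) for attrs, c in remaining
                         if not all(attrs.get(a) == v for a, v in rule.items())]
        return rules

    # Hypothetical toy data: induce rules for class "yes" from two categorical attributes.
    data = [({"outlook": "sunny", "windy": "no"}, "yes"),
            ({"outlook": "sunny", "windy": "yes"}, "no"),
            ({"outlook": "rain", "windy": "no"}, "no")]
    print(prism_rules(data, "yes"))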
Abstract:
Undeniably, anticipation plays a crucial role in cognition. By what means, to what extent, and what it achieves remain open questions. In a recent BBS target article, Clark (in press) depicts an integrative model of the brain that builds on hierarchical Bayesian models of neural processing (Rao and Ballard, 1999; Friston, 2005; Brown et al., 2011), and their most recent formulation using the free-energy principle borrowed from thermodynamics (Feldman and Friston, 2010; Friston, 2010; Friston et al., 2010). Hierarchical generative models of cognition, such as those described by Clark, presuppose the manipulation of representations and internal models of the world, in as much detail as is perceptually available. Perhaps surprisingly, Clark acknowledges the existence of a “virtual version of the sensory data” (p. 4), but with no reference to some of the historical debates that shaped cognitive science, related to the storage, manipulation, and retrieval of representations in a cognitive system (Shanahan, 1997), or accounting for the emergence of intentionality within such a system (Searle, 1980; Preston and Bishop, 2002). Instead of demonstrating how this Bayesian framework responds to these foundational questions, Clark describes the structure and the functional properties of an action-oriented, multi-level system that is meant to combine perception, learning, and experience (Niedenthal, 2007).
Abstract:
Three new zinc(II)-hexamethylenetetramine (hmt) complexes [Zn2(4-nbz)4(μ2-hmt)(OH2)(hmt)] (1), [Zn2(2-nbz)4(μ2-hmt)2]n (2) and [Zn3(3-nbz)4(μ2-hmt)(μ2-OH)(μ3-OH)]n (3) with three isomeric nitrobenzoates [4-nbz = 4-nitrobenzoate, 2-nbz = 2-nitrobenzoate and 3-nbz = 3-nitrobenzoate] have been synthesized and structurally characterized by X-ray crystallography. Their identities have also been established by elemental analysis and IR, NMR, UV-Vis and mass spectral studies. 1 is a dinuclear complex formed by hmt bridging in the μ2 coordination mode. The geometry around the Zn centers in 1 is distorted tetrahedral. Paddle-wheel centrosymmetric Zn2(2-nbz)4 units of complex 2 are interconnected by μ2-hmt, forming a one-dimensional chain with square-pyramidal geometries around the Zn centers. Compound 3 contains a μ2/μ3-hydroxido- and μ2-hmt-bridged 1D chain. In this complex, varied geometries around the Zn centers are observed, viz. tetrahedral, square pyramidal and trigonal bipyramidal. Various weak forces, i.e. lone pair···π, π···π and C-H···π interactions, play a key role in stabilizing the observed structures of complexes 1, 2 and 3. This series of complexes demonstrates that although the nitro group does not coordinate to the metal center, its presence at the 2-, 3- or 4-position of the phenyl ring has a striking effect on the dimensionality as well as the structure of the resulting coordination polymers, probably due to the participation of the nitro group in lone pair···π and/or C-H···π interactions.
Abstract:
Three new Mn(II) coordination compounds {[Mn(NCNCN)2(azpy)]·0.5azpy}n (1), {[Mn(NCS)2(azpy)(CH3OH)2]·azpy}n (2), and [Mn(azpy)2(H2O)4][Mn(azpy)(H2O)5]·4PF6·H2O·5.5azpy (3) (where azpy = 4,4'-azobis-(pyridine)) have been synthesized by self-assembly of the primary ligands, dicyanamide, thiocyanate, and hexafluorophosphate, respectively, together with azpy as the secondary spacer. All three complexes were characterized by elemental analyses, IR spectroscopy, thermal analyses, and single crystal X-ray crystallography. The structural analyses reveal that complex 1 forms a two-dimensional (2D) grid sheet motif. These sheets assemble to form a microporous framework that incorporates coordination-free azpy through host-guest π···π and C-H···N hydrogen bonding interactions. Complex 2 features azpy-bridged one-dimensional (1D) chains of centrosymmetric [Mn(NCS)2(CH3OH)2] units which form a 2D porous sheet via CH3···π supramolecular interactions. A guest azpy molecule is incorporated within the pores by strong H-bonding interactions. Complex 3 affords a 0D motif with two monomeric Mn(II) units in the asymmetric unit. There exist π···π, anion···π, and strong hydrogen bonding interactions between the azpy, water, and the anions. Density functional theory (DFT) calculations, at the M06/6-31+G* level of theory, are used to characterize a great variety of interactions that explicitly show the importance of host-guest supramolecular interactions for the stabilization of coordination compounds and the creation of the fascinating three-dimensional (3D) architecture of the title compounds.
Abstract:
A metal-organic framework of Cu(II), tartrate (tar) and 2,2'-bipyridyl (2,2'-bipy), {[Cu(tar)(2,2'-bipy)]·5H2O}n (1), has been synthesized under mild ambient conditions and characterized by single crystal X-ray crystallography. In the compound, the Cu(2,2'-bipy) entities are bridged by tartrate ions, which are coordinated to Cu(II) by both hydroxyl and monodentate carboxylate oxygen atoms to form a one-dimensional chain. The non-coordinated water molecules form 1D water chains built from edge-sharing cyclic water pentamers, along with dangling water dimers. The compound shows reversible water expulsion upon heating. The water chains join the 1D coordination polymeric chains into a 3D network through hydrogen-bond interactions.