968 results for Local electronic structures
Abstract:
Local structures around host Ce and dopant Y cations in 10 mol% Y2O3-doped ceria solid solutions have been investigated by room- and high-temperature EXAFS spectroscopy. The results show that the local structures around the Ce cation in doped ceria samples are similar to those in the fluorite CeO2 structure, though the Ce-O coordination numbers tend to be smaller than 8. The local structures around the Y cation, however, are significantly different from those around the Ce cation, and show more resemblance to those around the Y cation in the C-type Y2O3 structure. A more accurate description of the local structures around the Y cation in doped ceria was obtained by analyzing Y K-edge EXAFS spectra based on the C-type Y2O3 structure. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
Current British government economic development policy emphasises regional- and sub-regional-scale, multi-agent initiatives that form part of national frameworks to encourage a 'bottom-up' approach to economic development. An emphasis on local multi-agent initiatives was also the mission of Training and Enterprise Councils (TECs). Using new survey evidence, this article tracks the progress of a number of initiatives established under the TECs, using the TEC Discretionary Fund as an example. It assesses the ability of successor bodies to be more effective in promoting local economic development. Survey evidence confirms that many projects previously set up by the TECs continue to operate successfully under new partnership arrangements. However, as new structures have developed and policy has become more centralised, it is less likely that similar local initiatives will be developed in future. There is evidence to suggest that with the end of the TECs a gap has emerged in the institutional infrastructure for local economic development, particularly with regard to workforce development. Much will depend in future on how the Regional Development Agencies deploy their growing power and resources.
Abstract:
Local Government Authorities (LGAs) are mainly characterised as information-intensive organisations. To satisfy their information requirements, effective information sharing within and among LGAs is necessary. Nevertheless, the dilemma of Inter-Organisational Information Sharing (IOIS) has been regarded as an inevitable issue for the public sector. Despite a decade of active research and practice, the field lacks a comprehensive framework to examine the factors influencing Electronic Information Sharing (EIS) among LGAs. The research presented in this paper contributes towards resolving this problem by developing a conceptual framework of factors influencing EIS in Government-to-Government (G2G) collaboration. By presenting this model, we attempt to clarify that EIS in LGAs is affected by a combination of environmental, organisational, business-process, and technological factors and that it should not be scrutinised merely from a technical perspective. To validate the conceptual rationale, a multiple-case-study research strategy was selected. From an analysis of the empirical data from two case organisations, this paper exemplifies the importance (i.e. prioritisation) of these factors in influencing EIS by utilising the Analytical Hierarchy Process (AHP) technique. The intent herein is to offer LGA decision-makers a systematic decision-making process for ranking EIS influential factors from most important to least important. This systematic process will also assist LGA decision-makers in better interpreting EIS and its underlying problems. The research reported herein should be of interest to both academics and practitioners who are involved in IOIS in general and collaborative e-Government in particular. © 2013 Elsevier Ltd. All rights reserved.
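The AHP prioritisation step described above can be sketched as follows. This is a minimal illustration of the standard principal-eigenvector method, not the paper's own implementation; the four-factor pairwise comparison matrix below is invented for illustration, not the study's data.

```python
import numpy as np

def ahp_weights(pairwise):
    """Derive priority weights from a pairwise comparison matrix via the
    principal eigenvector, and report the consistency ratio (CR)."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)          # Perron (largest) eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                      # normalised priority weights
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)         # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.12)  # Saaty's random index
    return w, ci / ri                    # CR < 0.10 is conventionally acceptable

# Hypothetical comparison of four factor groups (environmental,
# organisational, business-process, technological) on Saaty's 1-9 scale.
A = [[1,   2,   4,   3],
     [1/2, 1,   3,   2],
     [1/4, 1/3, 1,   1/2],
     [1/3, 1/2, 2,   1]]
w, cr = ahp_weights(A)
```

With this (illustrative) matrix, the first factor group receives the largest weight and the judgments are consistent (CR well below 0.10).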
Abstract:
This thesis divides into two distinct parts, both of which are underpinned by the tight-binding model. The first part covers our implementation of the tight-binding model in conjunction with the Berry-phase theory of electronic polarisation to probe the atomistic origins of spontaneous polarisation and piezoelectricity, as well as attempting to accurately calculate the values and coefficients associated with these phenomena. We first develop an analytic model for the polarisation of a one-dimensional linear chain of atoms. We compare the zincblende and ideal wurtzite structures in terms of effective charges, spontaneous polarisation and piezoelectric coefficients, within a first-nearest-neighbour tight-binding model. We further compare these to real wurtzite structures and conclude that accurate quantitative results are beyond the scope of this model, but qualitative trends can still be described. The second part of this thesis deals with implementing the tight-binding model to investigate the effect of local alloy fluctuations in bulk AlGaN alloys and InGaN quantum wells. We calculate the band gap evolution of Al(1-x)Ga(x)N across the full composition range and compare it to experiment, fitting bowing parameters to the band gap as well as to the conduction and valence band edges. We also investigate the wavefunction character of the valence band edge to determine the composition at which the optical polarisation switches in Al(1-x)Ga(x)N alloys. Finally, we examine electron and hole localisation in InGaN quantum wells. We show how the built-in field localises the carriers along the c-axis and how local alloy fluctuations strongly localise the highest hole states in the c-plane, while the electrons remain delocalised in the c-plane. We show how this localisation affects the charge density overlap and also investigate the effect of well-width fluctuations on the localisation of the electrons.
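The bowing-parameter fit mentioned above follows the standard quadratic form Eg(x) = (1-x)·Eg(AlN) + x·Eg(GaN) - b·x(1-x). A minimal sketch of such a fit is shown below; the end-point gaps are approximate literature values and the "calculated" alloy gaps are synthetic stand-ins, not the thesis's tight-binding results.

```python
import numpy as np

# Illustrative end-point band gaps (eV); not the thesis's fitted values.
Eg_AlN, Eg_GaN = 6.1, 3.5

def eg_vegard_bowing(x, b):
    """Band gap of Al(1-x)Ga(x)N: linear interpolation minus a bowing term."""
    return (1 - x) * Eg_AlN + x * Eg_GaN - b * x * (1 - x)

x = np.linspace(0.0, 1.0, 11)
b_true = 0.9                              # synthetic bowing parameter
eg_calc = eg_vegard_bowing(x, b_true)     # stand-in for calculated gaps

# The bowing parameter enters linearly, so it can be recovered directly:
# b = (linear interpolation - Eg_calc) / (x * (1 - x)) at interior points.
mask = (x > 0) & (x < 1)
linear = (1 - x) * Eg_AlN + x * Eg_GaN
b_fit = np.mean((linear[mask] - eg_calc[mask]) / (x[mask] * (1 - x[mask])))
```

The same one-parameter fit applies unchanged to the conduction- and valence-band edges, each with its own bowing parameter.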
Abstract:
The files accompanying my document were produced with the Mathematica software.
Abstract:
On March 11, 2011, an exceptionally large tsunami was triggered by a massive earthquake offshore of the northeast coast of Japan, which affected coastal infrastructure such as seawalls, coastal dikes and breakwaters in the Tohoku region. Such infrastructure was built to protect against the Level 1 tsunamis that previously hit the region, but not for events as significant as the 2011 Tohoku tsunami, which was categorized as a Level 2 tsunami [Shibayama et al. 2013]. The failure mechanisms of concrete-armoured dikes, breakwaters and seawalls due to Level 2 tsunamis are still not fully understood by researchers and engineers. This paper investigates the failure modes and mechanisms of damaged coastal structures in Miyagi and Fukushima Prefectures, following the authors' post-disaster field surveys carried out between 2011 and 2013. Six significant failure mechanisms were identified for the coastal dikes and seawalls affected by this tsunami: 1) leeward toe scour failure, 2) crown armour failure, 3) leeward slope armour failure, 4) seaward toe and armour failure, 5) overturning failure, and 6) parapet wall failure, with leeward toe scour recognized as the major failure mechanism at most surveyed locations. The authors also propose a simple, practical mathematical model for predicting the scour depth at the leeward toe of coastal dikes, considering the effects of the tsunami hydrodynamics, the soil properties and the type of structure. The key advantage of this model is that it depends entirely on quantities that are measurable in the field. Furthermore, the model was refined by conducting a series of hydraulic model experiments aimed at understanding the governing factors of the leeward toe scour failure. Finally, based on the results obtained, key recommendations are given for the design of resilient coastal defence structures that can survive a Level 2 tsunami event.
Abstract:
Due to the growth of design size and complexity, design verification is an important aspect of the logic circuit development process. The purpose of verification is to validate that the design meets the system requirements and specification. This is done by either functional or formal verification. The most popular approach to functional verification is the use of simulation-based techniques, in which models replicate the behaviour of an actual system. In this thesis, a software/data-structure architecture without explicit locks is proposed to accelerate logic gate circuit simulation. We call this system ZSIM. The ZSIM software architecture simulator targets low-cost SIMD multi-core machines. Its performance is evaluated on the Intel Xeon Phi and two other machines (Intel Xeon and AMD Opteron). The aim of these experiments is to: • Verify that the data structure used allows SIMD acceleration, particularly on machines with gather instructions (Section 5.3.1). • Verify that, on sufficiently large circuits, substantial gains can be made from multicore parallelism (Section 5.3.2). • Show that a simulator using this approach out-performs an existing commercial simulator on a standard workstation (Section 5.3.3). • Show that the performance on a cheap Xeon Phi card is competitive with results reported elsewhere on much more expensive supercomputers (Section 5.3.5). To evaluate ZSIM, two types of test circuits were used: 1. Circuits from the IWLS benchmark suite [1], which allow direct comparison with other published studies of parallel simulators. 2. Circuits generated by a parametrised circuit synthesizer. The synthesizer used an algorithm that has been shown to generate circuits that are statistically representative of real logic circuits. The synthesizer allowed testing of a range of very large circuits, larger than the ones for which it was possible to obtain open-source files.
The experimental results show that with SIMD acceleration and multicore parallelism, ZSIM gained a peak parallelisation factor of 300 on the Intel Xeon Phi and 11 on the Intel Xeon. With only SIMD enabled, ZSIM achieved a maximum parallelisation gain of 10 on the Intel Xeon Phi and 4 on the Intel Xeon. Furthermore, it was shown that this software architecture simulator running on a SIMD machine is much faster than, and can handle much bigger circuits than, a widely used commercial simulator (Xilinx) running on a workstation. The performance achieved by ZSIM was also compared with similar pre-existing work on logic simulation targeting GPUs and supercomputers. It was shown that the ZSIM simulator running on a Xeon Phi machine gives simulation performance comparable to the IBM Blue Gene supercomputer at very much lower cost. The experimental results showed that the Xeon Phi is competitive with simulation on GPUs and allows the handling of much larger circuits than have been reported for GPU simulation. When targeting the Xeon Phi architecture, the automatic cache management of the Xeon Phi handles the on-chip local store without any explicit mention of the local store in the architecture of the simulator itself. When targeting GPUs, however, explicit cache management in the program increases the complexity of the software architecture. Furthermore, one of the strongest points of the ZSIM simulator is its portability: the same code was tested on both AMD and Xeon Phi machines, and the same architecture that performs efficiently on the Xeon Phi was ported to a 64-core NUMA AMD Opteron. To conclude, the two main achievements are restated as follows: the primary achievement of this work was proving that the ZSIM architecture is faster than previously published logic simulators on low-cost platforms.
The secondary achievement was the development of a synthetic testing suite that went beyond the scale range that was previously publicly available, based on prior work that showed the synthesis technique is valid.
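The data-parallel flavour of gate-level simulation that ZSIM exploits can be illustrated, in spirit only (the thesis's actual data structure is not reproduced here), by classic word-level bit-parallel simulation: packing many independent input patterns into one machine word so that a single bitwise operation evaluates a gate for all patterns at once.

```python
# Illustrative sketch (not ZSIM's implementation): word-level bit-parallel
# logic simulation. Each 64-bit word holds 64 independent test patterns,
# so one bitwise operation simulates a gate for 64 patterns at once.
MASK = (1 << 64) - 1

def sim_netlist(netlist, inputs):
    """netlist: list of (out, op, a, b) tuples in topological order;
    inputs: dict mapping signal name -> packed 64-bit pattern word."""
    v = dict(inputs)
    for out, op, a, b in netlist:
        if op == "AND":
            v[out] = v[a] & v[b]
        elif op == "OR":
            v[out] = v[a] | v[b]
        elif op == "NOT":
            v[out] = ~v[a] & MASK   # mask keeps the word at 64 bits
        else:
            raise ValueError(op)
    return v

# Tiny example: XOR built from AND/OR/NOT, evaluated for packed patterns.
netlist = [
    ("na", "NOT", "a", None),
    ("nb", "NOT", "b", None),
    ("t1", "AND", "a", "nb"),
    ("t2", "AND", "na", "b"),
    ("x",  "OR",  "t1", "t2"),
]
a, b = 0b0101, 0b0011
out = sim_netlist(netlist, {"a": a, "b": b})["x"]  # 0b0110 (bitwise XOR)
```

On SIMD hardware the same idea extends across vector lanes, which is where gather instructions become relevant for fetching gate inputs.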
Abstract:
Property taxes serve as a vital revenue source for local governments. The revenues derived from the property tax function as the primary funding source for a variety of critical local public service systems. Property tax appeal systems serve as quasi-administrative-judicial mechanisms intended to assure the public that property tax assessments are correct, fair, and equitable. Despite these important functions, there is a paucity of empirical research related to property tax appeal systems. This study contributes to property tax literature by identifying who participates in the property tax appeal process and examining their motivations for participation. In addition, the study sought to determine whether patterns of use and success in appeal systems affected the distribution of the tax burden. Data were collected by means of a survey distributed to single-family property owners from two Florida counties. In addition, state and county documents were analyzed to determine appeal patterns and examine the impact on assessment uniformity, over a three-year period. The survey data provided contextual evidence that single-family property owners are not as troubled by property taxes as they are by the conduct of local government officials. The analyses of the decision to appeal indicated that more expensive properties and properties excluded from initial uniformity analyses were more likely to be appealed, while properties with homestead exemptions were less likely to be appealed. The value change analyses indicated that appeals are clustered in certain geographical areas; however, these areas do not always experience a greater percentage of the value changes. Interestingly, professional representation did not increase the probability of obtaining a reduction in value. Other relationships between the variables were discovered, but often with weak predictive ability. Findings from the assessment uniformity analyses were also interesting. 
The results indicated that the appeals mechanisms in both counties improved assessment uniformity. On average, appealed properties exhibited greater horizontal and vertical inequities, as compared to non-appealed properties, prior to the appeals process. After the appeal process was completed, the indicators of horizontal and vertical equity were largely improved. However, there were some indications of regressivity in the final year of the study.
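Horizontal assessment uniformity of the kind analysed above is commonly summarised by the coefficient of dispersion (COD) of assessment ratios, as standardised by the IAAO. A minimal sketch follows; the parcel values are invented for illustration, not the study's data.

```python
def coefficient_of_dispersion(assessed, sale_prices):
    """IAAO coefficient of dispersion: the average absolute deviation of
    assessment ratios from their median, as a percentage of the median.
    Lower COD indicates better horizontal uniformity."""
    ratios = sorted(a / s for a, s in zip(assessed, sale_prices))
    n = len(ratios)
    median = (ratios[n // 2] if n % 2 else
              (ratios[n // 2 - 1] + ratios[n // 2]) / 2)
    avg_abs_dev = sum(abs(r - median) for r in ratios) / n
    return 100.0 * avg_abs_dev / median

# Illustrative single-family parcels: assessed values vs. sale prices.
assessed = [95_000, 180_000, 240_000, 310_000]
sales =    [100_000, 200_000, 250_000, 300_000]
cod = coefficient_of_dispersion(assessed, sales)
```

Comparing the COD of appealed parcels before and after the appeal cycle is one way to quantify the uniformity improvement the study reports.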
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
The n→π* absorption transition of formaldehyde in water is analyzed using combined and sequential classical Monte Carlo (MC) simulations and quantum mechanics (QM) calculations. MC simulations generate the liquid solute-solvent structures for subsequent QM calculations. Using time-dependent density functional theory with a localized set of Gaussian basis functions (TD-DFT/6-311++G(d,p)), calculations are made on statistically relevant configurations to obtain the average solvatochromic shift. All results presented here use the electrostatic embedding of the solvent. The statistically converged average shift of 2300 cm-1 is compared to previous theoretical results available. An analysis is made of the effective dipole moment of the hydrogen-bonded shell and how it could be responsible for the polarization of the solvent molecules in the outer solvation shells.
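The sequential-MC/QM averaging step can be sketched as follows: the reported shift is the mean over statistically uncorrelated configurations, and convergence is checked with a cumulative average. The shift samples below are synthetic numbers centred near the reported ~2300 cm-1 purely for illustration; they are not QM results.

```python
import random
import statistics

# Synthetic per-configuration shifts (cm^-1), standing in for the
# TD-DFT values computed on statistically uncorrelated MC snapshots.
random.seed(1)
shifts = [random.gauss(2300.0, 150.0) for _ in range(100)]

mean = statistics.fmean(shifts)
sem = statistics.stdev(shifts) / len(shifts) ** 0.5  # standard error

# Convergence diagnostic: cumulative average vs. number of configurations;
# a flat tail indicates the average shift is statistically converged.
cumavg = [statistics.fmean(shifts[: k + 1]) for k in range(len(shifts))]
```

The point of the cumulative average is that only a statistically converged tail justifies quoting a single number such as 2300 cm-1.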
Abstract:
This work deals with an improved plane frame formulation whose exact dynamic stiffness matrix (DSM) presents, uniquely, a null determinant at the natural frequencies. In comparison with the classical DSM, the formulation presented herein has some major advantages: local mode shapes are preserved in the formulation, so that for any positive frequency the DSM is never ill-conditioned; and, in the absence of poles, it is possible to employ the secant method for a more computationally efficient eigenvalue extraction procedure. Applying the procedure to the more general case of Timoshenko beams, we introduce a new technique, named "power deflation", that makes the secant method suitable for the transcendental nonlinear eigenvalue problems based on the improved DSM. In order to avoid overflow occurrences that can hinder the secant method iterations, limiting frequencies are formulated, with scaling also applied to the eigenvalue problem. Comparisons with results available in the literature demonstrate the strength of the proposed method. Computational efficiency is compared with solutions obtained both by FEM and by the Wittrick-Williams algorithm.
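The secant-method frequency search on the DSM determinant can be sketched with a toy stand-in. The real DSM determinant is a transcendental function of frequency; here a single spring-mass system (det D(w) = k - m·w², natural frequency sqrt(k/m)) is used purely to illustrate the iteration, and is not the paper's formulation.

```python
def secant(f, w0, w1, tol=1e-10, maxit=100):
    """Secant iteration on f(w) = 0, here standing in for the determinant
    of the dynamic stiffness matrix as a function of frequency w."""
    f0, f1 = f(w0), f(w1)
    for _ in range(maxit):
        if abs(f1 - f0) < 1e-300:   # guard against a degenerate step
            break
        w2 = w1 - f1 * (w1 - w0) / (f1 - f0)
        if abs(w2 - w1) < tol:
            return w2
        w0, f0, w1, f1 = w1, f1, w2, f(w2)
    return w1

# Toy stand-in: single-DOF spring-mass, det D(w) = k - m*w^2.
# The natural frequency is sqrt(k/m) = 2.0 rad/s for k=4, m=1.
k, m = 4.0, 1.0
det_D = lambda w: k - m * w * w
wn = secant(det_D, 1.0, 1.5)
```

Because the improved DSM is never ill-conditioned at positive frequencies and has no poles, a bracketing-free iteration of exactly this kind becomes viable, which is the computational advantage the abstract highlights.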
Abstract:
Context. The formation and evolution of the Galactic bulge and its relationship with the other Galactic populations is still poorly understood. Aims. To establish the chemical differences and similarities between the bulge and other stellar populations, we performed an elemental abundance analysis of alpha- (O, Mg, Si, Ca, and Ti) and Z-odd (Na and Al) elements of red giant stars in the bulge as well as of local thin disk, thick disk and halo giants. Methods. We use high-resolution optical spectra of 25 bulge giants in Baade's window and 55 comparison giants (4 halo, 29 thin disk and 22 thick disk giants) in the solar neighborhood. All stars have similar stellar parameters but cover a broad range in metallicity (-1.5 < [Fe/H] < +0.5). A standard 1D local thermodynamic equilibrium analysis using both Kurucz and MARCS models yielded the abundances of O, Na, Mg, Al, Si, Ca, Ti and Fe. Our homogeneous and differential analysis of the Galactic stellar populations ensured that systematic errors were minimized. Results. We confirm the well-established differences for [alpha/Fe] at a given metallicity between the local thin and thick disks. For all the elements investigated, we find no chemical distinction between the bulge and the local thick disk, in agreement with our previous study of C, N and O but in contrast to other groups relying on literature values for nearby disk dwarf stars. For -1.5 < [Fe/H] < -0.3 exactly the same trend is followed by both the bulge and thick disk stars, with a star-to-star scatter of only 0.03 dex. Furthermore, both populations share the location of the knee in the [alpha/Fe] vs. [Fe/H] diagram. It still remains to be confirmed that the local thick disk extends to super-solar metallicities as is the case for the bulge. These are the most stringent constraints to date on the chemical similarity of these stellar populations. Conclusions. 
Our findings suggest that the bulge and local thick disk stars experienced similar formation timescales, star formation rates and initial mass functions, thus confirming the main outcomes of our previous homogeneous analysis of [O/Fe] from infrared spectra for nearly the same sample. The identical alpha-enhancements of thick disk and bulge stars may reflect a rapid chemical evolution taking place before the bulge and thick disk structures we see today were formed, or it may reflect Galactic orbital migration of inner disk/bulge stars resulting in stars in the solar neighborhood with thick-disk kinematics.
Abstract:
We have investigated the electronic and transport properties of zigzag Ni-adsorbed graphene nanoribbons (Ni/GNRs) using ab initio calculations. We find that Ni adatoms lying along the edge of zigzag GNRs represent the energetically most stable configuration, with an energy difference of approximately 0.3 eV when compared to adsorption in the middle of the ribbon. The carbon atoms at the ribbon edges still present nonzero magnetic moments as in the pristine GNR, even though the local magnetic moments at the C atoms bonded to the Ni are quenched by a factor of almost five. This quenching decays relatively fast, and at approximately 9 Å from the Ni adsorption site the magnetic moments already have values close to those of the pristine ribbon. At the opposite edge and at the central carbon atoms the changes in the magnetic moments are negligible. The energetic preference for antiparallel alignment between the magnetization at the opposite edges of the ribbon is still maintained upon Ni adsorption. We find many Ni d-related states within an energy window of 1 eV above and below the Fermi energy, which gives rise to spin-dependent charge transport. These results suggest the possibility of manufacturing spin devices based on GNRs doped with Ni atoms.