791 results for local-global principle
Abstract:
Portuguese public and municipal finances currently face major challenges. Municipalities are confronted with severe financing constraints, particularly in their traditional sources: taxes, State transfers, and bank lending, which deepens the deterioration of municipal economies. The global financial crisis demonstrated the need to diversify those sources, exposing the fragilities of the public and private sectors of the Portuguese economy, aggravated by the scarcity of bank credit granted for investment in development projects. We believe that a great opportunity to fight this crisis is lost by not making due use of the potential that the domestic capital market offers for financing long-term projects. We argue that there is room to adopt municipal bonds, not as an alternative but as a complementary source of local financing.
Abstract:
Effective policies combating global warming and incentivising the reduction of greenhouse gases face fundamental collective action problems. States defending short-term interests avoid international commitments and seek to benefit from measures combating global warming taken elsewhere. The paper explores the potential of Common Concern as an emerging principle of international law, in particular international environmental law, in addressing collective action problems and the global commons. It expounds the contours of the principle and its relationship to the common heritage of mankind, to shared and differentiated responsibility, and to public goods. It explores the principle's potential to provide the foundations not only for international cooperation but also to justify, and at the same time delimit, unilateral action taken at home that deploys extraterritorial effects in addressing the challenges of global warming and climate change mitigation. As unilateral measures mainly translate into measures of trade policy, the principle of Common Concern is inherently linked to, and limited by, existing legal disciplines, in particular the law of the World Trade Organization.
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
In order to be relevant and useful in a fragmented developing-country context, community and regional planning needs to shift away from rigid tools towards more flexible, adaptive approaches. An international review of planning curricula indicated widespread consensus on the key competencies required of planners. This understanding was used in the development of new teaching programs at three Sri Lankan universities. Complementing the technical core knowledge areas, strong emphases on problem structuring, critical and strategic thinking, and an understanding of political and institutional contexts appear crucial to making the agenda of planning for sustainable development more than a fashionable cliché. For these core areas to have relevance in a developing-country context, however, planning curricula need to strike a balance between local priorities and a global perspective.
Abstract:
An X-ray visualization technique has been used for the quantitative determination of the local liquid holdup distribution and liquid holdup hysteresis in a nonwetting two-dimensional (2-D) packed bed. A medical diagnostic X-ray unit has been used to image the local holdups in a 2-D cold model having a random packing of expanded polystyrene beads. An aqueous barium chloride solution was used as the working fluid to achieve good contrast on the X-ray images. To quantify the local liquid holdup, a simple calibration technique has been developed that can be used with most radiological methods, such as gamma-ray and neutron radiography. The global value of total liquid holdup obtained by the X-ray method has been compared with two conventional methods: drainage and tracer response. The X-ray technique, after validation, has been used to visualize and quantify the liquid hysteresis phenomenon in a packed bed. The liquid flows in preferred paths or channels that carry droplets/rivulets of increasing size and number as the liquid flow rate is increased. When the flow is reduced, these paths are retained, and the higher liquid holdup that persists in these regions leads to the holdup hysteresis effect. Holdup in some regions of the packed bed may be an order of magnitude higher than the average at a particular flow rate. (c) 2005 American Institute of Chemical Engineers
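To make the calibration idea concrete, here is a minimal sketch in Python, assuming simple Beer-Lambert attenuation and entirely hypothetical calibration data; it illustrates the kind of grey-level-to-thickness mapping such a calibration performs, not the authors' actual procedure.

```python
import numpy as np

# Beer-Lambert: I = I0 * exp(-mu * t)  =>  ln(I0 / I) = mu * t
# Calibrate mu from images of known liquid thicknesses, then invert
# pixel intensities anywhere in the bed to a local liquid thickness.
# All values below are illustrative.

# Hypothetical calibration standards: known liquid path lengths (mm)
# and the grey levels measured through them.
thickness_mm = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
grey_level   = np.array([200.0, 170.0, 145.0, 104.0, 54.0])

I0 = grey_level[0]                       # intensity with no liquid
attenuation = np.log(I0 / grey_level)    # ln(I0/I) for each standard

# Least-squares slope through the origin gives the effective mu (1/mm).
mu = np.sum(attenuation * thickness_mm) / np.sum(thickness_mm ** 2)

def local_liquid_thickness(image, i0=I0, mu=mu):
    """Convert a grey-level image to a map of liquid thickness (mm)."""
    return np.log(i0 / np.asarray(image, dtype=float)) / mu

# Local holdup = liquid thickness / bed depth, for a 2-D bed of known depth.
bed_depth_mm = 10.0
holdup_map = local_liquid_thickness(grey_level) / bed_depth_mm
print(holdup_map)
```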
Abstract:
We investigate the structure of the positive solution set for nonlinear three-point boundary value problems of the form $u'' + h(t)f(u) = 0$, $u(0) = 0$, $u(1) = \lambda u(\eta)$, where $\eta \in (0,1)$ is given, $\lambda \in (0, 1/\eta)$ is a parameter, $f \in C([0,\infty), [0,\infty))$ satisfies $f(s) > 0$ for $s > 0$, and $h \in C([0,1], [0,\infty))$ is not identically zero on any subinterval of $[0,1]$. Our main results demonstrate the existence of continua of positive solutions of the above problem. (C) 2004 Elsevier Ltd. All rights reserved.
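As an illustration of how such positive solutions can be computed numerically, the sketch below applies a standard shooting method to one example instance of the problem; the choices h(t) = 1, f(u) = u^2, eta = 0.5 and lambda = 0.5 are assumptions for demonstration and are not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Example instance of  u'' + h(t) f(u) = 0,  u(0) = 0,
# u(1) = lambda * u(eta); h, f, eta and lambda are illustrative.
eta, lam = 0.5, 0.5            # eta in (0, 1), lambda in (0, 1/eta)
h = lambda t: 1.0
f = lambda u: u ** 2

def mismatch(slope):
    """Integrate the IVP u(0) = 0, u'(0) = slope and return the
    three-point boundary defect u(1) - lambda * u(eta)."""
    sol = solve_ivp(lambda t, y: [y[1], -h(t) * f(y[0])],
                    (0.0, 1.0), [0.0, slope],
                    dense_output=True, rtol=1e-10, atol=1e-12)
    return sol.sol(1.0)[0] - lam * sol.sol(eta)[0]

# A positive solution corresponds to a nontrivial root of the defect;
# the bracket below encloses a sign change for this instance.
slope = brentq(mismatch, 0.1, 50.0)
print("positive solution found with u'(0) =", slope)
```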
Abstract:
Australia’s media policy agenda has recently been dominated by debate over two key issues: media ownership reform, and the local content provisions of the Australia–United States Free Trade Agreement. Challenging the tendency to analyse these issues separately, the article considers them as interlinked indicators of fundamental shifts occurring in the digital media environment. Converged media corporations increasingly seek to achieve economies of scale through ‘content streaming’: multi-purposing proprietary content across numerous digitally enabled platforms. This has resulted in rivalries for control of delivery technologies (as witnessed in media ownership debates) as well as over market access for corporate content (in the case of local content debates). The article contextualises Australia’s contemporary media policy flashpoints within international developments and longer-term industry strategising. It further questions the power of media policy, as currently conceived, to deal adequately with the challenges raised by a converging digital media marketplace.
Abstract:
Elevated ocean temperatures can cause coral bleaching, the loss of colour from reef-building corals because of a breakdown of the symbiosis with the dinoflagellate Symbiodinium. Recent studies have warned that global climate change could increase the frequency of coral bleaching and threaten the long-term viability of coral reefs. These assertions are based on projecting the coarse output from atmosphere-ocean general circulation models (GCMs) to the local conditions around representative coral reefs. Here, we conduct the first comprehensive global assessment of coral bleaching under climate change by adapting the NOAA Coral Reef Watch bleaching prediction method to the output of a low- and high-climate sensitivity GCM. First, we develop and test algorithms for predicting mass coral bleaching with GCM-resolution sea surface temperatures for thousands of coral reefs, using a global coral reef map and 1985-2002 bleaching prediction data. We then use the algorithms to determine the frequency of coral bleaching and required thermal adaptation by corals and their endosymbionts under two different emissions scenarios. The results indicate that bleaching could become an annual or biannual event for the vast majority of the world's coral reefs in the next 30-50 years without an increase in thermal tolerance of 0.2-1.0 degrees C per decade. The geographic variability in required thermal adaptation found in each model and emissions scenario suggests that coral reefs in some regions, like Micronesia and western Polynesia, may be particularly vulnerable to climate change. Advances in modelling and monitoring will refine the forecast for individual reefs, but this assessment concludes that the global prognosis is unlikely to change without an accelerated effort to stabilize atmospheric greenhouse gas concentrations.
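The thermal-stress logic behind such bleaching predictions can be sketched in a few lines. The code below computes an accumulated stress index in the spirit of the NOAA Coral Reef Watch Degree Heating Weeks product (weekly exceedances of at least 1 degC above the climatological maximum monthly mean, summed over a rolling 12-week window); the SST series, the MMM value and the alert threshold are illustrative, not the paper's data.

```python
import numpy as np

def degree_heating_weeks(sst, mmm, window=12):
    """Accumulated thermal stress in degC-weeks: weekly SST exceedances
    of at least 1 degC above the climatological maximum monthly mean
    (MMM), summed over a rolling 12-week window."""
    sst = np.asarray(sst, dtype=float)
    hotspot = np.clip(sst - mmm, 0.0, None)   # positive anomalies only
    hotspot[hotspot < 1.0] = 0.0              # stress below 1 degC does not accumulate
    return np.array([hotspot[max(0, i - window + 1): i + 1].sum()
                     for i in range(len(hotspot))])

# Illustrative weekly SST series (degC) for one reef pixel and a
# hypothetical MMM of 29.0 degC.
sst = np.array([28.5, 29.2, 30.1, 30.4, 30.6, 30.3, 30.5, 30.2,
                29.8, 30.4, 30.6, 30.1, 29.4, 28.9])
dhw = degree_heating_weeks(sst, mmm=29.0)
print("max DHW:", dhw.max())                      # DHW >= 4 flags likely bleaching
print("bleaching alert weeks:", np.where(dhw >= 4.0)[0])
```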
Abstract:
The Gauss-Marquardt-Levenberg (GML) method of computer-based parameter estimation, in common with other gradient-based approaches, suffers from the drawback that it may become trapped in local objective function minima, and thus report parameter values that are not, in fact, optimal. This can seriously degrade its utility in the calibration of watershed models, where local optima abound. Nevertheless, the method also has advantages, chief among these being its model-run efficiency and its ability to report useful information on parameter sensitivities and covariances as a by-product of its use. It is also easily adapted to maintain this efficiency in the face of potential numerical problems (which adversely affect all parameter estimation methodologies) caused by parameter insensitivity and/or parameter correlation. This paper presents two algorithmic enhancements to the GML method that retain its strengths but overcome its weaknesses in the face of local optima. In the first method, an intelligent search for better parameter sets is conducted in parameter subspaces of decreasing dimensionality when progress of the parameter estimation process is slowed, either by numerical instability incurred through problem ill-posedness or by encountering a local objective function minimum. The second methodology minimizes the chance of successive GML parameter estimation runs finding the same objective function minimum by starting successive runs at points that are maximally removed from previous parameter trajectories. As well as enhancing the ability of a GML-based method to find the global objective function minimum, the latter technique can also be used to find the locations of many non-global optima (should they exist) in parameter space. This provides a useful means of inquiring into the well-posedness of a parameter estimation problem and of detecting the presence of bimodal parameter and predictive probability distributions. The new methodologies are demonstrated by calibrating a Hydrological Simulation Program-FORTRAN (HSPF) model against a time series of daily flows. Comparison with the SCE-UA method in this calibration context demonstrates a high level of comparative model-run efficiency for the new method. (c) 2006 Elsevier B.V. All rights reserved.
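A minimal sketch of the second restart strategy, assuming a toy two-parameter objective in place of the HSPF watershed model: each new Levenberg-Marquardt run starts at a candidate point maximally removed (in the maximin sense) from parameter points visited by earlier runs. For brevity only the start and end points of each trajectory are recorded, rather than the full trajectories the paper uses.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Toy residual function with several local minima; a stand-in for a
# watershed model's residuals, not the paper's HSPF calibration.
def residuals(p):
    return np.array([np.sin(3.0 * p[0]) + 0.1 * p[0] ** 2,
                     np.sin(3.0 * p[1]) + 0.1 * p[1] ** 2])

lo, hi = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
visited = []          # parameter points seen on previous runs
optima = []

for run in range(5):
    if not visited:
        start = rng.uniform(lo, hi)
    else:
        # Choose the candidate start that is maximally removed from
        # all previously visited parameter points (maximin criterion).
        candidates = rng.uniform(lo, hi, size=(500, 2))
        dists = np.min(np.linalg.norm(
            candidates[:, None, :] - np.array(visited)[None, :, :],
            axis=2), axis=1)
        start = candidates[np.argmax(dists)]

    res = least_squares(residuals, start, method="lm")  # GML-style local search
    visited.extend([start, res.x])
    optima.append((res.x.round(3), float(res.cost)))

for x, cost in optima:
    print("minimum at", x, "objective", cost)
```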
Abstract:
Policy and social work practice currently lack a theoretical framework that adequately explains the emergence, diffusion, and continuance of the intercountry adoption (ICA) phenomenon. Using South Korea as a case study and applying actor network theory to the ICA phenomenon, this paper introduces a theoretical approach that allows an examination of the complex interrelationships between the global and local influences on a country's engagement in ICA. This theoretical approach provides a different way of understanding the phenomenon, which, in turn, can better inform policies and practice that affect children and families across the globe.
Abstract:
The physical implementation of quantum information processing is one of the major challenges of current research. In the last few years, several theoretical proposals and experimental demonstrations on a small number of qubits have been carried out, but a quantum computing architecture that is straightforwardly scalable, universal, and realizable with state-of-the-art technology is still lacking. In particular, a major ultimate objective is the construction of quantum simulators, yielding massively increased computational power in simulating quantum systems. Here we investigate promising routes towards the actual realization of a quantum computer, based on spin systems. The first employs molecular nanomagnets with a doublet ground state to encode each qubit and exploits the wide chemical tunability of these systems to obtain the proper topology of inter-qubit interactions. Indeed, recent advances in coordination chemistry allow us to arrange these qubits in chains, with tailored interactions mediated by magnetic linkers. These act as switches of the effective qubit-qubit coupling, thus enabling the implementation of one- and two-qubit gates. Molecular qubits can be controlled either by uniform magnetic pulses or by local electric fields. We introduce two different schemes for quantum information processing, with either global or local control of the inter-qubit interaction, and demonstrate the high performance of these platforms by simulating the system's time evolution with state-of-the-art parameters. The second architecture we propose is based on a hybrid spin-photon qubit encoding, which exploits the best characteristics of photons, whose mobility is used to efficiently establish long-range entanglement, and of spin systems, which ensure long coherence times. The setup consists of spin ensembles coherently coupled to single photons within superconducting coplanar waveguide resonators. The tunability of the resonators' frequency is exploited as the only manipulation tool to implement a universal set of quantum gates, by bringing the photons into and out of resonance with the spin transition. The time evolution of the system subject to the pulse sequences used to implement complex quantum algorithms has been simulated by numerically integrating the master equation for the system's density matrix, thus including the harmful effects of decoherence. Finally, a scheme to overcome the leakage of information due to inhomogeneous broadening of the spin ensemble is pointed out. Both proposed setups are based on state-of-the-art technological achievements. Extensive numerical experiments show that their performance is remarkably good, even for the implementation of the long gate sequences used to simulate interesting physical models. The systems examined here are therefore promising building blocks of future scalable architectures and can be used for proof-of-principle experiments in quantum information processing and quantum simulation.
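A bare-bones sketch of the kind of master-equation simulation described above, for a single driven qubit with pure dephasing; the Hamiltonian, dephasing rate and pulse length are illustrative stand-ins, not the parameters of the proposed molecular or spin-photon platforms.

```python
import numpy as np

# Integrate the Lindblad master equation
#   drho/dt = -i [H, rho] + gamma (L rho L+ - 1/2 {L+ L, rho})
# for one qubit driven by a resonant pulse, with pure dephasing.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

omega = 2 * np.pi * 1.0      # Rabi frequency of the control pulse
gamma = 0.05                 # dephasing rate
H = 0.5 * omega * sx         # rotating-frame drive Hamiltonian
L = sz                       # dephasing jump operator

def lindblad_rhs(rho):
    comm = H @ rho - rho @ H
    diss = (L @ rho @ L.conj().T
            - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    return -1j * comm + gamma * diss

# Fourth-order Runge-Kutta time stepping of the density matrix.
rho = np.array([[1, 0], [0, 0]], dtype=complex)   # qubit initialised in |0>
dt, steps = 1e-3, 1000
for _ in range(steps):
    k1 = lindblad_rhs(rho)
    k2 = lindblad_rhs(rho + 0.5 * dt * k1)
    k3 = lindblad_rhs(rho + 0.5 * dt * k2)
    k4 = lindblad_rhs(rho + dt * k3)
    rho += (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

print("excited-state population after the pulse:", rho[1, 1].real)
print("trace preserved:", np.trace(rho).real)
```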
Abstract:
This study analyses the local print media produced on the Brazilian side of the Triple Frontier, the border region between Brazil, Paraguay and Argentina, taking as exemplary the Brazilian newspaper A Gazeta do Iguaçu, published in the city of Foz do Iguaçu, PR, which is part of the tri-national area. The main objectives are to analyse the characteristics of this local media, the incidence of news about the border, the newspaper's relationship with the communities and human groups living in the region, and the degree of influence and level of commitment of the newspaper to the politics of Foz do Iguaçu. The theoretical-methodological basis consists of the literature on the global-local question, studies on multiculturalism and cultural diversity (cultural studies), and inquiries into local and regional communication. The techniques used were, in addition to the literature review, content analysis of A Gazeta do Iguaçu and semi-structured interviews. Among the conclusions, it was found that coverage of the border is a priority topic but, at the same time, loses space to opinion journalism through the various opinion and society columns that occupy most of the daily local news.
Abstract:
Solving many scientific problems requires effective regression and/or classification models for large high-dimensional datasets. Experts from these problem domains (e.g. biologists, chemists, financial analysts) have insights that can be helpful in developing powerful models, but they need a modelling framework that helps them use those insights. Data visualisation is an effective technique for presenting data and eliciting feedback from the experts. A single global regression model can rarely capture the full behavioural variability of a huge multi-dimensional dataset. Instead, local regression models, each focused on a separate area of the input space, often work better, since the behaviour of different areas may vary. Classical local models such as Mixture of Experts segment the input space automatically, which is not always effective, and they lack the involvement of domain experts needed to guide a meaningful segmentation of the input space. In this paper we address this issue by allowing domain experts to interactively segment the input space using data visualisation. The segmentation obtained is then used to develop effective local regression models.
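A minimal sketch of the global-versus-local contrast, assuming a fixed, hypothetical expert segmentation in place of the interactive, visualisation-driven segmentation the paper proposes:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# Synthetic 1-D data whose behaviour changes across the input space.
x = rng.uniform(0, 10, 300).reshape(-1, 1)
y = np.where(x[:, 0] < 5, 2.0 * x[:, 0], 20.0 - 1.5 * x[:, 0])
y = y + rng.normal(0, 0.3, size=y.shape)

# Hypothetical expert-supplied segment boundaries; in the paper this
# segmentation would come from interactive data visualisation.
segments = [(0.0, 5.0), (5.0, 10.0)]

# One global model versus one local model per expert segment.
global_model = LinearRegression().fit(x, y)
print("global R^2:", global_model.score(x, y))

for lo, hi in segments:
    mask = (x[:, 0] >= lo) & (x[:, 0] <= hi)
    local_model = LinearRegression().fit(x[mask], y[mask])
    print(f"local R^2 on [{lo}, {hi}]:", local_model.score(x[mask], y[mask]))
```

Because each local model only has to fit one regime, the per-segment fits are far closer to the data than the single global line, which is the motivation for expert-guided segmentation.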