862 results for point of interest (POI)
Abstract:
Signal processing is an important topic in technological research today. In the area of nonlinear dynamics research, the endeavor to control or order chaos is an issue that has received increasing attention over the last few years. Increasing interest in neural networks composed of simple processing elements (neurons) has led to the widespread use of such networks to learn to control dynamic systems. This paper presents a backpropagation-based neural network architecture that can be used as a controller to stabilize unstable periodic orbits. It also presents a neural network-based method for transferring the dynamics among attractors, leading to more efficient system control. The procedure can be applied to every point of the basin, no matter how far from the attractor it is. Finally, this paper shows how two mixed chaotic signals can be controlled using a backpropagation neural network as a filter to separate and control both signals at the same time. The neural network provides more effective control, overcoming the problems that arise with feedback control methods. Control is more effective because it can be applied to the system at any point, even if the system is moving away from the target state, which eliminates waiting times. Control can also be applied even when there is little information about the system, and it remains stable longer, even in the presence of random dynamic noise.
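The feedback control methods that the abstract contrasts with the neural controller can be made concrete. Below is a minimal, illustrative OGY-style linear feedback controller that stabilizes the unstable fixed point of the chaotic logistic map; the map, the gain derivation and the control window are assumptions chosen for illustration, not details taken from the paper.

```python
# OGY-style linear feedback control of the chaotic logistic map
# x_{n+1} = r * x_n * (1 - x_n); a stand-in for the classical
# feedback methods the paper compares its neural controller against.

def stabilize(x0, r=3.9, steps=20, window=0.02, max_dr=0.1):
    xstar = 1.0 - 1.0 / r                  # unstable fixed point of the map
    f_x = 2.0 - r                          # df/dx at (xstar, r)
    f_r = xstar * (1.0 - xstar)            # df/dr at (xstar, r)
    gain = -f_x / f_r                      # chosen to cancel the linear error term
    x = x0
    for _ in range(steps):
        e = x - xstar
        dr = gain * e if abs(e) < window else 0.0   # act only near x*
        dr = max(-max_dr, min(max_dr, dr))          # small perturbations only
        x = (r + dr) * x * (1.0 - x)
    return x, xstar
```

Starting just inside the control window, the orbit is pinned to the fixed point within a few iterations; the waiting time the abstract mentions is the time a free chaotic orbit needs to wander into that window.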
Abstract:
2000 Mathematics Subject Classification: 62J12, 62F35
Abstract:
Advertising and other forms of communication are often used by government bodies, non-government organisations, and other institutions to try to influence the population either to a) reduce some form of harmful behaviour (e.g. smoking, drunk-driving) or b) increase some healthier behaviour (e.g. eating healthily). It is common for these messages to be predicated on the chances of some negative event occurring if the individual does not either a) stop the harmful behaviour, or b) start or increase the healthy behaviour. This design of communication is referred to by many names in the relevant literature, but for the purposes of this thesis will be termed a ‘threat appeal’. Despite their widespread use in the public sphere, and concerted academic interest since the 1950s, the effectiveness of threat appeals in delivering their objective remains unclear in many ways. In a detailed, chronological and thematic examination of the literature, two assumptions are uncovered that have either been upheld despite little evidence to support them, or have received scant attention in the literature: specifically, a) that threat appeal characteristics can be conflated with their intended responses, and b) that a threat appeal always and necessarily evokes a fear response in the subject. A detailed examination of these assumptions underpins this thesis. The intention is to take the equivocality of empirical results as a point of departure, and deliver a novel approach with the objective of reducing the confusion that is evident in existing work. More specifically, the present thesis frames cognitive and emotional responses to threat appeals as part of a decision about future behaviour.
To further develop theory, a conceptual framework is presented that outlines the role of anticipated and anticipatory emotions, alongside subjective probabilities, elaboration and immediate visceral emotions, resulting from the manipulation of the intrinsic message characteristics of a threat appeal (namely, message direction, message frame and graphic image). In doing so, the spectrum of relevant literature is surveyed and used to develop a theoretical model that integrates key strands of theory into a coherent whole. In particular, the emotional and cognitive responses to the threat appeal manipulations are hypothesised to influence behavioural intentions and expectations pertaining to future behaviour. Using data from a randomised experiment with a sample of 681 participants, the conceptual model was tested using analysis of covariance. The results were encouraging overall, both for the conceptual framework and for the individual hypotheses. In particular, the empirical results showed clearly that emotional responses to the intrinsic message characteristics are not restricted to fear, and that different responses to threat appeals were clearly attributable to specific intrinsic message characteristics. In addition, the inclusion of anticipated emotions alongside cognitive appraisals in the framework generated interesting results. Specifically, immediate emotions did not influence key response variables related to future behaviour, supporting the case for questioning the prominent role the existing literature assigns to fear in the response process. The findings, theoretical and practical implications, limitations and directions for future research are discussed.
Abstract:
While issues relating to the development, legitimacy and accountability of the European Police Office, Europol, have been intensively discussed in political and academic circles, the actual impact of Europol on policy-making in the European Union has yet to receive scholarly attention. By investigating the evolution and the role of Europol's organized crime reports, this article elaborates on whether Europol has been able to exert an influence beyond its narrowly defined mandate. Theoretically informed by the assumptions of experimentalist governance, the article argues that the different legal systems and policing traditions of EU member states have made it difficult for the EU to agree on a common understanding of how to fight organized crime. This lack of consensus, which has translated into a set of vague and broadly formulated framework goals and guidelines, has enabled Europol to position its Organized Crime Threat Assessments as the point of reference in the respective EU policy-making area. Europol's interest in improving its institutional standing thereby converged with the interest of different member states to use Europol as a socialization platform to broadcast their ideas and to ‘Europeanize’ their national counter-organized crime policy.
Abstract:
Engineering education in the United Kingdom is at the point of embarking upon an interesting journey into uncharted waters. At no point in the past have there been so many drivers for change and so many opportunities for the development of engineering pedagogy. This paper will look at how Engineering Education Research (EER) has developed within the UK and what differentiates it from the many small-scale practitioner interventions, perhaps without a clear research question or with little evaluation, which are presented at numerous staff development sessions, workshops and conferences. From this position some examples of current projects will be described, outcomes of funding opportunities will be summarised and the benefits of collaboration with other disciplines illustrated. In this study, I will account for how the design of the task structure according to variation theory, as well as the probe-ware technology, makes the laws of force and motion visible and learnable and, especially in the lab studied, makes Newton's third law visible and learnable. I will also, as a comparison, include data from a mechanics lab that uses the same probe-ware technology and deals with the same topics in mechanics, but uses a differently designed task structure. I will argue that the lower achievements on the FMCE test in this latter case can be attributed to these differences in the task structure of the lab instructions. According to my analysis, the necessary pattern of variation is not included in the design. I will also present a microanalysis of 15 hours of video recordings of engineering students' activities in a lab on impulse and collisions. The important object of learning in this lab is the development of an understanding of Newton's third law. The approach of analysing students' interaction using video data is inspired by ethnomethodology and conversation analysis; that is, I will focus on students' practical, contingent and embodied inquiry in the setting of the lab.
I argue that my results corroborate variation theory and show that this theory can be used as a 'tool' for designing labs as well as for analysing labs and lab instructions. Thus my results have implications outside the domain of this study, in particular for understanding critical features of student learning in labs. Engineering higher education is well used to change. As technology develops, the abilities expected of graduates by employers expand, yet our understanding of how to make informed decisions about learning and teaching strategies does not expand without a conscious effort. With the numerous demands of academic life, we often fail to acknowledge our incomplete understanding of how our students learn within our discipline. The journey facing engineering education in the UK is being driven by two classes of drivers. Firstly, there are those which we have been working to understand better, such as retention and employability; secondly, there are new challenges such as substantial changes to funding systems allied with an increase in student expectations. Only through continued research can priorities be identified and addressed, and a coherent and strong voice for informed change be heard within the wider engineering education community. This new position makes it even more important that through EER we acquire the knowledge and understanding needed to make informed decisions regarding approaches to teaching, curriculum design and measures to promote effective student learning. This then raises the question: how does EER function within a diverse academic community? Within an existing community of academics interested in taking meaningful steps towards understanding the ongoing challenges of engineering education, a Special Interest Group (SIG) has formed in the UK.
The formation of this group has itself been part of the rapidly changing environment through its facilitation by the Higher Education Academy's Engineering Subject Centre, an entity which through the Academy's current restructuring will no longer exist as a discrete Centre dedicated to supporting engineering academics. The aims of this group, the activities it is currently undertaking and how it expects to network and collaborate with the global EER community will be reported in this paper. This will include explanation of how the group has identified barriers to the progress of EER and how it is seeking, through a series of activities, to facilitate recognition and growth of EER both within the UK and with our valued international colleagues.
Abstract:
The aim of this article is to give an overview of some of the main milestones of the process set off in the early 1970s by the papers of Black, Scholes and Merton on option pricing, a process that simultaneously revolutionized developed Western financial markets and financial theory. / === / This review article compares the development of financial theory within and outside Hungary in the last three decades, starting with the Black-Scholes revolution. Problems like the term structure of interest rate volatilities, which is the focus of much research internationally, have not received proper attention among Hungarian economists. The article gives an overview of no-arbitrage pricing, the partial differential equation approach and the related numerical techniques, such as lattice methods, in pricing financial derivatives. The relevant concepts of the martingale approach are reviewed, with a special focus on the HJM framework for interest rate dynamics. The idea that volatility and correlation can be traded opens a new horizon for the Hungarian capital market.
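The lattice methods mentioned in the abstract can be illustrated with a minimal Cox-Ross-Rubinstein binomial tree for a European call, checked against the Black-Scholes closed form; the parameters below are illustrative, not taken from the article.

```python
import math

def bs_call(S, K, r, sigma, T):
    """Black-Scholes closed-form price of a European call."""
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def crr_call(S, K, r, sigma, T, n=400):
    """Cox-Ross-Rubinstein binomial lattice price of a European call."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))      # up factor
    d = 1.0 / u                              # down factor
    p = (math.exp(r * dt) - d) / (u - d)     # risk-neutral up probability
    disc = math.exp(-r * dt)
    # terminal payoffs, then backward induction through the lattice
    values = [max(S * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    for _ in range(n):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]
```

As the number of lattice steps grows, the binomial price converges to the closed-form no-arbitrage price, which is the link between the two approaches the article surveys.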
Abstract:
Labelling (also called the label) is part of the packaging; its primary function is to inform about product attributes, while it is also one of the most important points of contact between the company and the consumer. It plays a prominent role in the marketing and management toolbox, since it is a decisive input to consumer decision-making. The author presents the definition, types and classification of labels, and then describes their significance and role using food products as examples. Based on a survey of 630 respondents, a new interpretation of labels is derived with multidimensional scaling (MDS): labels can be positioned along three dimensions (prior knowledge, interest, reliability) and form five homogeneous groups (classic, dietary, functional, conscious, production). The relevance of the topic is underlined by the growing interest in health and the environment and by the changing regulatory environment. / === / Signs, labels and claims are part of the packaging and serve to inform consumers of product attributes. Labelling is one of the most important marketing and management tools, since purchase decisions are made at the point of purchase. The aim of this paper is to present the basic definitions and elements of the information content on food packaging. As a result of a pilot study, the author developed a new approach to examining labelling using multidimensional scaling. Labels can be distinguished along three dimensions: precognition, interest and reliability. Beyond that, labels can be sorted into five homogeneous clusters based on classic, dietary, functional, conscious and production attributes. The relevance of labelling is supported by the growing interest in health and environmental issues and the changing legal environment.
Abstract:
In project TÁMOP-4.2.1.B-09/1/KMR-2010-0005, the finance research group carried out a wide range of analytical work. We showed that the increased leverage of economic actors at various levels clearly leads to growth in systemic risk, since the probability of default of the individual actors rises. If leverage is restricted to different degrees and at a different pace across sectors and countries, the actors that introduce the restrictions later gain a clear competitive advantage. Examining the capital allocation of financial institutions, we showed that the capital (risk) covering operations can always be divided among the divisions in such a way that no party has an interest in terminating the agreement. This, however, cannot be done fairly from every point of view, so some business lines may suffer a competitive disadvantage if competing market players burden the given activity less unfairly. We showed that the performance of pension funds' investment activity is strongly affected by the regulation of private pension funds; these rules affect the long-term competitiveness of society. We also pointed out that before the economic crisis domestic banks were unable to judge their clients' risk-bearing capacity correctly, and their commission systems gave them no interest in doing so. Several of our studies dealt with the competitiveness of Hungarian companies. We examined how various taxes, exchange rate risks and financing policies influence competitiveness. A separate study examined the effects on firm value of interest rate volatility and of the asset collateral attached to loans. We pointed out the growing risk of non-payment and reviewed the management strategies that are possible and those actually applied.
We also examined how the owners of listed companies exploit the tax optimisation opportunities attached to dividend payments; based on practical market experience, investors carry out tax-avoiding trades in a significant share of stocks. A separate study dealt with the role of intellectual capital at Hungarian companies: in 2009 firms handled the issue with substantially greater expertise than five years earlier. We also pointed out that the ownership background can have a substantial effect on how companies build their goal systems and how they view intellectual assets. _____ The Finance research team has covered a wide range of research fields while taking part in project TÁMOP-4.2.1.B-09/1/KMR-2010-0005. It has been shown that increasing financial gearing at the different economic actors clearly leads to growth in systemic risk as the probability of bankruptcy climbs. Once leverage is limited at different levels and at different points in time for the different sectors, countries introducing the limitations later clearly gain a competitive advantage. When investigating the leverage at financial institutions we found that the capital requirement of the operation can always be divided among divisions so that none of them would be better off cancelling the cooperation. But this cannot always be done fairly from all points of view, meaning some of the divisions may face a competitive disadvantage if competitors charge their similar divisions less unfairly. Research has also shown that the regulation of private pension funds has a vital effect on the profitability of the investment activity of the funds. These laws and regulations do not only affect the funds themselves but also the competitiveness of the whole society. We have also found that Hungarian banks were unable to estimate correctly the risk-taking ability of their clients before the economic crisis.
On top of that, the banks were not even interested in doing so due to their commission-based income model. We also carried out several research projects on the competitiveness of Hungarian firms. The effect of taxes, currency rate risks and financing policies on competitiveness has been analysed in detail. A separate research project was dedicated to the effect on firm value of interest rate volatility and of the asset collateral linked to debts. The increasing risk of non-payment has been underlined, and we also reviewed the adequate management strategies potentially available and used in real life. We also investigated how the shareholders of listed companies use the tax optimising possibilities linked to dividend payments. Based on our findings on the Hungarian markets, the owners perform tax-evading trades for most of the shares. A separate research project has been carried out on the role played by intellectual capital; it found that Hungarian companies dealt with the problem in 2009 with far higher proficiency than five years earlier. We also pointed out that the ownership structure has a considerable influence on how firms structure their aims and view their intangible assets.
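The capital allocation result described in the abstract belongs to cooperative game theory: an allocation that no coalition of divisions benefits from cancelling is a core allocation. A standard constructive scheme is the Shapley value, sketched below for three hypothetical divisions; the coalition capital requirements are invented for illustration and are not from the research.

```python
from itertools import permutations

def shapley(players, v):
    """Shapley value: average marginal capital contribution over all orderings."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            phi[p] += v[with_p] - v[coalition]   # marginal contribution of p
            coalition = with_p
    return {p: phi[p] / len(orders) for p in players}

# Hypothetical risk-capital requirements of coalitions of divisions A, B, C;
# sub-additivity reflects diversification between divisions.
v = {frozenset(): 0.0,
     frozenset({'A'}): 10.0, frozenset({'B'}): 20.0, frozenset({'C'}): 30.0,
     frozenset({'A', 'B'}): 26.0, frozenset({'A', 'C'}): 36.0,
     frozenset({'B', 'C'}): 44.0, frozenset({'A', 'B', 'C'}): 50.0}

alloc = shapley(['A', 'B', 'C'], v)
```

In this example the allocation is efficient (it sums to the total requirement of 50) and each division is charged less than its standalone requirement, so no division gains by leaving; as the abstract notes, such an allocation need not be fair in every other sense.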
Abstract:
The purpose of this research is to develop design considerations for environmental monitoring platforms for the detection of hazardous materials using System-on-a-Chip (SoC) design. Design considerations focus on improving key areas such as: (1) sampling methodology; (2) context awareness; and (3) sensor placement. These design considerations for environmental monitoring platforms using wireless sensor networks (WSN) are applied to the detection of methylmercury (MeHg) and the environmental parameters affecting its formation (methylation) and deformation (demethylation). The sampling methodology investigates a proof of concept for the monitoring of MeHg using three primary components: (1) chemical derivatization; (2) preconcentration using the purge-and-trap (P&T) method; and (3) sensing using Quartz Crystal Microbalance (QCM) sensors. This study focuses on the measurement of inorganic mercury (Hg) (e.g., Hg2+) and applies lessons learned to organic Hg (e.g., MeHg) detection. Context awareness of a WSN and its sampling strategies is enhanced by using spatial analysis techniques, namely geostatistical analysis (i.e., classical variography and ordinary point kriging), to help predict the phenomena of interest in unmonitored locations (i.e., locations without sensors). This aids in making more informed decisions on control of the WSN (e.g., communications strategy, power management, resource allocation, sampling rate and strategy, etc.). This methodology improves the precision of controllability by adding potentially significant information about unmonitored locations. Two types of sensors are investigated in this study for near-optimal placement in a WSN: (1) environmental (e.g., humidity, moisture, temperature) and (2) visual (e.g., camera) sensors. The near-optimal placement of environmental sensors is found using a strategy which minimizes the variance of the spatial analysis based on randomly chosen points representing the sensor locations.
Spatial analysis is carried out using geostatistical analysis, and optimization occurs with Monte Carlo analysis. Visual sensor placement is accomplished for omnidirectional cameras operating in a WSN using an optimal placement metric (OPM), which is calculated for each grid point based on line-of-sight (LOS) in a defined number of directions, where known obstacles are taken into consideration. Optimal areas for camera placement are determined from the areas generating the largest OPMs. The approach is examined statistically using Monte Carlo analysis with varying numbers of obstacles and cameras in a defined space.
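The OPM idea can be sketched in a few lines: score every free grid cell by the number of cells visible along eight line-of-sight rays, with obstacles blocking a ray, and place the camera at the highest-scoring cell. The grid size, ray set and scoring rule here are illustrative assumptions rather than the exact metric of the study.

```python
def opm_scores(rows, cols, obstacles):
    """Optimal-placement metric: for each free cell, count the cells
    visible along 8 line-of-sight rays; obstacles block a ray."""
    dirs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    scores = {}
    for r in range(rows):
        for c in range(cols):
            if (r, c) in obstacles:
                continue
            visible = 0
            for dr, dc in dirs:
                rr, cc = r + dr, c + dc
                # walk the ray until the grid edge or an obstacle
                while 0 <= rr < rows and 0 <= cc < cols and (rr, cc) not in obstacles:
                    visible += 1
                    rr, cc = rr + dr, cc + dc
            scores[(r, c)] = visible
    return scores

scores = opm_scores(5, 5, set())        # empty 5x5 grid, no obstacles
best = max(scores, key=scores.get)      # cell with the largest OPM
```

On an empty grid the centre cell wins, as expected; adding obstacle cells to the set shifts the optimum towards unobstructed regions.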
Abstract:
Secrecy is fundamental to computer security, but real systems often cannot avoid leaking some secret information. For this reason, the past decade has seen growing interest in quantitative theories of information flow that allow us to quantify the information being leaked. Within these theories, the system is modeled as an information-theoretic channel that specifies the probability of each output, given each input. Given a prior distribution on those inputs, entropy-like measures quantify the amount of information leakage caused by the channel. This thesis presents new results in the theory of min-entropy leakage. First, we study the perspective of secrecy as a resource that is gradually consumed by a system. We explore this intuition through various models of min-entropy consumption. Next, we consider several composition operators that allow smaller systems to be combined into larger systems, and explore the extent to which the leakage of a combined system is constrained by the leakage of its constituents. Most significantly, we prove upper bounds on the leakage of a cascade of two channels, where the output of the first channel is used as input to the second. In addition, we show how to decompose a channel into a cascade of channels. We also establish fundamental new results about the recently proposed g-leakage family of measures. These results further highlight the significance of channel cascading. We prove that whenever channel A is composition refined by channel B, that is, whenever A is the cascade of B and R for some channel R, the leakage of A never exceeds that of B, regardless of the prior distribution or leakage measure (Shannon leakage, guessing entropy leakage, min-entropy leakage, or g-leakage). Moreover, we show that composition refinement is a partial order if we quotient away channel structure that is redundant with respect to leakage alone.
These results are strengthened by the proof that composition refinement is the only way for one channel to never leak more than another with respect to g-leakage. Therefore, composition refinement robustly answers the question of when a channel is always at least as secure as another from a leakage point of view.
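The min-entropy leakage measure and the cascade construction described above can be made concrete in a few lines. The channel matrices below are invented examples; the final assertion in use illustrates the cascade bound the abstract states, namely that a cascade never leaks more than its first stage.

```python
import math

def posterior_vulnerability(prior, C):
    """V(prior, C) = sum over outputs y of max over inputs x of prior[x] * C[x][y]."""
    return sum(max(prior[x] * C[x][y] for x in range(len(prior)))
               for y in range(len(C[0])))

def min_entropy_leakage(prior, C):
    """Min-entropy leakage L(prior, C) = log2( V(prior, C) / V(prior) )."""
    return math.log2(posterior_vulnerability(prior, C) / max(prior))

def cascade(B, R):
    """Channel of B followed by R: the ordinary matrix product B.R."""
    return [[sum(B[x][z] * R[z][y] for z in range(len(R)))
             for y in range(len(R[0]))]
            for x in range(len(B))]
```

For the identity channel on n inputs under a uniform prior the leakage is log2 n bits (the secret is fully leaked), while post-processing through a second channel can only reduce it.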
Abstract:
The goal of this research was to determine the composition of boron deposits produced by pyrolysis of boron tribromide, and to use the results to (a) determine the experimental conditions (reaction temperature, etc.) necessary to produce alpha-rhombohedral boron and (b) guide the development/refinement of the pyrolysis experiments such that large, high purity crystals of alpha-rhombohedral boron can be produced with consistency. Developing a method for producing large, high purity alpha-rhombohedral boron crystals is of interest because such crystals could potentially be used to achieve an alpha-rhombohedral boron based neutron detector design (a solid-state detector) that could serve as an alternative to existing neutron detector technologies. The supply of neutron detectors in the United States has been hampered for a number of years by the current shortage of helium-3 (a gas used in many existing neutron detector technologies); the development of alternative neutron detector technology such as an alpha-rhombohedral boron based detector would help provide a more sustainable supply of neutron detectors in this country. In addition, the prospect of an alpha-rhombohedral boron based neutron detector is attractive because it offers the possibility of achieving a design that is smaller, longer-lived, less power-consuming, and potentially more sensitive than existing neutron detectors. The main difficulty associated with creating an alpha-rhombohedral boron based neutron detector is that producing large, high purity crystals of alpha-rhombohedral boron is extremely challenging. Past researchers have successfully made alpha-rhombohedral boron via a number of methods, but no one has developed a method for consistently producing large, high purity crystals.
Alpha-rhombohedral boron is difficult to make because it is only stable at temperatures below around 1100-1200 °C, its formation is very sensitive to impurities, and the conditions necessary for its formation are not fully understood or agreed upon in the literature. In this research, the method of pyrolysis of boron tribromide (hydrogen reduction of boron tribromide) was used to deposit boron on a tantalum filament. The goal was to refine this method, or potentially use it in combination with a second method (amorphous boron crystallization), to the point where it is possible to grow large, high purity alpha-rhombohedral boron crystals with consistency. A pyrolysis apparatus was designed and built, and a number of trials were run to determine the conditions (reaction temperature, etc.) necessary for alpha-rhombohedral boron production. This work was focused on the x-ray diffraction analysis of the boron deposits; x-ray diffraction was performed on a number of samples to determine the types of boron (and other compounds) formed in each trial and to guide the choices of test conditions for subsequent trials. It was found that at low reaction temperatures (in the range of around 830-950 °C), amorphous boron was the primary form of boron produced. Reaction temperatures in the range of around 950-1000 °C yielded various combinations of crystalline boron and amorphous boron. In the first trial performed at a temperature of 950 °C, a mix of amorphous boron and alpha-rhombohedral boron was formed. Using a scanning electron microscope, it was possible to see small alpha-rhombohedral boron crystals (on the order of ~1 micron in size) embedded in the surface of the deposit. 
In subsequent trials carried out at reaction temperatures in the range of 950-1000 °C, it was found that various combinations of alpha-rhombohedral boron, beta-rhombohedral boron, and amorphous boron were produced; the results tended to be unpredictable (alpha-rhombohedral boron was not produced in every trial), and the factors leading to success or failure were difficult to pinpoint. These results illustrate how sensitive a process producing alpha-rhombohedral boron can be, and indicate that further improvements to the test apparatus and test conditions (for example, higher purity/cleanliness) may be necessary to optimize the boron deposition. Although alpha-rhombohedral boron crystals of large size were not achieved, this research was successful in (a) developing a pyrolysis apparatus and test procedure that can serve as a platform for future testing, (b) determining reaction temperatures at which alpha-rhombohedral boron can form, and (c) developing a consistent process for analyzing the boron deposits and determining their composition. Further experimentation is necessary to achieve a pyrolysis apparatus and test procedure that can yield large alpha-rhombohedral boron crystals with consistency.
Abstract:
The different oxidation states of chromium allow its bulk oxide to be reducible, facilitating the oxygen vacancy formation process, which is a key property in applications such as catalysis. As with other useful oxides such as TiO2 and CeO2, the effect of substitutional metal dopants in bulk Cr2O3 on the electronic structure and on oxygen vacancy formation is of interest, particularly for enhancing the latter. In this paper, density functional theory (DFT) calculations with a Hubbard +U correction (DFT+U) applied to the Cr 3d and O 2p states are carried out on pure and metal-doped bulk Cr2O3 to examine the effect of doping on the electronic and geometric structure. The role of dopants in enhancing the reducibility of Cr2O3, and thereby promoting oxygen vacancy formation, is examined. The dopants are Mg, Cu, Ni, and Zn, which have a formal +2 oxidation state in their bulk oxides. Given this difference between host and dopant oxidation states, we show that, to predict the correct ground state, two metal dopants charge-compensated with an oxygen vacancy are required. The second oxygen atom removed is termed "the active" oxygen vacancy, and it is the energy required to remove this atom that is related to the reduction process. In all cases, we find that substitutional doping facilitates oxygen vacancy formation in bulk Cr2O3 by lowering the energy cost.
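The reducibility measure discussed here, the oxygen vacancy formation energy, is simple bookkeeping over DFT total energies. The sketch below uses the common convention of referencing the removed oxygen to half an O2 molecule; the numerical energies are hypothetical illustrations, not values from the paper.

```python
def vacancy_formation_energy(E_defective, E_pristine, E_O2):
    """E_f = E(cell with O vacancy) + 1/2 * E(O2) - E(pristine cell).

    The removed O atom is referenced to gas-phase O2; a smaller E_f
    means the oxide is easier to reduce."""
    return E_defective + 0.5 * E_O2 - E_pristine

# Hypothetical DFT+U total energies in eV, for illustration only:
E_f_pure = vacancy_formation_energy(-491.5, -500.0, -9.8)   # undoped Cr2O3 cell
E_f_doped = vacancy_formation_energy(-493.0, -500.0, -9.8)  # doped, charge-compensated cell
```

With these invented numbers the doped cell has the lower formation energy, i.e. doping lowers the energy cost of reduction, which is the trend the paper reports.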
Abstract:
Based on an original and comprehensive database of all fiction feature films produced in Mercosur between 2004 and 2012, the paper analyses whether the Mercosur film industry has evolved towards an integrated and culturally more diverse market. It provides a summary of policy opportunities in terms of integration and diversity, emphasizing the limited role played by regional policies. It then shows that although the Mercosur film industry remains rather disintegrated, it is becoming more integrated and culturally more diverse. From a methodological point of view, the combination of Social Network Analysis and the Stirling model opens up interesting research tracks for analysing creative industries in terms of their market integration and cultural diversity.
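The Stirling model referred to here scores a market's diversity as the Rao-Stirling index, the sum over distinct category pairs of d_ij * p_i * p_j, combining variety, balance (the shares p) and disparity (the distance matrix d). A minimal sketch, with invented market shares by country of origin and an invented disparity matrix:

```python
def stirling_diversity(shares, disparity):
    """Rao-Stirling diversity: sum over pairs i != j of d_ij * p_i * p_j."""
    cats = list(shares)
    return sum(disparity[a][b] * shares[a] * shares[b]
               for a in cats for b in cats if a != b)

# Hypothetical market shares of films by country of origin, and a
# disparity matrix (0 = culturally identical, 1 = maximally distant).
shares = {'AR': 0.4, 'BR': 0.4, 'UY': 0.2}
disparity = {
    'AR': {'AR': 0.0, 'BR': 0.6, 'UY': 0.3},
    'BR': {'AR': 0.6, 'BR': 0.0, 'UY': 0.5},
    'UY': {'AR': 0.3, 'BR': 0.5, 'UY': 0.0},
}
delta = stirling_diversity(shares, disparity)
```

A market dominated by a single origin scores zero, while spreading shares across culturally distant origins raises the index, which is the sense in which the paper measures growing cultural diversity.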
Abstract:
There has been plenty of debate in the academic literature about the nature of the common good or public interest in planning. There is a recognition that the idea is one that is extremely difficult to isolate in practical terms; nevertheless, scholars insist that the idea ‘…remains the pivot around which debates about the nature of planning and its purposes turn’ (Campbell & Marshall, 2002, 163–64). At the point of first principles, these debates have broached political theories of the state and even philosophies of science that inform critiques of rationality, social justice and power. In the planning arena specifically, much of the scholarship has tended to focus on theorising the move from a rational comprehensive planning system in the 1960s and 1970s, to one that is now dominated by deliberative democracy in the form of collaborative planning. In theoretical terms, this debate has been framed by a movement from what are perceived as objective and elitist notions of planning practice and decision-making to ones that are considered (by some) to be ‘inter-subjective’ and non-elitist. Yet despite significant conceptual debate, only a small number of empirical studies have tackled the issue by investigating notions of the common good from the perspective of planning practitioners. What do practitioners understand by the idea of the common good in planning? Do they actively consider it when making planning decisions? Do governance/institutional barriers exist to pursuing the common good in planning? In this paper, these sorts of questions are addressed using the case of Ireland. The methodology consists of a series of semi-structured qualitative interviews with 20 urban planners working across four planning authorities within the Greater Dublin Area, Ireland. The findings show that the most frequently cited definition of the common good is balancing different competing interests and avoiding/minimising the negative effects of development. 
The results show that practitioner views of the common good are far removed from the lofty ideals of planning theory and reflect the ideological shift of planners within an institution that has been heavily neoliberalised since the 1970s.
Abstract:
The forces surrounding the emerging economies of the underdeveloped world, especially Africa, have practically stifled economic progress, growth, development and sustainability. This economic condition brings to the fore the massive onslaught of rural and urban poverty with which the African continent has grappled from the post-World War II era to date. The economic misfortunes and incidence of mass poverty in Africa, and in Nigeria in particular, are used as a point of departure in this study. The paper underscores the ideological and philosophical undertone of international capital, manifesting in the form of colonialism and imperialism, as a major character in the historical process of underdevelopment and mass poverty in the peripheral states of Africa, Asia and Latin America. Of particular interest in this study are the activities of the domestic bourgeois elite class, who have displayed a lack of the much-needed vision and an abject lack of desire to draw up workable plans to redeem the battered image of the African and Nigerian economic misfortunes. This state of affairs has practically engendered economic underdevelopment, misery and disturbing levels of poverty in the nation-state system. The paper concludes with the way forward towards realizing the Vision 20-20-20 objectives in the 21st century and beyond.