32 results for Problem resolution
in Helda - Digital Repository of the University of Helsinki
Abstract:
Burnt area mapping in humid tropical insular Southeast Asia using medium resolution (250-500 m) satellite imagery is complicated by persistent cloud cover, a wide range of land cover types, vast wetland areas and highly varying fire regimes. The objective of this study was to deepen understanding of three major aspects affecting the implementation and limits of medium resolution burnt area mapping in insular Southeast Asia: 1) fire-induced spectral changes, 2) the most suitable multitemporal compositing methods and 3) burn scar patterns and size distribution. The results revealed high variation in fire-induced spectral changes depending on the pre-fire greenness of the burnt area. It was concluded that this variation needs to be taken into account in change-detection-based burnt area mapping algorithms in order to maximize the potential of medium resolution satellite data. The minimum near infrared (MODIS band 2, 0.86 μm) compositing method was found to be the most suitable for burnt area mapping purposes using Moderate Resolution Imaging Spectroradiometer (MODIS) data. In general, medium resolution burnt area mapping was found to be usable in the wetlands of insular Southeast Asia, whereas in other areas its usability was seriously jeopardized by the small size of burn scars. The suitability of medium resolution data for burnt area mapping in wetlands is important because Southeast Asian wetlands have recently become a major point of interest in many fields of science due to annually occurring wildfires that not only degrade these unique ecosystems but also create a regional haze problem and, through burning peat, release globally significant amounts of carbon into the atmosphere. Finally, super-resolution MODIS images were tested, but the test failed to improve the detection of small scars. The super-resolution technique was therefore not considered applicable to regional-level burnt area mapping in insular Southeast Asia.
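To make the selected compositing rule concrete, here is a minimal sketch of per-pixel minimum-NIR compositing, assuming a cloud-masked stack of daily MODIS band-2 reflectances; the array layout, function name and example values are hypothetical, not taken from the thesis:

```python
import numpy as np

def min_nir_composite(nir_stack):
    """Per-pixel minimum near-infrared composite.

    nir_stack: array of shape (days, rows, cols) with MODIS band-2
    (0.86 um) reflectances and cloudy pixels masked as NaN. Taking the
    per-pixel minimum over the compositing window favours the dark,
    recently burnt surface over brighter vegetation and residual clouds.
    """
    return np.nanmin(nir_stack, axis=0)

# Usage with a synthetic 10-day stack of 2 x 2 pixels:
stack = np.random.uniform(0.25, 0.40, size=(10, 2, 2))
stack[3, 0, 0] = 0.08    # a burn darkens the NIR reflectance
stack[5, :, :] = np.nan  # a fully cloudy day is masked out
composite = min_nir_composite(stack)  # picks 0.08 at pixel (0, 0)
print(composite)
```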
Abstract:
This thesis proposes that national or ethnic identity is an important and overlooked resource in conflict resolution. Usually, ethnic identity is seen both in international relations and in social psychology as something that fuels conflict. Using grounded theory to analyze data from interactive problem-solving workshops between Palestinians and Israelis, a theory is developed about the role of national identity in turning conflict into protracted conflict. Drawing upon research from, among others, social identity theory, just world theory and prejudice research, it is argued that national identity is a prime candidate to provide the justification of a conflict party's goals and the dehumanization of the other that are necessary to make a conflict protracted. It is not the nature of national identity itself that lets it perform this role but rather its ability to mobilize a constituency for social action (see Stürmer, Simon, Loewy, & Jörger, 2003). Reicher & Hopkins (1996) have demonstrated that national identity is constructed by political entrepreneurs to further their cause, even if this construction is not a conscious one. Data from interactive problem-solving workshops suggest that the possibility of conflict resolution is actually seen by participants as a direct threat of annihilation. Given the investment necessary to make a conflict protracted, this reaction seems plausible. The justification for one's actions provided by national identity makes the conflict an integral part of a conflict party's identity. Conflict resolution, it is argued, is therefore a threat to the very core of the current national identity. This may explain why so many peace agreements have failed to provide the hoped-for resolution of conflict. But if national identity is being used in a constructionist way to attain political goals, then a political project of conflict resolution, if it is conscious of the constructionist process, needs to develop a national identity that is independent of conflict and therefore able to accommodate conflict resolution. From this understanding it becomes clear why national identity needs to change, i.e. be disarmed, if conflict resolution is to be successful. This process of disarmament is theorized to be similar to the process of creating and sustaining protracted conflict. What shape and function this change should have is explored from the understanding of the role of national identity in supporting conflict. Finally, ideas are developed as to how track-two diplomacy efforts, such as the interactive problem-solving workshop, could integrate a process by which both conflict parties disarm their respective identities.
Abstract:
In this study I consider what kind of perspective on the mind-body problem is taken, and can be taken, by the philosophical position called non-reductive physicalism. Many positions fall under this label. The form of non-reductive physicalism which I discuss is in essential respects the position taken by Donald Davidson (1917-2003) and Georg Henrik von Wright (1916-2003). I defend their positions and discuss the unrecognized similarities between their views. Non-reductive physicalism combines two theses: (a) everything that exists is physical; (b) mental phenomena cannot be reduced to states of the brain. This means that according to non-reductive physicalism the mental aspect of humans (be it a soul, mind, or spirit) is an irreducible part of the human condition. Davidson and von Wright also claim that, in some important sense, the mental aspect of a human being does not reduce to the physical aspect, that there is a gap between these aspects that cannot be closed. I claim that their arguments for this conclusion are convincing. I also argue that whereas von Wright and Davidson give interesting arguments for the irreducibility of the mental, their physicalism is unwarranted. These philosophers do not give good reasons for believing that reality is thoroughly physical. Notwithstanding the materialistic consensus in contemporary philosophy of mind, the ontology of mind is still uncharted territory where real breakthroughs are not to be expected until a radically new ontological position is developed. The third main claim of this work is that the problem of mental causation cannot be solved from the Davidsonian-von Wrightian perspective. The problem of mental causation is the problem of how mental phenomena such as beliefs can cause physical movements of the body. As I see it, the essential point of non-reductive physicalism (the irreducibility of the mental) and the problem of mental causation are closely related. If mental phenomena do not reduce to causally effective states of the brain, then what justifies the belief that mental phenomena have causal powers? If mental causes do not reduce to physical causes, then how can one tell when, or whether, the mental causes in terms of which human actions are explained are actually effective? I argue that this question, how to decide when mental causes really are effective, is the real problem of mental causation. The motivation to explore and defend a non-reductive position stems from the belief that reductive physicalism leads to serious ethical problems. My claim is that Davidson's and von Wright's ultimate reason to defend a non-reductive view comes back to their belief that a reductive understanding of human nature would be a narrow and possibly harmful perspective. The final conclusion of my thesis is that von Wright's and Davidson's positions provide a starting point from which the current scientistic philosophy of mind can be critically explored further in the future.
Abstract:
Design embraces several disciplines dedicated to the production of artifacts and services. These disciplines are quite independent, and only recently has psychological interest focused on them. Nowadays, psychological theories of design, also called the design cognition literature, describe the design process from the information-processing viewpoint. These models co-exist with normative standards of how designs should be crafted. In many places there are concrete discrepancies between the two, in a way that resembles the differences between actual and ideal decision-making. This study aimed to explore a possible difference related to problem decomposition. Decomposition is a standard component of human problem-solving models and is also included in the normative models of design. The idea of decomposition is to focus on a single aspect of the problem at a time. Despite its significance, the nature of decomposition in conceptual design is poorly understood and has only been investigated in a preliminary fashion. This study addressed the status of decomposition in the conceptual design of products using protocol analysis. Previous empirical investigations have argued that there is implicit and explicit decomposition, but have not provided a theoretical basis for the two. Therefore, the current research began by reviewing the problem-solving and design literature and then composing a cognitive model of the solution search of conceptual design. The result is a synthetic view which describes recognition and decomposition as the basic schemata for conceptual design. A psychological experiment was conducted to explore decomposition. In the test, sixteen (N=16) senior students of mechanical engineering created concepts for two alternative tasks. The concurrent think-aloud method and protocol analysis were used to study decomposition. The results showed that despite the emphasis on decomposition in formal education, only a few designers (N=3) used decomposition explicitly and spontaneously in the presented tasks, although the designers in general applied a top-down control strategy. Instead, inferring from the use of structured strategies, the designers always relied on implicit decomposition. These results confirm the initial observations found in the literature, but they also suggest that decomposition should be investigated further. In the future, the benefits and possibilities of explicit decomposition should be considered along with the cognitive mechanisms behind decomposition. After that, the current results could be reinterpreted.
Abstract:
This study analysed whether the land tenure insecurity problem has led to a decline in long-term land improvements (liming and phosphorus fertilization) under the Common Agricultural Policy (CAP) and Nordic production conditions in European Union (EU) countries such as Finland. The results suggest that under traditional cash lease contracts, which are encouraged by the existing land leasing regulations and agricultural subsidy programs, the land tenure insecurity problem on leased land reduces land improvements that have a long pay-back period. In particular, soil pH was found to be significantly lower on land cultivated under a lease contract than on land owned by the farmers themselves. The results also indicate that the decline in land improvements could not be reversed by land markets, because land owners would otherwise have carried out land improvements even when not farming themselves. To reveal the causality between land tenure and land improvements, the dynamic optimisation problem was solved by a stochastic dynamic programming routine with known parameters for the one-period returns and transition equations. The model parameters represented Finnish soil quality and production conditions. The decision rules were solved for alternative likelihood scenarios over the continuation of the fixed-term lease contract. The results suggest that as the probability of non-renewal of the lease contract increases, farmers quickly reduce investments in irreversible land improvements and, thereafter, yields gradually decline. The simulations reproduced the observed trend of declining land improvements on land parcels cultivated under lease contracts. Insecure land tenure has thus resulted in the neglect of land improvement in Finland. This study also aimed to analyse whether these challenges could be resolved by a tax policy that encourages land sales. Using Finnish data, a real estate tax and a temporary relaxation of the taxation of capital gains showed some potential for restructuring land ownership. Potential sellers who could not be identified by traditional logit models were identified with a latent class approach. Those landowners with an intention to sell even without a policy change were sensitive to a temporary relaxation of the taxation of capital gains. In the long term, productivity and especially productivity growth are necessary conditions for the survival of farms and the food industry in Finland. Technical progress was found to drive the increase in productivity. Scale had only a moderate effect, and over the whole study period (1976–2006) the effect was close to zero. Total factor productivity (TFP) increased, depending on the model, by 0.6–1.7% per year. The results demonstrated that the increase in productivity was hindered by the policy changes introduced in 1995. There is also evidence that the increase in land leasing is connected to these policy changes. Land institutions and land tenure questions are essential in agricultural and rural policies at all levels, from local to international. Land ownership and land titles are commonly tied to fundamental political, economic and social questions. A fair resolution calls for innovative new solutions at both the national and international levels. However, this seems to be a problem when considering the application of EU regulations to member states that inherit divergent landownership structures and farming cultures.
The contribution of this study lies in describing the consequences of fitting EU agricultural policy to Finnish agricultural land tenure conditions and heritage.
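To make the mechanism concrete, here is a minimal value-iteration sketch of the kind of stochastic dynamic program described above, with hypothetical payoff numbers rather than the thesis's calibrated Finnish parameters; states, costs and transitions are illustrative only:

```python
import numpy as np

# Hypothetical discretisation: soil pH states and a lime/no-lime choice.
PH = np.array([5.0, 5.5, 6.0, 6.5])            # soil pH states
YIELD_VALUE = np.array([40., 60., 75., 85.])   # one-period return per state
LIME_COST = 25.0                               # cost of a liming investment
BETA = 0.95                                    # discount factor

def value_iteration(p_renew, tol=1e-8):
    """Solve the farmer's liming problem for a given per-period
    probability that the fixed-term lease is renewed; if the lease
    ends, the farmer loses all future returns from the parcel."""
    n = len(PH)
    V = np.zeros(n)
    idx = np.arange(n)
    while True:
        # No lime: pH decays one step (acidification).
        v_wait = YIELD_VALUE + BETA * p_renew * V[np.maximum(idx - 1, 0)]
        # Lime: pH rises one step, at a cost with a long pay-back period.
        v_lime = (YIELD_VALUE - LIME_COST
                  + BETA * p_renew * V[np.minimum(idx + 1, n - 1)])
        V_new = np.maximum(v_wait, v_lime)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, v_lime > v_wait   # value and liming policy
        V = V_new

# As lease renewal becomes less likely, liming stops being worthwhile.
for p in (1.0, 0.8, 0.5):
    _, lime_policy = value_iteration(p)
    print(p, lime_policy)
```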
Abstract:
The problem of recovering information from measurement data has been studied for a long time. In the beginning the methods were mostly empirical, but towards the end of the 1960s Backus and Gilbert started the development of mathematical methods for the interpretation of geophysical data. The problem of recovering information about a physical phenomenon from measurement data is an inverse problem. Throughout this work, the statistical inversion method is used to obtain a solution. Assuming that the measurement vector is a realization of fractional Brownian motion, the goal is to retrieve the amplitude and the Hurst parameter. We prove that under some conditions, the solution of the discretized problem coincides with the solution of the corresponding continuous problem as the number of observations tends to infinity. Measurement data are usually noisy, and we assume the data to be the sum of two vectors: the trend and the noise. Both vectors are supposed to be realizations of fractional Brownian motions, and the goal is to retrieve their parameters using the statistical inversion method. We prove a partial uniqueness of the solution. Moreover, with the support of numerical simulations, we show that in certain cases the solution is reliable and the reconstruction of the trend vector is quite accurate.
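For orientation, fractional Brownian motion with amplitude sigma^2 and Hurst index H in (0,1) has covariance E[B_H(t) B_H(s)] = (sigma^2 / 2) (t^{2H} + s^{2H} - |t - s|^{2H}). Below is a minimal sketch of recovering (H, sigma^2) from one sampled realization, using a flat-prior maximum-likelihood point estimate as a simplified stand-in for the full statistical (Bayesian) inversion of the thesis; sampling times and function names are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fbm_cov(t, H):
    """Covariance matrix of unit-amplitude fBm at (positive) times t."""
    T, S = np.meshgrid(t, t)
    return 0.5 * (T**(2 * H) + S**(2 * H) - np.abs(T - S)**(2 * H))

def invert_fbm(y, t):
    """Point estimates of the Hurst index H and the amplitude sigma^2,
    profiling sigma^2 out of the Gaussian log-likelihood and searching
    for H on a bounded interval."""
    n = len(y)

    def neg_loglik(H):
        C = fbm_cov(t, H)
        _, logdet = np.linalg.slogdet(C)
        sigma2 = (y @ np.linalg.solve(C, y)) / n   # profile MLE of sigma^2
        return 0.5 * (n * np.log(sigma2) + logdet)

    H_hat = minimize_scalar(neg_loglik, bounds=(0.05, 0.95),
                            method="bounded").x
    sigma2_hat = (y @ np.linalg.solve(fbm_cov(t, H_hat), y)) / n
    return H_hat, sigma2_hat

# Usage with a synthetic realization (true H = 0.7, sigma^2 = 1):
t = np.arange(1, 101) / 100.0
rng = np.random.default_rng(1)
y = np.linalg.cholesky(fbm_cov(t, 0.7)) @ rng.standard_normal(len(t))
print(invert_fbm(y, t))
```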
Abstract:
The object of this dissertation is to study globally defined bounded p-harmonic functions on Cartan-Hadamard manifolds and Gromov hyperbolic metric measure spaces. Such functions are constructed by solving the so-called Dirichlet problem at infinity. This problem is to find a p-harmonic function on the space that extends continuously to the boundary at infinity and attains given boundary values there. The dissertation consists of an overview and three published research articles. In the first article, the Dirichlet problem at infinity is considered for more general A-harmonic functions on Cartan-Hadamard manifolds. In the special case of two dimensions, the Dirichlet problem at infinity is solved assuming only that the sectional curvature has a certain upper bound. A sharpness result is proved for this upper bound. In the second article, the Dirichlet problem at infinity is solved for p-harmonic functions on Cartan-Hadamard manifolds under the assumption that the sectional curvature is bounded outside a compact set from above and from below by functions that depend on the distance to a fixed point. The curvature bounds allow examples of quadratic decay and examples of exponential growth. In the final article, a generalization of the Dirichlet problem at infinity for p-harmonic functions is considered on Gromov hyperbolic metric measure spaces. Existence and uniqueness results are proved, and Cartan-Hadamard manifolds are considered as an application.
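For reference, the standard formulation behind these terms (textbook material, not quoted from the dissertation) is the p-Laplace equation together with the associated boundary-value problem at infinity:

```latex
% u is p-harmonic (1 < p < \infty) if it weakly solves the p-Laplace equation
\Delta_p u := \operatorname{div}\!\left( |\nabla u|^{p-2}\,\nabla u \right) = 0 .
% Dirichlet problem at infinity on a Cartan-Hadamard manifold M with
% boundary at infinity \partial_\infty M: given f \in C(\partial_\infty M), find
u \in C\!\left( M \cup \partial_\infty M \right), \qquad
\Delta_p u = 0 \ \text{in } M, \qquad u\big|_{\partial_\infty M} = f .
```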
Abstract:
In this thesis we study a series of multi-user resource-sharing problems for the Internet, which involve the distribution of a common resource among the participants of multi-user systems (servers or networks). We study concurrently accessible resources, which may be either exclusively or non-exclusively accessible to end-users. For each kind we suggest a separate algorithm or a modification of a common reputation scheme. Every algorithm or method is studied from different perspectives: optimality of the protocol, selfishness of end-users, and fairness of the protocol for end-users. On the one hand, this multifaceted analysis allows us to select the best-suited protocols from a set of available ones based on trade-offs between optimality criteria. On the other hand, predictions about the future Internet dictate new optimality rules that we should take into account, and new properties of networks that can no longer be neglected. In this thesis we have studied new protocols for such resource-sharing problems as the backoff protocol, defense mechanisms against denial-of-service attacks, and fairness and confidentiality for users in overlay networks. For the backoff protocol we present an analysis of a general backoff scheme, where an optimization is applied to a general-form backoff function. It leads to an optimality condition for backoff protocols in both slotted-time and continuous-time models. Additionally, we present an extension of the backoff scheme in order to achieve fairness for the participants in an unfair environment, such as one with varying wireless signal strengths. Finally, for the backoff algorithm we suggest a reputation scheme that deals with misbehaving nodes. For the next problem, denial-of-service attacks, we suggest two schemes that deal with malicious behavior under two conditions: forged identities and unspoofed identities. For the first we suggest a novel most-knocked-first-served algorithm, while for the latter we apply a reputation mechanism in order to restrict resource access for misbehaving nodes. Finally, we study a reputation scheme for overlay and peer-to-peer networks, where the resource is not placed on a common station but spread across the network. The theoretical analysis suggests what behavior will be selected by an end station under such a reputation mechanism.
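As a concrete point of reference, the familiar slotted binary exponential backoff, one special case of the general-form backoff functions analysed here, can be sketched as follows (function names and parameters are illustrative, not the thesis's):

```python
import random

def backoff_window(attempt, base=16, cap=10):
    """Contention-window size after `attempt` collisions: the window
    doubles with every failed attempt and is capped to bound the delay."""
    return base * 2 ** min(attempt, cap)

def send_with_backoff(send, wait_slots, max_attempts=8):
    """Retry `send()` (returns True on success); after each collision,
    wait a uniformly random number of slots from the current window."""
    for attempt in range(max_attempts):
        if send():
            return True
        wait_slots(random.randrange(backoff_window(attempt)))
    return False
```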
Abstract:
The main focus of this study is the epilogue of 4QMMT (4QMiqsat Ma'aseh ha-Torah), a text of obscure genre containing a halakhic section, found in Cave 4 at Qumran. In the official edition published in the series Discoveries in the Judaean Desert (DJD X), the extant document was divided by its editors, Elisha Qimron and John Strugnell, into three literary divisions: Section A, the calendar section representing a 364-day solar calendar; Section B, the halakhot; and Section C, an epilogue. The work begins with a text-critical inspection of the manuscripts containing text from the epilogue (mss 4Q397, 4Q398, and 4Q399). However, since the relationship of the epilogue to the other sections of the whole document 4QMMT is under investigation, the calendrical fragments (4Q327 and 4Q394 3-7, lines 1-3) and the halakhic section also receive some attention, albeit more limited and purpose-oriented. In Ch. 2, after a transcription of the fragments of the epilogue, a synopsis is presented in order to evaluate the composite text of the DJD X edition in light of the evidence provided by the individual manuscripts. As a result, several critical comments are offered and, finally, an alternative arrangement of the fragments of the epilogue with an English translation. In the following chapter (Ch. 3), the diversity of the two main literary divisions, the halakhic section and the epilogue, is discussed, and it is demonstrated that the author(s) of 4QMMT adopted and adjusted the covenantal pattern known from biblical law collections, more specifically Deuteronomy. The question of the genre of 4QMMT is investigated in Ch. 4. The final chapter (Ch. 5) contains an analysis of the use of Scripture in the epilogue. In a close reading, both the explicit citations and the more subtle allusions are investigated in an attempt to trace the theology of the epilogue. The main emphases of the epilogue are covenantal faithfulness, repentance and return. The contents of the document reflect a grave concern for the purity of the cult in Jerusalem, and in the epilogue Deuteronomic language and expressions are used to convince the readers of the necessity of a reformation. The large number of late copies found in Cave 4 at Qumran witnesses to the significance of 4QMMT and the continuing importance of the Jerusalem Temple for the Qumran community.
Abstract:
This thesis aims to identify the role of deposit insurance schemes and the central bank (CB) in keeping the banking system safe. The thesis also studies the factors associated with long-lasting banking crises. The first essay analyzes the effect of using an explicit deposit insurance scheme (EDIS), instead of an implicit deposit insurance scheme (IDIS), on banking crises. The panel data for the period 1980-2003 include all countries for which data on EDIS or IDIS exist. 70% of the countries in the sample are less developed countries (LDCs), and about 55% of the countries adopting EDIS also come from LDCs. The major finding is that the use of EDIS increases the crisis probability at a strong significance level. This probability is greater if the EDIS is inefficiently designed, allowing greater scope for the moral hazard problem. Specifically, the probability is greater if the EDIS provides higher coverage of deposits and if it is less powerful from the legal point of view. This study also finds that the less developed a country handling EDIS is, the higher the chance of a banking crisis. Once the underdevelopment of an economy handling the EDIS is controlled for, the EDIS alone is no longer a significant factor in banking crises. The second essay aims to determine whether a country's powerful CB can lessen the instability of the banking sector by minimizing the likelihood of a banking crisis. The data used include indicators of CB autonomy for a set of countries over the period 1980-89. The study finds that, in aggregate, a more powerful CB lessens the probability of a banking crisis. When the CB's authority is disentangled with respect to its responsibilities, the study finds that a longer tenure for the CB's chief executive officer and greater power for the CB in setting the interest rate on government loans are necessary for reducing the probability of a banking crisis. The study also finds that the probability of crisis is reduced further if an autonomous CB can perform its duties in a country with a stronger law-and-order tradition. The costs of long-lasting banking crises are high because both depositors and investors lose confidence in the banking system. For a rapid recovery from a crisis, the government very often undertakes one or more crisis resolution policy (CRP) measures. The third essay examines the CRP and other explanatory variables correlated with the duration of banking crises. The major finding is that the CRP measure allowing regulatory forbearance to keep insolvent banks operative and the public debt relief program are, respectively, strongly and weakly significant in increasing the duration of crises. Some other explanatory variables, which previous studies found to be related to the probability of crises occurring, are also correlated with the duration of crises.
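As a rough illustration of the kind of estimation involved in the first essay, here is a minimal pooled-logit sketch on synthetic, clearly hypothetical data; variable names are illustrative, not the thesis's, and a real analysis would use the actual 1980-2003 country panel with appropriate panel-data corrections:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical country-year panel: a crisis indicator, an EDIS dummy
# and a couple of controls, all randomly generated for illustration.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "crisis": rng.integers(0, 2, n),
    "edis": rng.integers(0, 2, n),
    "coverage": rng.uniform(0, 1, n),    # deposit-coverage generosity
    "gdp_growth": rng.normal(2, 3, n),
})

X = sm.add_constant(df[["edis", "coverage", "gdp_growth"]])
model = sm.Logit(df["crisis"], X).fit(disp=False)
print(model.summary())  # a positive 'edis' coefficient would indicate a
                        # higher crisis probability under explicit schemes
```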
Abstract:
Regional autonomy in Indonesia was initially introduced as a means of pacifying regional disappointment with the central government. Not only did the Regional Autonomy Law of 1999 give the Balinese a chance to express grievances regarding the centralist policies of the Jakarta government, but it also provided an opportunity to return to regional, exclusive, traditional village governance (desa adat). As a result, the problems faced by the island, particularly ethnic conflicts, are increasingly handled by the mechanisms of this traditional type of governance. Traditional village governance with regard to ethnic conflicts between Balinese and migrants has never been systematically analyzed. Existing analyses emphasize only the social context; they explain neither the causes of the conflicts and the problems they entail nor the virtues of traditional village governance mechanisms for mediating the conflict. While some accounts provide snapshots, they lack both a theoretical and a conflict-studies perspective. The primary aim of this dissertation is to explore the expression and the causes of conflict between the Balinese and migrants, and to assess the potential of traditional village governance as a means of conflict resolution, with particular reference to the municipality of Denpasar. One conclusion of the study is that the conflict between the Balinese and migrants has been expressed on the levels of situation/contradiction, attitudes, and behavior. The driving forces behind the conflict itself consist of the following factors: an absence of cooperation; incompatible positions and perceptions; an inability to communicate effectively; and problems of inequality and injustice, which come to the surface as social, cultural, and economic problems. This complex of factors fuels a collective fear for the future in both groups. The study concludes that traditional village governance mechanisms as a means of conflict resolution have not yet been able to provide an enduring resolution of the conflict. Analysis shows that the practice of traditional village governance is unable to provide satisfactory mechanisms for the conflict as prescribed by conflict resolution theory. Traditional village governance, which is derived from the exclusive Hindu-Balinese culture, is accepted as more legitimate among the Balinese than the official governance policies. However, it is not generally accepted by most of the Muslim migrants. In addition, traditional village governance lacks access to economic instruments, which weakens its capacity to tackle the economic roots of the conflict. Thus the traditional mechanisms of "migrant ordinance", as practiced by traditional village governance, have not yet been successful in penetrating all aspects of the conflict. Finally, one of the main challenges for the legal development of traditional village governance is the creation of a regional legal system capable of accommodating rapid changes in line with national and international legal practice. The framing of the new laws should be responsive to the aspirations of a changing society. It should protect not only the interests of the various Balinese communities but also those of other ethnic groups, especially minorities. In other words, the main challenge for traditional village governance is its ability to develop flexibility and inclusiveness.
Abstract:
MEG directly measures neuronal events and has greater temporal resolution than fMRI, whose temporal resolution is limited mainly by the larger timescale of the hemodynamic response. On the other hand, fMRI has advantages in spatial resolution, while localization results with MEG can be ambiguous due to the non-uniqueness of the electromagnetic inverse problem. Thus, these methods can provide complementary information and can be used to create both spatially and temporally accurate models of brain function. We investigated the degree of overlap, revealed by the two imaging methods, in areas involved in sensory or motor processing in healthy subjects and neurosurgical patients. Furthermore, we used the spatial information from fMRI to construct a spatiotemporal model of the MEG data in order to investigate the sensorimotor system and to create a spatiotemporal model of its function. We compared the localization results from MEG and fMRI with invasive electrophysiological cortical mapping. We used a recently introduced method, contextual clustering, for hypothesis testing of fMRI data, and assessed the effect of using neighbourhood information on the reproducibility of fMRI results. Using MEG, we identified the ipsilateral primary sensorimotor cortex (SMI) as a novel source area contributing to the somatosensory evoked fields (SEFs) elicited by median nerve stimulation. Using combined MEG and fMRI measurements, we found that two separate areas in the lateral fissure may be the generators of the SEF responses from the secondary somatosensory cortex region. The two imaging methods indicated activation in corresponding locations. By using complementary information from MEG and fMRI, we established a spatiotemporal model of somatosensory cortical processing. This spatiotemporal model of cerebral activity was in good agreement with results from several studies using invasive electrophysiological measurements, and with anatomical studies in monkey and man concerning the connections between somatosensory areas. In neurosurgical patients, the MEG dipole model turned out to be more reliable than fMRI in the identification of the central sulcus. This was due to prominent activation of non-primary areas in fMRI, which in some cases led to erroneous or ambiguous localization of the central sulcus.
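As background on why MEG source localization is non-unique (a standard textbook formulation, not the thesis's notation): the measurement depends linearly on the source amplitudes through a lead-field matrix, and anything in its null space leaves the data unchanged:

```latex
% Forward model: sensor readings b, lead-field matrix L (from the head
% model), source/dipole amplitudes q, measurement noise n:
\mathbf{b} = L\mathbf{q} + \mathbf{n} .
% Non-uniqueness of the inverse problem: for any q_0 with L q_0 = 0,
L(\mathbf{q} + \mathbf{q}_0) = L\mathbf{q} ,
% so constraints such as a small number of current dipoles, or spatial
% priors from fMRI, are needed to select a unique source estimate.
```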
Abstract:
Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and the resolution of the models. Present-day NWP systems operate with horizontal model resolutions in the range from about 40 km to 10 km. Recently, the aim has been to reach scales of 1-4 km operationally. This requires fewer approximations in the model equations, more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution mesoscale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improvement of model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For clear-sky longwave radiation parameterization, the schemes used in NWP models provide much better results than simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive in producing fairly accurate surface fluxes. Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent for both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. In the case of longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional mesoscale model experiments suggest that the reason for the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is an inertial oscillation mechanism operating when the large-scale flow is from south-easterly or westerly directions. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment, with a 7.7 km grid size, is able to generate an LLJ flow structure similar to that suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is critically important, especially if the inner mesoscale model domain is small.
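For context, the inertial oscillation mechanism invoked here is usually summarised by the standard boundary-layer momentum equations (textbook material, not taken from the thesis): once friction drops out of the momentum balance, the ageostrophic wind rotates at the Coriolis frequency, producing a supergeostrophic low-level wind maximum:

```latex
% Frictionless momentum equations relative to the geostrophic wind (u_g, v_g):
\frac{du}{dt} = f\,(v - v_g), \qquad \frac{dv}{dt} = -f\,(u - u_g).
% The ageostrophic wind vector therefore rotates with the inertial period
T = \frac{2\pi}{f},
% and the wind speed periodically overshoots its geostrophic value,
% which appears as a low-level jet.
```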