19 results for interval-valued fuzzy sets


Relevance: 20.00%

Abstract:

The topic of my doctoral thesis is to demonstrate the usefulness of incorporating tonal and modal elements into a pitch-web square analysis of Béla Bartók's (1881-1945) opera, 'A kékszakállú herceg vára' ('Duke Bluebeard's Castle'). My specific goal is to demonstrate that different musical materials, which exist as foreground melodies or long-term key progressions, are unified by the unordered pitch set {0,1,4}, which becomes prominent in different sections of Bartók's opera. In Bluebeard's Castle, the set {0,1,4} is also found as a subset of several tetrachords: {0,1,4,7}, {0,1,4,8}, and {0,3,4,7}. My claim is that {0,1,4} serves to link music materials between themes, between sections, and also between scenes. This study develops an analytical method, drawn from various theoretical perspectives, for conceiving superposed diatonic spaces within a hybrid pitch-space comprised of diatonic and chromatic features. The integrity of diatonic melodic lines is retained, which allows for a non-reductive understanding of diatonic superposition, without appealing to pitch centers or specifying complete diatonic collections. Through combining various theoretical insights of the Hungarian scholar Ernő Lendvai, and the American theorists Elliott Antokoletz, Paul Wilson and Allen Forte, as well as the composer himself, this study gives a detailed analysis of the opera's pitch material in a way that combines, complements, and expands upon the studies of those scholars. The analyzed pitch sets are represented on Aarre Joutsenvirta's note-web square, which adds a new aspect to the field of Bartók analysis. Keywords: Bartók, Duke Bluebeard's Castle (Op. 11), Ernő Lendvai, axis system, Elliott Antokoletz, intervallic cycles, intervallic cells, Allen Forte, set theory, interval classes, interval vectors, Aarre Joutsenvirta, pitch-web square, pitch-web analysis.
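
As an aside for readers unfamiliar with pitch-class set notation, the short sketch below checks computationally that some transposition or inversion of {0,1,4} is contained in each of the three tetrachords named above. It only illustrates the subset relation; it is not the author's pitch-web method.

```python
# Illustrative sketch: verify that the unordered set {0,1,4} occurs, up to
# transposition or inversion (mod 12), inside each tetrachord named above.

def transformations(pcs):
    """All transpositions and inversions of a pitch-class set (mod 12)."""
    forms = set()
    for t in range(12):
        forms.add(frozenset((p + t) % 12 for p in pcs))   # transposition T_t
        forms.add(frozenset((t - p) % 12 for p in pcs))   # inversion, then T_t
    return forms

cell = {0, 1, 4}
tetrachords = [{0, 1, 4, 7}, {0, 1, 4, 8}, {0, 3, 4, 7}]

for tetra in tetrachords:
    contained = any(form <= frozenset(tetra) for form in transformations(cell))
    print(sorted(tetra), "contains a form of {0,1,4}:", contained)
```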

Relevance: 20.00%

Abstract:

Failures in industrial organizations dealing with hazardous technologies can have widespread consequences for the safety of the workers and the general population. Psychology can have a major role in contributing to the safe and reliable operation of these technologies. Most current models of safety management in complex sociotechnical systems such as nuclear power plant maintenance are either non-contextual or based on an overly rational image of an organization. Thus, they fail to grasp either the actual requirements of the work or the socially constructed nature of the work in question. The general aim of the present study is to develop and test a methodology for contextual assessment of organizational culture in complex sociotechnical systems. This is done by demonstrating the findings that the application of the emerging methodology produces in the domain of maintenance of a nuclear power plant (NPP). The concepts of organizational culture and organizational core task (OCT) are operationalized and tested in the case studies. We argue that when the complexity of the work, technology and social environment is increased, the significance of the most implicit features of organizational culture as a means of coordinating the work and achieving safety and effectiveness of the activities also increases. For this reason a cultural perspective could provide additional insight into the problem of safety management. The present study aims to determine: (1) the elements of the organizational culture in complex sociotechnical systems; (2) the demands the maintenance task sets for the organizational culture; (3) how the current organizational culture at the case organizations supports the perception and fulfilment of the demands of the maintenance work; (4) the similarities and differences between the maintenance cultures at the case organizations; and (5) the requirements for assessing the organizational culture in complex sociotechnical systems. Three in-depth case studies were carried out at the maintenance units of three Nordic NPPs. The case studies employed an iterative and multimethod research strategy. The following methods were used: interviews, the CULTURE survey, seminars, document analysis and group work. Both cultural analysis and task modelling were carried out. The results indicate that organizational culture in complex sociotechnical systems can be characterised according to three qualitatively different elements: structure, internal integration and conceptions. All three of these elements of culture, as well as their interrelations, have to be considered in organizational assessments or important aspects of the organizational dynamics will be overlooked. On the basis of OCT modelling, the maintenance core task was defined as balancing between three critical demands: anticipating the condition of the plant and conducting preventive maintenance accordingly, reacting to unexpected technical faults, and monitoring and reflecting on the effects of maintenance actions and the condition of the plant. The results indicate that safety was highly valued at all three plants, and in that sense they all had strong safety cultures. In other respects the cultural features were quite different, and thus the culturally accepted means of maintaining high safety also differed. The handicraft nature of maintenance work was emphasised as a source of identity at the NPPs. Overall, the importance of safety was taken for granted, but the cultural norms concerning the appropriate means to guarantee it were seldom reflected upon. A sense of control, personal responsibility and organizational changes emerged as challenging issues at all the plants. The study shows that in complex sociotechnical systems it is both necessary and possible to analyse the safety and effectiveness of the organizational culture. Safety in complex sociotechnical systems cannot be understood or managed without understanding the demands of the organizational core task and managing the dynamics between the three elements of the organizational culture.

Relevance: 20.00%

Abstract:

Objectives. The sentence span task is a complex working memory span task used for estimating total working memory capacity for both processing (sentence comprehension) and storage (remembering a set of words). Several traditional models of working memory suggest that performance on these tasks relies on phonological short-term storage. However, long-term memory effects as well as the effects of expertise and strategies have challenged this view. This study uses a working memory task that aids the creation of retrieval structures in the form of stories, which have been shown to form integrated structures in long-term memory. The research question is whether sentence and story contexts boost memory performance in a complex working memory task. The hypothesis is that storage of the words in the task takes place in long-term memory. Evidence of this would be better recall for words as parts of sentences than for separate words, and, particularly, a beneficial effect for words as part of an organized story. Methods. Twenty stories consisting of five sentences each were constructed, and the stimuli in all experimental conditions were based on these sentences and sentence-final words, reordered and recombined for the other conditions. Participants read aloud sets of five sentences that either formed a story or not. In one condition they had to report all the last words at the end of the set; in another, they memorised an additional separate word with each sentence. The sentences were presented on the screen one word at a time (500 ms). After the presentation of each sentence, the participant verified a statement about the sentence. After five sentences, the participant repeated back the words in the correct positions. Experiment 1 (n=16) used immediate recall; Experiment 2 (n=21) used both immediate recall and recall after a distraction interval (the operation span task). In Experiment 2 a distracting mental arithmetic task was presented instead of recall in half of the trials, and an individual word was added before each sentence in the two experimental conditions in which the participants were to memorise the sentence-final words. Participants also performed a listening span task (Experiment 1) or an operation span task (Experiment 2) to allow comparison of the estimated span and performance in the story task. Results were analysed using correlations, repeated measures ANOVA and a chi-square goodness-of-fit test on the distribution of errors. Results and discussion. Both the relatedness of the sentences (the story condition) and the inclusion of the words into sentences helped memory. An interaction showed that the story condition had a greater effect on last words than on separate words. The beneficial effect of the story was shown in all serial positions. The effects remained in delayed recall. When the sentences formed stories, performance in the verification of the statements about the sentences was better. This, as well as the differing distributions of errors in the different experimental conditions, suggests that different levels of representation are in use in the different conditions. In the story condition, these representations could take the form of an organized memory structure, a situation model. The other working memory tasks had only a few weak correlations with the story task, which could indicate that different processes are in use in the tasks. The results do not support short-term phonological storage, but are instead compatible with the words being encoded into long-term memory during the task.
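
As a minimal illustration of the chi-square goodness-of-fit test mentioned above, the sketch below compares a hypothetical distribution of recall errors across categories with a uniform null distribution; the counts and category labels are invented for illustration only.

```python
# Hedged sketch of a chi-square goodness-of-fit test on an error distribution.
from scipy.stats import chisquare

observed = [34, 21, 12, 8]          # e.g. omission, order, intrusion, other (invented counts)
total = sum(observed)
expected = [total / len(observed)] * len(observed)   # uniform-distribution null

stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")
```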

Relevance: 20.00%

Abstract:

This study addresses three important issues in tree bucking optimization in the context of cut-to-length harvesting. (1) Would the fit between the log demand and log output distributions be better if the price and/or demand matrices controlling the bucking decisions on modern cut-to-length harvesters were adjusted to the unique conditions of each individual stand? (2) In what ways can we generate stand and product specific price and demand matrices? (3) What alternatives do we have to measure the fit between the log demand and log output distributions, and what would be an ideal goodness-of-fit measure? Three iterative search systems were developed for seeking stand-specific price and demand matrix sets: (1) A fuzzy logic control system for calibrating the price matrix of one log product for one stand at a time (the stand-level one-product approach); (2) a genetic algorithm system for adjusting the price matrices of one log product in parallel for several stands (the forest-level one-product approach); and (3) a genetic algorithm system for dividing the overall demand matrix of each of the several log products into stand-specific sub-demands simultaneously for several stands and products (the forest-level multi-product approach). The stem material used for testing the performance of the stand-specific price and demand matrices against that of the reference matrices was comprised of 9 155 Norway spruce (Picea abies (L.) Karst.) sawlog stems gathered by harvesters from 15 mature spruce-dominated stands in southern Finland. The reference price and demand matrices were either direct copies or slightly modified versions of those used by two Finnish sawmilling companies. Two types of stand-specific bucking matrices were compiled for each log product. One was from the harvester-collected stem profiles and the other was from the pre-harvest inventory data. Four goodness-of-fit measures were analyzed for their appropriateness in determining the similarity between the log demand and log output distributions: (1) the apportionment degree (index), (2) the chi-square statistic, (3) Laspeyres quantity index, and (4) the price-weighted apportionment degree. The study confirmed that any improvement in the fit between the log demand and log output distributions can only be realized at the expense of log volumes produced. Stand-level pre-control of price matrices was found to be advantageous, provided the control is done with perfect stem data. Forest-level pre-control of price matrices resulted in no improvement in the cumulative apportionment degree. Cutting stands under the control of stand-specific demand matrices yielded a better total fit between the demand and output matrices at the forest level than was obtained by cutting each stand with non-stand-specific reference matrices. The theoretical and experimental analyses suggest that none of the three alternative goodness-of-fit measures clearly outperforms the traditional apportionment degree measure. Keywords: harvesting, tree bucking optimization, simulation, fuzzy control, genetic algorithms, goodness-of-fit
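
As an illustration of the first goodness-of-fit measure listed above, the sketch below computes an apportionment degree between a log demand distribution and a log output distribution in the way it is commonly defined (the summed class-wise overlap of two percentage distributions). The figures are invented, and the exact definition used in the study may differ.

```python
# Hedged sketch: apportionment degree between demand and output distributions,
# both expressed as percentage shares over the same log dimension classes
# (100 = perfect fit). All numbers below are illustrative.
import numpy as np

demand_share = np.array([10.0, 25.0, 40.0, 20.0, 5.0])   # % of demand per class
output_share = np.array([12.0, 20.0, 38.0, 22.0, 8.0])   # % of output per class

def apportionment_degree(demand, output):
    """Sum of class-wise minima of the two percentage distributions."""
    return float(np.minimum(demand, output).sum())

print(f"apportionment degree = {apportionment_degree(demand_share, output_share):.1f} %")
```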

Relevance: 20.00%

Abstract:

The topic of this dissertation lies in the intersection of harmonic analysis and fractal geometry. We particularly consider singular integrals in Euclidean spaces with respect to general measures, and we study how the geometric structure of the measures affects certain analytic properties of the operators. The thesis consists of three research articles and an overview. In the first article we construct singular integral operators on lower dimensional Sierpinski gaskets associated with homogeneous Calderón-Zygmund kernels. While these operators are bounded, their principal values fail to exist almost everywhere. Conformal iterated function systems generate a broad range of fractal sets. In the second article we prove that many of these limit sets are porous in a very strong sense, by showing that they contain holes spread in every direction. We then connect these results with singular integrals: we exploit the fractal structure of these limit sets in order to establish that singular integrals associated with very general kernels converge weakly. Boundedness questions constitute a central topic of investigation in the theory of singular integrals. In the third article we study singular integrals with respect to two different measures. We prove a very general boundedness result in the case where the two underlying measures are separated by a Lipschitz graph. As a consequence we show that a certain weak convergence holds for a large class of singular integrals.
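
For orientation, the objects referred to above can be written out in standard form; these are textbook definitions, and the precise kernel classes and normalizations used in the thesis may differ.

```latex
% Truncated singular integral of f with respect to a measure \mu and kernel K:
\[
  T_{\varepsilon}f(x) = \int_{|x-y|>\varepsilon} K(x-y)\,f(y)\,d\mu(y),
\]
% and its principal value, whose almost-everywhere existence is the question
% studied on the lower dimensional Sierpinski gaskets:
\[
  \operatorname{p.v.}\,Tf(x) = \lim_{\varepsilon\to 0} T_{\varepsilon}f(x).
\]
% A kernel K is homogeneous of degree -s if K(\lambda x) = \lambda^{-s}K(x)
% for all \lambda > 0.
```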

Relevance: 20.00%

Abstract:

A composition operator is a linear operator between spaces of analytic or harmonic functions on the unit disk, which precomposes a function with a fixed self-map of the disk. A fundamental problem is to relate properties of a composition operator to the function-theoretic properties of the self-map. In recent decades these operators have been studied very actively in connection with various function spaces. The study of composition operators lies in the intersection of two central fields of mathematical analysis: function theory and operator theory. This thesis consists of four research articles and an overview. In the first three articles the weak compactness of composition operators is studied on certain vector-valued function spaces. A vector-valued function takes its values in some complex Banach space. In the first and third articles sufficient conditions are given for a composition operator to be weakly compact on different versions of vector-valued BMOA spaces. In the second article characterizations are given for the weak compactness of a composition operator on harmonic Hardy spaces and spaces of Cauchy transforms, provided the functions take values in a reflexive Banach space. Composition operators are also considered on certain weak versions of the above function spaces. In addition, the relationship between different vector-valued function spaces is analyzed. In the fourth article weighted composition operators are studied on the scalar-valued BMOA space and its subspace VMOA. A weighted composition operator is obtained by first applying a composition operator and then a pointwise multiplier. A complete characterization is given for the boundedness and compactness of a weighted composition operator on BMOA and VMOA. Moreover, the essential norm of a weighted composition operator on VMOA is estimated. These results generalize many previously known results about composition operators and pointwise multipliers on these spaces.
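
The two kinds of operators discussed above have short standard definitions, sketched below; the symbols φ (a self-map of the disk) and u (the multiplier) are illustrative notation, not taken from the thesis.

```latex
% Composition operator induced by a self-map \varphi of the unit disk:
\[
  C_{\varphi} f = f \circ \varphi .
\]
% Weighted composition operator: a composition followed by pointwise
% multiplication by a fixed function u:
\[
  u C_{\varphi} f = u \cdot ( f \circ \varphi ).
\]
```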

Relevance: 20.00%

Abstract:

In this thesis we study a few games related to non-wellfounded and stationary sets. Games have turned out to be an important tool in mathematical logic, ranging from semantic games, which define the truth of a sentence in a given logic, to games on real numbers, whose determinacy is closely tied to the consistency of certain large cardinal assumptions. The equality of non-wellfounded sets can be determined by a so-called bisimulation game, already used to identify processes in theoretical computer science and possible-world models in modal logic. Here we present a game to classify non-wellfounded sets according to their branching structure. We also describe a way to approximate non-wellfounded sets with hereditarily finite wellfounded sets; the framework used to do this is domain theory. We then study games on stationary sets, moving back to classical wellfounded set theory. In the Banach-Mazur game, also called the ideal game, the players play a descending sequence of stationary sets, and the second player tries to keep their intersection stationary. The game is connected to the precipitousness of the corresponding ideal. In the pressing-down game the first player plays regressive functions defined on stationary sets, and the second player responds with a stationary set on which the function is constant, again trying to keep the intersection stationary. This game has applications in model theory to the determinacy of the Ehrenfeucht-Fraïssé game. We show that it is consistent that these games are not equivalent.
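
A schematic restatement of the two games on stationary sets may help; the notation below is illustrative and simply formalizes the rules given above for stationary subsets of a regular uncountable cardinal κ.

```latex
% Banach--Mazur (ideal) game: the players alternately choose stationary sets
\[
  S_0 \supseteq S_1 \supseteq S_2 \supseteq \cdots,
\]
% and the second player wins iff \bigcap_n S_n is stationary.
% Pressing-down game: the first player plays regressive functions
% f_n : S_n \to \kappa (i.e. f_n(\alpha) < \alpha for all \alpha), the second
% player answers with a stationary S_{n+1} \subseteq S_n on which f_n is
% constant, and again wins iff the intersection of the sets is stationary.
```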

Relevance: 20.00%

Abstract:

Analyzing statistical dependencies is a fundamental problem in all empirical science. Dependencies help us understand causes and effects, create new scientific theories, and invent cures to problems. Nowadays, large amounts of data are available, but efficient computational tools for analyzing the data are missing. In this research, we develop efficient algorithms for a commonly occurring search problem - searching for the statistically most significant dependency rules in binary data. We consider dependency rules of the form X->A or X->not A, where X is a set of positive-valued attributes and A is a single attribute. Such rules describe which factors either increase or decrease the probability of the consequent A. Classical examples are genetic and environmental factors, which can either cause or prevent a disease. The emphasis in this research is that the discovered dependencies should be genuine - i.e. they should also hold in future data. This is an important distinction from traditional association rules, which - in spite of their name and a similar appearance to dependency rules - do not necessarily represent statistical dependencies at all, or represent only spurious connections that occur by chance. Therefore, the principal objective is to search for the rules with statistical significance measures. Another important objective is to search for only non-redundant rules, which express the real causes of the dependence, without any occasional extra factors. The extra factors do not add any new information on the dependence, but can only blur it and make it less accurate in future data. The problem is computationally very demanding, because the number of all possible rules increases exponentially with the number of attributes. In addition, neither statistical dependency nor statistical significance is a monotonic property, which means that the traditional pruning techniques do not work. As a solution, we first derive the mathematical basis for pruning the search space with any well-behaving statistical significance measure. The mathematical theory is complemented by a new algorithmic invention, which enables an efficient search without any heuristic restrictions. The resulting algorithm can be used to search for both positive and negative dependencies with any commonly used statistical measures, like Fisher's exact test, the chi-squared measure, mutual information, and z scores. According to our experiments, the algorithm scales well, especially with Fisher's exact test. It can easily handle even the densest data sets with 10000-20000 attributes. Still, the results are globally optimal, which is a remarkable improvement over the existing solutions. In practice, this means that the user does not have to worry about whether the dependencies will hold in future data or whether the data still contains better, but undiscovered, dependencies.
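
As an illustration of the kind of significance measure mentioned above, the sketch below scores a single candidate rule X -> A on a toy binary data set with Fisher's exact test via scipy. The data and attribute indices are invented, and the thesis's actual search algorithm is not reproduced here.

```python
# Hedged sketch: p-value of one dependency rule (attr0 AND attr1) -> attr2
# on a small invented binary data matrix, using Fisher's exact test.
import numpy as np
from scipy.stats import fisher_exact

# rows = observations, columns = binary attributes (invented data)
data = np.array([
    [1, 1, 1],
    [1, 1, 1],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 1, 1],
])
X_cols, A_col = [0, 1], 2            # rule: (attr0 AND attr1) -> attr2

x_true = data[:, X_cols].all(axis=1)
a_true = data[:, A_col] == 1

# 2x2 contingency table: [[X&A, X&notA], [notX&A, notX&notA]]
table = [
    [int((x_true & a_true).sum()),  int((x_true & ~a_true).sum())],
    [int((~x_true & a_true).sum()), int((~x_true & ~a_true).sum())],
]
odds, p = fisher_exact(table, alternative="greater")   # positive dependency
print("table:", table, "p-value:", round(p, 4))
```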

Relevance: 20.00%

Abstract:

The previous academic research on Finnish peacekeeping has clarified the operative and historical aspects of Finnish peacekeeping but has lacked the view of the individual who does the actual peacekeeping work. This research is based on the underlying theoretical assumption that human beings possess different kinds of talents and intelligences which together form a holistic entity. In this broad perspective, spirituality was explored as an umbrella concept, a holistic ability or talent that can be regarded as the deepest aspect of what it means to be human. The theoretical framework incorporated the concept of an intelligence, which is defined in Gardner's theory of multiple intelligences as the ability to solve problems, or to create products, that are valued within one or more cultural settings (Gardner, 1993, x). The viability of this theory was studied in a sample of Finnish peacekeepers. On the theoretical and conceptual level, spirituality was viewed as an extension of Gardner's theory, as one potential candidate intelligence. In addition to Gardner's theory, spirituality was explored as sensitivity, which includes capacities such as sensing awareness, sensing mystery and sensing value (Hay, 1998). The practical aspects of spirituality were also taken into account, as shown in our everyday lives, giving us direction and influencing our social responsibilities and concerns (Bradford, 1995). Spirituality was also explored in relation to the peacekeepers' community, personal moral orientations, and the domain of religion and coping. This research had two aims. First, the aim was to outline the intelligence profile and the spiritual sensitivity profile of peacekeepers. Second, the aim was to understand qualitatively the nature of peacekeepers' spirituality. These research interests were studied with different kinds of peacekeepers. Applying a mixed methods approach, the research was conducted in two phases: first, former SFOR peacekeepers (N=6) were interviewed and the data were analysed. Inspired by the primary findings of these interviews, the data for a case study of one peacekeeper were collected in co-operation with one former SFOR peacekeeper (N=1). In the second phase, data were collected from KFOR peacekeepers through the quantitative MI-Survey and the spiritual sensitivity survey (N=195). The quantitative method was used to outline the intelligence profile and the spiritual sensitivity profile of peacekeepers (N=195); in the mixed methods approach this method provided a general overview of the intelligence traits and spiritual sensitivity of peacekeepers. The qualitative method, including the interviews (N=6) and the case study of one peacekeeper (N=1), added subjective, qualitative information on the spirituality of peacekeepers. The intelligence profile of peacekeepers highlighted the bodily-kinesthetic and interpersonal dimensions as the practical and social aspects of peacekeepers. Strong inter-item dependencies in the intrapersonal intelligence profile indicated that peacekeepers possess a self-reflection and self-knowledge component and that they reflect on deep psychological and philosophical issues. Regarding spiritual sensitivity, peacekeepers found awareness-sensing, mystery-sensing, value-sensing and community-sensing important. The community-sensing emphasised a strong will to advance peace and to help people who are in need: things that are close to the heart of the peacekeepers. These results depicted practicality, being socially capable, and reflecting on one's inner world as essential to peacekeepers. Moreover, spirituality as the peacekeepers' moral endeavour became clearer because the sub-model of their community-sensing described morally charged goals: advancing peace and helping people in need. In the qualitative findings, peacekeepers articulated a justice orientation and rule-following as characterising the nature of their moral attitude and moral call (Kohlberg, 1969). An ethic of care (Gilligan, 1982) describes a mainly female moral orientation, but the findings revealed that an ethic of care is also an important agent that strongly supports male peacekeepers in their aim to carry out qualitatively good peacekeeping work. The moral endeavour was also voiced when the role of religion in coping meant the assessment of a way of life, a way of conduct, and a way of being truthful to one's own values in confusing surroundings. The practical level of spiritual and religious contemplation was voiced as a morally charged inner motivation to fulfil one's duties and at the same time to cope with various peacekeeping challenges. The results of the different data sets were combined and interpreted as the moral endeavour that characterises peacekeepers' spirituality. Combining these results, peacekeepers' spirituality is considered moral, or at least morally charged.

Relevance: 20.00%

Abstract:

Radiation therapy (RT) currently plays a significant role in the curative treatment of several cancers. External beam RT is mostly carried out using megavoltage beams from linear accelerators. Tumor eradication and normal tissue complications correlate with the dose absorbed in tissues. Normally this dependence is steep, and it is crucial that the actual dose within the patient accurately corresponds to the planned dose. All factors in an RT procedure involve uncertainties, requiring strict quality assurance. From the hospital physicist's point of view, technical quality control (QC), dose calculations and methods for verifying the correct treatment location are the most important subjects. The most important factor in technical QC is verifying that the radiation production of an accelerator, called its output, is within narrow acceptable limits. The output measurements are carried out according to a locally chosen dosimetric QC program that defines the measurement time interval and action levels. Dose calculation algorithms need to be configured for the accelerators using measured beam data. The uncertainty of these data sets the limit on the best achievable calculation accuracy. All these dosimetric measurements require considerable experience, are laborious, take up resources needed for treatments and are prone to several random and systematic sources of error. Appropriate verification of the treatment location is more important in intensity modulated radiation therapy (IMRT) than in conventional RT. This is due to the steep dose gradients produced within or close to healthy tissues located only a few millimetres from the targeted volume. The thesis concentrated on the investigation of the quality of dosimetric measurements, the efficacy of dosimetric QC programs, the verification of measured beam data and the effect of positional errors on the dose received by the major salivary glands in head and neck IMRT. A method was developed for estimating the effect of the use of different dosimetric QC programs on the overall uncertainty of dose, and data were provided to facilitate the choice of a sufficient QC program. The method takes into account the local output stability and the reproducibility of the dosimetric QC measurements. A method based on model fitting of the results of the QC measurements was proposed for the estimation of both of these factors. The reduction of random measurement errors and the optimization of the QC procedure were also investigated, and a method and suggestions were presented for these purposes. The accuracy of beam data was evaluated in Finnish RT centres, and a sufficient accuracy level was estimated for the beam data. A method based on the use of reference beam data was developed for the QC of beam data. Dosimetric and geometric accuracy requirements were evaluated for head and neck IMRT when the function of the major salivary glands is to be spared; these criteria are based on the dose response obtained for the glands. Random measurement errors could be reduced, enabling lower action levels and prolongation of the measurement time interval from 1 month to as much as 6 months while maintaining dose accuracy. The combined effect of the proposed methods, suggestions and criteria was found to facilitate the avoidance of maximal dose errors of up to about 8%. In addition, their use may make the strictest recommended overall dose accuracy level of 3% (1 SD) achievable.
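
The model-fitting idea mentioned above can be illustrated with a minimal sketch: fit a simple linear drift model to a series of output QC measurements and take the residual scatter as a reproducibility estimate. The numbers and the choice of a linear model are assumptions for illustration only, not the thesis's exact method.

```python
# Hedged sketch: estimate output drift (stability) and measurement
# reproducibility from a series of accelerator output QC checks.
import numpy as np

days   = np.array([0, 30, 60, 90, 120, 150, 180])                     # time of QC check (invented)
output = np.array([100.0, 100.2, 100.1, 100.4, 100.3, 100.6, 100.5])  # measured output, % of baseline

drift_per_day, intercept = np.polyfit(days, output, deg=1)            # linear trend = drift
residuals = output - (drift_per_day * days + intercept)
reproducibility_sd = residuals.std(ddof=2)                             # residual SD after removing the trend

print(f"estimated drift: {drift_per_day * 30:.2f} % per month")
print(f"estimated reproducibility (1 SD): {reproducibility_sd:.2f} %")
```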

Relevance: 20.00%

Abstract:

The most prominent objective of the thesis is the development of generalized descriptive set theory, as we call it. There, we study the space of all functions from a fixed uncountable cardinal to itself, or to a finite set of size two. These correspond to generalized notions of the Baire space (functions from the natural numbers to themselves, with the product topology) and the Cantor space (functions from the natural numbers to the two-element set {0,1}), respectively. We generalize the notion of Borel sets in three different ways and study the corresponding Borel structures with the aims of generalizing classical theorems of descriptive set theory or providing counterexamples. In particular we are interested in equivalence relations on these spaces and their Borel reducibility to each other. The last chapter shows, using game-theoretic techniques, that the order of Borel equivalence relations under Borel reducibility has very high complexity. The techniques used on the set-theoretic side of the thesis include forcing, general topological notions such as meager sets, and combinatorial games of infinite length. By coding uncountable models into functions, we are able to apply generalized descriptive set theory to the model theory of uncountable models. The links between the theorems of model theory (including Shelah's classification theory) and the theorems in pure set theory are provided by game-theoretic techniques, ranging from Ehrenfeucht-Fraïssé games in model theory to cub games in set theory. The bottom line of the research is that the descriptive (set-theoretic) complexity of the isomorphism relation of a first-order definable model class goes hand in hand with the stability-theoretic complexity of the corresponding first-order theory. The first chapter of the thesis has a slightly different focus and is purely concerned with a certain modification of the well-known Ehrenfeucht-Fraïssé games. There we (my supervisor Tapani Hyttinen and I) answer some natural questions about that game, mainly concerning its determinacy and its relation to the standard EF game.
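
For concreteness, the generalized spaces described above can be written out as follows; the topology shown is one common choice in generalized descriptive set theory, and the details in the thesis may differ.

```latex
% Generalized Baire space over an uncountable cardinal \kappa:
\[
  \kappa^{\kappa} = \{\, f \mid f : \kappa \to \kappa \,\},
\]
% with basic open sets determined by initial segments of length less than \kappa,
\[
  N_p = \{\, f \in \kappa^{\kappa} \mid p \subseteq f \,\}, \qquad
  p : \alpha \to \kappa, \;\; \alpha < \kappa .
\]
% The generalized Cantor space 2^{\kappa} is the subspace of functions taking
% values in \{0,1\}.
```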