26 results for Nonlinear problems
Abstract:
In this study I offer a diachronic solution for a number of difficult inflectional endings in the Old Church Slavic nominal declensions. In this context I address perhaps the most disputed and most important question of Slavic nominal inflectional morphology: whether there was in Proto-Slavic an Auslautgesetz (ALG), a law of final syllables, that narrowed the Proto-Indo-European vowel */o/ to */u/ in closed word-final syllables. In addition, the work contains an exhaustive morphological classification of the nouns and adjectives that occur in canonical Old Church Slavic. I argue that Proto-Indo-European */o/ became Proto-Slavic */u/ before word-final */s/ and */N/. This conclusion is based on the impossibility of finding credible analogical (as opposed to phonological) explanations for the forms supporting the ALG hypothesis, and on the survival of the neuter gender in Slavic. It is not likely that the */o/-stem nominative singular ending */-u/ was borrowed from the accusative singular, because the latter would have been the only paradigmatic form with the stem vowel */-u-/. It is equally unlikely that the ending */-u/ was borrowed from the */u/-stems, because the latter constituted a moribund class. The motivation usually given for such an analogical borrowing, i.e. a need to prevent the merger of */o/-stem masculines with neuters of the same class, is not tenable. Extra-Slavic as well as intra-Slavic evidence suggests that phonologically triggered mergers between two semantically opaque genders tend not to be prevented, but rather that such mergers lead to the loss of the gender opposition in question. On the other hand, if */-os/ had not become */-us/, most nouns and, most importantly, all adjectives and pronouns would have lost the formal distinction between masculines and neuters. This would necessarily have resulted in the loss of the neuter gender. A new explanation is given for the most apparent piece of evidence against the ALG hypothesis, the nominative-accusative singular of the */es/-stem neuters, e.g. nebo 'sky'. I argue that it arose in late Proto-Slavic dialects, replacing the regular nebe, under the influence of the */o/- and */yo/-stems, where a correlation had emerged between a hard root-final consonant and the termination -o, on the one hand, and a soft root-final consonant and the termination -e, on the other.
Abstract:
Due to the improved prognosis of many forms of cancer, an increasing number of cancer survivors are willing to return to work after their treatment. It is generally believed, however, that people with cancer are either unemployed, stay at home, or retire more often than people without cancer. This study investigated the problems that cancer survivors experience on the labour market, as well as the disease-related, sociodemographic and psychosocial factors at work that are associated with the employment and work ability of cancer survivors. The impact of cancer on employment was studied by combining data from the Finnish Cancer Registry with census data from Statistics Finland for the years 1985, 1990, 1995 or 1997. There were two data sets, containing 46 312 and 12 542 people with cancer. The results showed that cancer survivors were slightly less often employed than their referents. Two to three years after the diagnosis, the employment rate of the cancer survivors was 9 percentage points lower than that of their referents (64% vs. 73%), whereas the employment rate had been the same before the diagnosis (78%). The employment rate varied greatly according to cancer type and education. The probability of being employed was greater in the lower than in the higher educational groups. People with cancer were less often employed than people without cancer mainly because of their higher retirement rate (34% vs. 27%). Like employment, retirement varied by cancer type. The risk of retirement was twofold for people with cancer of the nervous system or leukaemia compared to their referents, whereas people with skin cancer, for example, had no increased risk of retirement. The aim of the questionnaire study was to investigate whether the work ability of cancer survivors differs from that of people without cancer and whether cancer had impaired their work ability. There were 591 cancer survivors and 757 referents in the data. Even though current work ability did not differ between the survivors and their referents, 26% of the cancer survivors reported that their physical work ability, and 19% that their mental work ability, had deteriorated due to cancer. The survivors who had other diseases or had had chemotherapy most often reported impaired work ability, whereas survivors with a strong commitment to their work organization, or a good social climate at work, reported impairment less frequently. The aim of the second questionnaire study, comprising 640 people with a history of cancer, was to examine the extent of social support that cancer survivors needed, and had received, from their work community. The cancer survivors had received most support from their co-workers, and they hoped for more support especially from the occupational health care personnel (39% of women and 29% of men). More support was especially needed by men who had lymphoma, had received chemotherapy, or had a low education level. The results of this study show that the majority of the survivors are able to return to work. There is, however, a group of cancer survivors who leave work life early, have impaired work ability due to their illness, and suffer from a lack of support from their workplace and the occupational health services. Treatment-related as well as sociodemographic factors play an important role in survivors' work-related problems, and presumably in their possibilities to continue working.
Abstract:
The analysis of sequential data is required in many diverse areas such as telecommunications, stock market analysis, and bioinformatics. A basic problem related to the analysis of sequential data is the sequence segmentation problem. A sequence segmentation is a partition of the sequence into a number of non-overlapping segments that cover all data points, such that each segment is as homogeneous as possible. This problem can be solved optimally using a standard dynamic programming algorithm (a sketch is given after this abstract). In the first part of the thesis, we present a new approximation algorithm for the sequence segmentation problem. This algorithm has a smaller running time than the optimal dynamic programming algorithm, while providing a bounded approximation ratio. The basic idea is to divide the input sequence into subsequences, solve the problem optimally in each subsequence, and then appropriately combine the solutions to the subproblems into one final solution. In the second part of the thesis, we study alternative segmentation models that are devised to better fit the data. More specifically, we focus on clustered segmentations and segmentations with rearrangements. While in the standard segmentation of a multidimensional sequence all dimensions share the same segment boundaries, in a clustered segmentation the multidimensional sequence is segmented in such a way that dimensions are allowed to form clusters. Each cluster of dimensions is then segmented separately. We formally define the problem of clustered segmentations and we experimentally show that segmenting sequences using this segmentation model leads to solutions with smaller error for the same model cost. Segmentation with rearrangements is a novel variant of the segmentation problem: in addition to partitioning the sequence, we also seek to apply a limited amount of reordering, so that the overall representation error is minimized. We formulate the problem of segmentation with rearrangements and we show that it is NP-hard to solve or even to approximate. We devise effective algorithms for the proposed problem, combining ideas from dynamic programming and outlier detection algorithms in sequences. In the final part of the thesis, we discuss the problem of aggregating results of segmentation algorithms on the same set of data points. In this case, we are interested in producing a partitioning of the data that agrees as much as possible with the input partitions. We show that this problem can be solved optimally in polynomial time using dynamic programming. Furthermore, we show that not all data points are candidates for segment boundaries in the optimal solution.
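To make the baseline concrete, here is a minimal sketch of the standard dynamic program for optimal k-segmentation under the common squared-error cost (in Python; the function name and the O(n²k) formulation are illustrative, not the thesis's notation):

```python
import numpy as np

def optimal_segmentation(x, k):
    """Optimal k-segmentation of sequence x minimising total squared error,
    via the classic O(n^2 k) dynamic program."""
    n = len(x)
    # Prefix sums allow O(1) evaluation of any segment's squared error.
    s = np.concatenate(([0.0], np.cumsum(x)))
    s2 = np.concatenate(([0.0], np.cumsum(np.square(x))))

    def cost(i, j):
        # Squared error of representing x[i:j] by its mean.
        return s2[j] - s2[i] - (s[j] - s[i]) ** 2 / (j - i)

    # E[j, t]: minimal error of segmenting the prefix x[:j] into t segments.
    E = np.full((n + 1, k + 1), np.inf)
    back = np.zeros((n + 1, k + 1), dtype=int)
    E[0, 0] = 0.0
    for t in range(1, k + 1):
        for j in range(t, n + 1):
            for i in range(t - 1, j):
                c = E[i, t - 1] + cost(i, j)
                if c < E[j, t]:
                    E[j, t], back[j, t] = c, i
    # Backtrack to recover the segment boundaries.
    boundaries, j = [], n
    for t in range(k, 0, -1):
        boundaries.append(j)
        j = back[j, t]
    return E[n, k], boundaries[::-1]
```

For example, `optimal_segmentation([1, 1, 1, 5, 5, 5], 2)` returns error 0.0 and boundaries [3, 6], i.e. the segments x[0:3] and x[3:6]. The approximation algorithm described in the abstract replaces this single global dynamic program with optimal solutions on subsequences that are then combined.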
Abstract:
The paradigm of computational vision hypothesizes that any visual function -- such as the recognition of your grandparent -- can be replicated by computational processing of the visual input. What are these computations that the brain performs? What should or could they be? Working on the latter question, this dissertation takes the statistical approach, in which one attempts to learn the suitable computations from the natural visual data itself. In particular, we empirically study the computational processing that emerges from the statistical properties of the visual world and from the constraints and objectives specified for the learning process. This thesis consists of an introduction and seven peer-reviewed publications, where the purpose of the introduction is to illustrate the area of study to a reader who is not familiar with computational vision research. In the introduction, we briefly overview the primary challenges to visual processing, and recall some of the current opinions on visual processing in the early visual systems of animals. Next, we describe the methodology we have used in our research, and discuss the presented results. We have included in this discussion some additional remarks, speculations and conclusions that were not featured in the original publications. We present the following results in the publications of this thesis. First, we empirically demonstrate that luminance and contrast are strongly dependent in natural images, contradicting previous theories suggesting that luminance and contrast were processed separately in natural systems due to their independence in the visual data. Second, we show that simple-cell-like receptive fields of the primary visual cortex can be learned in the nonlinear contrast domain by maximization of independence. Further, we provide the first reports of the emergence of conjunctive (corner-detecting) and subtractive (opponent-orientation) processing due to nonlinear projection pursuit with simple objective functions related to sparseness and response energy optimization. Then, we show that attempting to extract independent components of nonlinear histogram statistics of a biologically plausible representation leads to projection directions that appear to differentiate between visual contexts. Such processing might be applicable for priming, i.e. the selection and tuning of later visual processing. We continue by showing that a different kind of thresholded low-frequency priming can be learned and used to make object detection faster with little loss in accuracy. Finally, we show that in a computational object detection setting, nonlinearly gain-controlled visual features of medium complexity can be acquired sequentially as images are encountered and discarded. We present two online algorithms to perform this feature selection, and propose the idea that for artificial systems, some processing mechanisms could be selectable from the environment without optimizing the mechanisms themselves. In summary, this thesis explores learning visual processing on several levels. The learning can be understood as an interplay of input data, model structures, learning objectives, and estimation algorithms. The presented work adds to the growing body of evidence showing that statistical methods can be used to acquire intuitively meaningful visual processing mechanisms. The work also presents some predictions and ideas regarding biological visual processing.
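For orientation, the generic recipe behind "receptive fields learned by maximization of independence" can be sketched with off-the-shelf ICA. This is only a stand-in for the publications' actual estimators (which operate in the nonlinear contrast domain and are not reproduced here), and the random `patches` array is a placeholder for whitened natural-image patch data:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Placeholder for whitened grayscale image patches, one per row
# (e.g. 16x16 patches flattened to 256-dimensional vectors).
rng = np.random.default_rng(0)
patches = rng.standard_normal((5000, 256))

# Maximizing the independence of linear features: each row of the
# unmixing matrix learned by ICA plays the role of a receptive field.
ica = FastICA(n_components=64, whiten="unit-variance", random_state=0)
ica.fit(patches)
receptive_fields = ica.components_.reshape(64, 16, 16)
```

On real natural-image data, the learned rows typically come out as localized, oriented, bandpass filters resembling simple-cell receptive fields; on the random placeholder data used here they are of course structureless.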
Abstract:
This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data, and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs (see the sketch after this abstract). Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network, simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are interested not only in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating-dominating codes are more appropriate. This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating-dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs: these are geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating-dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
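As a toy illustration of what a max-min linear program looks like, here is the standard epigraph reformulation solved centrally with SciPy. The matrices are made-up numbers, and the thesis's actual contribution, constant-time local algorithms for such programs, is not attempted here:

```python
import numpy as np
from scipy.optimize import linprog

# A toy max-min linear program of the kind used for lifetime maximisation:
#   maximise  min_k (C x)_k   subject to  A x <= 1,  x >= 0.
# Standard trick: add a scalar t and maximise t subject to C x >= t.
A = np.array([[1.0, 2.0], [2.0, 1.0]])   # resource (energy) constraints
C = np.array([[1.0, 0.0], [0.0, 1.0]])   # objectives to keep balanced

k, n = C.shape
# Decision variables: (x_1, ..., x_n, t); linprog minimises, so use -t.
c = np.zeros(n + 1)
c[-1] = -1.0
# Constraints: A x <= 1, and t - (C x)_k <= 0 for each objective k.
A_ub = np.block([[A, np.zeros((A.shape[0], 1))],
                 [-C, np.ones((k, 1))]])
b_ub = np.concatenate([np.ones(A.shape[0]), np.zeros(k)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
x, t = res.x[:n], res.x[-1]
print("optimal min objective:", t, "at x =", x)  # here t = 1/3, x = (1/3, 1/3)
```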
Abstract:
The aim of the present study is to analyze Confucian understandings of the Christian doctrine of salvation in order to identify the basic problems in the Confucian-Christian dialogue. I approach the task via a systematic theological analysis of four issues, in order to keep the thesis to an appropriate size. They are analyzed in three chapters as follows: 1. The Confucian concept concerning the existence of God; here I discuss mainly the assimilation of the Christian concept of God to the concepts of Sovereign on High (Shangdi) and Heaven (Tian) in Confucianism. 2. The Confucian understanding of the object of salvation and its status in Christianity. 3. The Confucian understanding of the means of salvation in Christianity. Before beginning this analysis it is necessary to clarify the vast variety of controversies, arguments, ideas, opinions and comments expressed in the name of Confucianism; thus, clear distinctions among the different schools of Confucianism are drawn in chapter 2. In the last chapter I discuss the results of my research by pointing out the basic problems that emerge from the analysis. The results of the present study provide conclusions in three related areas: the tacit differences in ways of thinking between Confucians and Christians, the basic problems of the Confucian-Christian dialogue, and the affirmative elements in the dialogue. In addition to a summary, a bibliography and an index, there are also eight appendices, which introduce important background information for understanding the present study.
Abstract:
The cosmological observations of light from type Ia supernovae, the cosmic microwave background and the galaxy distribution seem to indicate that the expansion of the universe has accelerated during the latter half of its age. Within standard cosmology, this is ascribed to dark energy, a uniform fluid with large negative pressure that gives rise to repulsive gravity but also entails serious theoretical problems. Understanding the physical origin of the perceived accelerated expansion has been described as one of the greatest challenges in theoretical physics today. In this thesis, we discuss the possibility that, instead of dark energy, the acceleration is caused by an effect of nonlinear structure formation on light, an effect ignored in standard cosmology. A physical interpretation of the effect goes as follows: as the initially smooth matter clusters over time into filaments of opaque galaxies, the regions through which the detectable light travels become emptier and emptier relative to the average. Since a developing void expands the faster the lower its matter density becomes, the expansion can then accelerate along our line of sight without local acceleration, potentially obviating the need for the mysterious dark energy. In addition to offering a natural physical interpretation of the acceleration, we have further shown that an inhomogeneous model is able to match the main cosmological observations without dark energy, resulting in a concordant picture of the universe with 90% dark matter, 10% baryonic matter and 15 billion years as the age of the universe. The model also provides an elegant solution to the coincidence problem: if induced by the voids, the onset of the perceived acceleration naturally coincides with the formation of the voids. Future tests include quantitative predictions for angular deviations and a theoretical derivation of the model to reduce the required phenomenology. A spin-off of the research is a physical classification of the cosmic inhomogeneities according to how they could induce accelerated expansion along our line of sight. We have identified three physically distinct mechanisms: global acceleration due to spatial variations in the expansion rate, a faster local expansion rate due to a large local void, and biased light propagation through voids that expand faster than the average. A general conclusion is that the physical properties crucial to account for the perceived acceleration are the growth of the inhomogeneities and the inhomogeneities in the expansion rate. The existence of these properties in the real universe is supported by both observational data and theoretical calculations. However, better data and more sophisticated theoretical models are required to vindicate or disprove the conjecture that the inhomogeneities are responsible for the acceleration.
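One standard way to formalise how "spatial variations in the expansion rate" can source global acceleration is Buchert's averaged acceleration equation for irrotational dust, a common backbone in this line of research (shown here for orientation; not necessarily the exact formalism of the thesis):

```latex
% Averaged scale factor a_D over a spatial domain D; theta is the local
% expansion rate, sigma the shear, and Q_D the kinematical backreaction.
3\,\frac{\ddot{a}_{\mathcal{D}}}{a_{\mathcal{D}}}
  = -4\pi G\,\langle \rho \rangle_{\mathcal{D}} + \mathcal{Q}_{\mathcal{D}},
\qquad
\mathcal{Q}_{\mathcal{D}}
  = \frac{2}{3}\left(\langle \theta^{2} \rangle_{\mathcal{D}}
      - \langle \theta \rangle_{\mathcal{D}}^{2}\right)
    - 2\,\langle \sigma^{2} \rangle_{\mathcal{D}}
```

Because the variance term in Q_D is non-negative, a sufficiently large spread in the local expansion rates can make the averaged scale factor accelerate even though every local fluid element decelerates, corresponding to the first of the three mechanisms listed above.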
Abstract:
This thesis studies quantile residuals and uses different methodologies to develop test statistics that are applicable in evaluating linear and nonlinear time series models based on continuous distributions. Models based on mixtures of distributions are of special interest because it turns out that for those models the traditional residuals, often referred to as Pearson's residuals, are not appropriate. As such models have become more and more popular in practice, especially with financial time series data, there is a need for reliable diagnostic tools that can be used to evaluate them. The aim of the thesis is to show how such diagnostic tools can be obtained and used in model evaluation. The quantile residuals considered here are defined in such a way that, when the model is correctly specified and its parameters are consistently estimated, they are approximately independent with a standard normal distribution. All the tests derived in the thesis are pure significance type tests and are theoretically sound in that they properly take the uncertainty caused by parameter estimation into account. In Chapter 2, a general framework based on the likelihood function and smooth functions of univariate quantile residuals is derived that can be used to obtain misspecification tests for various purposes. Three easy-to-use tests aimed at detecting non-normality, autocorrelation, and conditional heteroscedasticity in quantile residuals are formulated. It also turns out that these tests can be interpreted as Lagrange Multiplier or score tests, so that they are asymptotically optimal against local alternatives. Chapter 3 extends the concept of quantile residuals to multivariate models. The framework of Chapter 2 is generalized, and tests aimed at detecting non-normality, serial correlation, and conditional heteroscedasticity in multivariate quantile residuals are derived based on it. Score test interpretations are obtained for the serial correlation and conditional heteroscedasticity tests and, in a rather restricted special case, for the normality test. In Chapter 4 the tests are constructed using the empirical distribution function of quantile residuals. The so-called Khmaladze martingale transformation is applied in order to eliminate the uncertainty caused by parameter estimation. Various test statistics are considered, so that critical bounds for histogram type plots as well as Quantile-Quantile and Probability-Probability type plots of quantile residuals are obtained. Chapters 2, 3, and 4 contain simulations and empirical examples which illustrate the finite-sample size and power properties of the derived tests, and also how the tests and related graphical tools based on residuals are applied in practice.
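To make the central definition concrete: the quantile residual of an observation y_t is r_t = Φ⁻¹(F(y_t; θ̂)), where F is the model's (conditional) cumulative distribution function evaluated at the estimated parameters and Φ⁻¹ is the standard normal quantile function. A small Python sketch for an unconditional two-component Gaussian mixture (the parameters are hypothetical stand-ins for a fitted model, not values from the thesis):

```python
import numpy as np
from scipy.stats import norm

# Quantile residual: r_t = Phi^{-1}( F(y_t; theta_hat) ). Under a correctly
# specified model with consistently estimated parameters, the r_t are
# approximately iid N(0, 1).
def mixture_cdf(y, w=0.7, mu=(0.0, 3.0), sd=(1.0, 2.0)):
    # CDF of a two-component Gaussian mixture (hypothetical "fitted" model).
    return w * norm.cdf(y, mu[0], sd[0]) + (1 - w) * norm.cdf(y, mu[1], sd[1])

# Simulate data from the same mixture, i.e. a correctly specified model.
rng = np.random.default_rng(1)
comp = rng.random(1000) < 0.7
y = np.where(comp, rng.normal(0.0, 1.0, 1000), rng.normal(3.0, 2.0, 1000))

r = norm.ppf(mixture_cdf(y))   # quantile residuals
print(r.mean(), r.std())       # close to 0 and 1, as expected
```

The diagnostic tests of Chapters 2-4 then check these residuals for departures from the iid N(0, 1) benchmark: non-normality, autocorrelation, and conditional heteroscedasticity.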