27 results for one percent


Relevance: 20.00%

Abstract:

This thesis describes current and past n-in-one methods and presents three early experimental studies applying mass spectrometry and the triple quadrupole instrument to n-in-one approaches in drug discovery. The n-in-one strategy pools and mixes samples in drug discovery prior to measurement or analysis, allowing the most promising compounds to be rapidly identified and then analysed. Nowadays the properties of drugs are characterised earlier and in parallel with pharmacological efficacy. The studies presented here use in vitro methods, such as Caco-2 cells and immobilized artificial membrane chromatography, for drug absorption and lipophilicity measurements. The high sensitivity and selectivity of liquid chromatography-mass spectrometry are especially important for new analytical methods using n-in-one. In the first study, the fragmentation patterns of ten nitrophenoxy benzoate compounds, a homologous series, were characterised and the presence of the compounds was determined in a combinatorial library. The influence of one or two nitro substituents and of alkyl chain length (methyl to pentyl) on collision-induced fragmentation was studied, and interesting structure-fragmentation relationships were detected. Compounds with two nitro groups fragmented more than those with one, whereas less fragmentation was noted in molecules with a longer alkyl chain. The most abundant product ions were nitrophenoxy ions, which were also tested in precursor ion screening of the combinatorial library. In the second study, the immobilized artificial membrane chromatographic method was transferred from ultraviolet detection to mass spectrometric analysis and a new method was developed. Mass spectra were scanned and the chromatographic retention of compounds was analysed using extracted ion chromatograms. When detectors and buffers were changed and n-in-one was included in the method, the results showed good correlation. Finally, the results demonstrated that mass spectrometric detection with gradient elution can provide a rapid and convenient n-in-one method for ranking the lipophilic properties of several structurally diverse compounds simultaneously. In the final study, a new method was developed for Caco-2 samples. Compounds were separated by liquid chromatography and quantified by selected reaction monitoring using mass spectrometry. This method was used for Caco-2 samples, in which the absorption of ten chemically and physiologically different compounds was screened using both single-compound and n-in-one approaches. These three studies used mass spectrometry for compound identification, method transfer and quantitation in the area of mixture analysis. A different mass spectrometric scanning mode of the triple quadrupole instrument was used in each method. Early drug discovery with n-in-one is an area where mass spectrometric analysis, its possibilities and its proper use, is especially important.

Relevance: 20.00%

Abstract:

The study seeks to find out whether the real burden of personal taxation has increased or decreased. In order to determine this, we investigate how the same real income has been taxed in different years. Whenever the taxes on the same real income are higher in a given year than in the base year, the real tax burden has increased; if they are lower, the real tax burden has decreased. The study thus seeks to estimate how changes in the tax regulations affect the real tax burden. It should be kept in mind that the progression in the central government income tax schedule ensures that a real change in income will bring about a change in the tax ratio. Inflation will likewise increase the real tax burden if the tax schedules are kept nominally unchanged. In the calculations of the study it is assumed that real income remains constant, so that we obtain an unbiased measure of the effects of governmental actions in real terms. The main factors influencing the amount of income taxes an individual must pay are as follows:

- gross income (income subject to central and local government taxes);
- deductions from gross income and from taxes calculated according to tax schedules;
- the central government income tax schedule (progressive income taxation);
- the rates for local taxes and for social security payments (proportional taxation).

In the study we investigate how much a certain group of taxpayers would have paid in taxes according to the actual tax regulations prevailing in different years if their income had been kept constant in real terms. Other factors affecting tax liability are kept strictly unchanged (as constants). The resulting taxes, expressed in fixed prices, are then compared to the taxes levied in the base year (hypothetical taxation). The question we are addressing is thus how much a certain group of taxpayers with the same socioeconomic characteristics would have paid in taxes on the same real income according to the actual tax regulations prevailing in different years. This has been suggested as the main way to measure real changes in taxation, although there are several alternative measures with essentially the same aim.

Next, an aggregate indicator of changes in income tax rates is constructed. It is designed to show how much the taxation of income has increased or decreased on average from one year to the next. The main question is how aggregation over all income levels should be performed. In order to determine the average real changes in the tax scales, the difference functions (differences between the actual and hypothetical taxation functions) were aggregated using taxable income as weights. Besides the difference functions, the relative changes in real taxes can be used as indicators of change. In this case the ratio between the taxes computed according to the new and the old situation indicates whether taxation has become heavier or lighter. The relative changes in tax scales can be described in a way similar to that used in describing the cost of living, that is, by means of price indices. For example, we can use Laspeyres' price index formula to compute the ratio between the taxes determined by the new tax scales and those determined by the old tax scales. The formula answers the question: how much more or less will be paid in taxes according to the new tax scales than according to the old ones when the real income situation corresponds to the old situation?

In real terms, the central government tax burden experienced a steady decline from its high post-war level up until the mid-1950s. The real tax burden then drifted upwards until the mid-1970s; the real level of taxation in 1975 was twice that of 1961. In the 1980s there was a steady phase owing to the inflation corrections of the tax schedules. In 1989 the tax schedule was lowered drastically, and from the mid-1990s onward changes in the tax schedules have decreased the real tax burden significantly. Local tax rates have risen continuously, from 10 percent in 1948 to nearly 19 percent in 2008. Deductions have lowered the real tax burden especially in recent years. The aggregate figures indicate how the tax ratio for the same real income has changed over the years according to the prevailing tax regulations. We call the tax ratio calculated in this manner the real income tax ratio; a change in it depicts an increase or decrease in the real tax burden. The real income tax ratio declined after the war for some years, nearly doubled between the beginning of the 1960s and the mid-1970s, and has fallen by about 35 percent since the mid-1990s.
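
To make the aggregation concrete, the Laspeyres-type comparison described above can be written out as follows. This is a sketch in assumed notation (the abstract does not give the thesis's own symbols): T_0 and T_1 denote the tax functions under the old and new tax scales, the incomes y_i are held constant in real terms at base-year levels, and the weights w_i are base-year taxable incomes.

```latex
% Laspeyres-type index of the change in tax scales (notation assumed)
I_L = \frac{\sum_i T_1(y_i)\, w_i}{\sum_i T_0(y_i)\, w_i}
```

A value of I_L above one indicates that the new scales tax the same real income situation more heavily than the old ones; a value below one indicates a lighter real tax burden.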

Relevance: 20.00%

Abstract:

Aims: We develop and validate tools to estimate the residual noise covariance in Planck frequency maps, quantify signal error effects, and compare different techniques for producing low-resolution maps.

Methods: We derive analytical estimates of the covariance of the residual noise contained in low-resolution maps produced using a number of map-making approaches. We test these analytical predictions using Monte Carlo simulations and assess their impact on angular power spectrum estimation. We use simulations to quantify the level of signal error incurred in the different resolution downgrading schemes considered in this work.

Results: We find excellent agreement between the optimal residual noise covariance matrices and Monte Carlo noise maps. For destriping map-makers, the extent of agreement is dictated by the knee frequency of the correlated noise component and the chosen baseline offset length. Signal striping is shown to be insignificant when properly dealt with. In map resolution downgrading, we find that a carefully selected window function is required to reduce aliasing to the sub-percent level at multipoles ell > 2Nside, where Nside is the HEALPix resolution parameter. We show that sufficient characterization of the residual noise is unavoidable if one is to draw reliable constraints on large-scale anisotropy.

Conclusions: We have described how to compute low-resolution maps with a controlled sky signal level and a reliable estimate of the residual noise covariance. We have also presented a method for smoothing the residual noise covariance matrices to describe the noise correlations in smoothed, bandwidth-limited maps.
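
As an illustration of the downgrading step discussed in the results, the sketch below band-limits a HEALPix map, applies a smoothing window in harmonic space and repixelizes at a lower resolution. This is a minimal sketch using the public healpy package; the Gaussian window, its width and the file names are illustrative assumptions, not the pipeline actually used for Planck.

```python
import numpy as np
import healpy as hp

nside_out = 32                            # target HEALPix resolution
lmax = 3 * nside_out - 1                  # band limit of the target map

m = hp.read_map("input_map.fits")         # hypothetical high-resolution map

# A smoothing window suppresses power near ell ~ 2*Nside_out, which is
# what keeps aliasing in the downgraded map at the sub-percent level.
fwhm = np.radians(440.0 / 60.0)           # ~440 arcmin, an assumed width
window = hp.gauss_beam(fwhm, lmax=lmax)

alm = hp.map2alm(m, lmax=lmax)            # band-limit the input map
alm = hp.almxfl(alm, window)              # apply the window in harmonic space
m_low = hp.alm2map(alm, nside=nside_out)  # repixelize at low resolution

hp.write_map("downgraded_map.fits", m_low, overwrite=True)
```

The same window would then enter the smoothing of the residual noise covariance matrix, so that the noise description matches the bandwidth-limited map.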

Relevance: 20.00%

Abstract:

Acute heart failure (AHF) is a complex syndrome associated with exceptionally high mortality. Still, the characteristics and prognostic factors of contemporary AHF patients have been inadequately studied. Kidney function has emerged as a very powerful prognostic risk factor in cardiovascular disease. This is believed to be a consequence of an interaction between the heart and kidneys, also termed the cardiorenal syndrome, the mechanisms of which are not fully understood. Renal insufficiency is common in heart failure and of particular interest for predicting outcome in AHF. Cystatin C (CysC) is a marker of glomerular filtration rate with properties that make it a promising alternative to creatinine, the currently used measure, for the assessment of renal function. The aim of this thesis is to characterize a representative cohort of patients hospitalized for AHF and to identify risk factors for poor outcome in AHF. In particular, the role of CysC as a marker of renal function is evaluated, including an examination of the value of CysC as a predictor of mortality in AHF. The FINN-AKVA (Finnish Acute Heart Failure) study is a national prospective multicenter study conducted to investigate the clinical presentation, aetiology and treatment of, as well as concomitant diseases and outcome in, AHF. Patients hospitalized for AHF were enrolled in the FINN-AKVA study, and mortality was followed for 12 months. The mean age of patients with AHF was 75 years, and they frequently had both cardiovascular and non-cardiovascular co-morbidities. Mortality after hospitalization for AHF was high, rising to 27% by 12 months. The present study shows that renal dysfunction is very common in AHF: CysC detected impaired renal function in forty percent of patients. Renal function, measured by CysC, was one of the strongest predictors of mortality, independently of other prognostic risk markers such as age, gender, co-morbidities and systolic blood pressure on admission. Moreover, in patients with normal creatinine values, elevated CysC was associated with a marked increase in mortality. Acute kidney injury, defined as an increase in CysC within 48 hours of hospital admission, occurred in a significant proportion of patients and was associated with increased short- and mid-term mortality. The results suggest that CysC can be used for risk stratification in AHF. Markers of inflammation are elevated both in heart failure and in chronic kidney disease, and inflammation is one of the mechanisms thought to mediate heart-kidney interactions in the cardiorenal syndrome. Inflammatory cytokines such as interleukin-6 (IL-6) and tumor necrosis factor-alpha (TNF-α) correlated very differently with markers of cardiac stress and renal function. In particular, TNF-α showed a robust correlation with CysC but was not associated with levels of NT-proBNP, a marker of hemodynamic cardiac stress. Compared to CysC, the inflammatory markers were not strongly related to mortality in AHF. In conclusion, patients with AHF are elderly with multiple co-morbidities, and renal dysfunction is very common among them. CysC demonstrates good diagnostic properties both in identifying impaired renal function and in detecting acute kidney injury in patients with AHF. As a measure of renal function, CysC is also a powerful prognostic marker in AHF and shows promise for the assessment of kidney function and risk stratification in patients hospitalized for AHF.

Relevance: 20.00%

Abstract:

The purpose of this study was to deepen the understanding of market segmentation theory by studying the evolution of the concept and by identifying the antecedents and consequences of the theory. The research method was influenced by content analysis and meta-analysis. The evolution of market segmentation theory was studied as a reflection of the evolution of marketing theory. According to this study, the theory of market segmentation has its roots in microeconomics and has been influenced by different disciplines, such as motivation research and buyer behaviour theory. Furthermore, this study suggests that the evolution of market segmentation theory can be divided into four major eras: foundations; development and blossoming; stillness and stagnation; and re-emergence. Market segmentation theory emerged in the mid-1950s and flourished from the mid-1950s to the late 1970s. During the 1980s the scientific community lost interest in the theory and no significant contributions were made. Now, towards the dawn of the new millennium, new approaches have emerged and market segmentation has gained new attention.

Relevance: 20.00%

Abstract:

Modern-day economics is increasingly inclined to believe that institutions matter for growth, an argument that has been further reinforced by the recent economic crisis. There is also a wide consensus on what these growth-promoting institutions should look like, and countries are periodically ranked according to how their institutional structure compares with the best-practice institutions, mostly in place in the developed world. In this paper, it is argued that "non-desirable" or "second-best" institutions can be beneficial for fostering investment and can thus provide a starting point for sustained growth, and that what matters is the appropriateness of institutions to the economy's distance to the frontier, or current phase of development. Anecdotal evidence from Japan and South Korea is used as motivation for studying the subject, and a model is presented to describe this phenomenon. In the model, the rigidity or non-rigidity of institutions is captured by entrepreneurial selection. It is assumed that entrepreneurs are the ones who take part in the imitation and innovation of technologies, and that decisions on whether or not their projects are refinanced are made by capitalists. The capitalists in turn have no entrepreneurial skills and act merely as financiers of projects. The model has two periods and two kinds of entrepreneurs: those with high skills and those with low skills. The society's choice between an imitation-based and an innovation-based strategy is modeled as a trade-off between refinancing a low-skill entrepreneur and investing in the selection of entrepreneurs, the latter resulting in a larger fraction of high-skill entrepreneurs with the ability to innovate but in less total investment. Finally, a real-world example from India is presented as an initial attempt to test the theory; the data from the example are not included in this paper. It is noted that the model may lack explanatory power owing to difficulties in testing its predictions, but this should not be seen as a reason to disregard the theory: the solution might lie in developing better tools, not just better theories. The conclusion is that institutions do matter. There is no one-size-fits-all solution when it comes to institutional arrangements in different countries, and developing countries should be given space to develop their own institutional structures that cater to their specific needs.

Relevance: 20.00%

Abstract:

This study deals with how ethnic minorities and immigrants are portrayed in the Finnish print media. The study also asks how media users of various ethnocultural backgrounds make sense of these mediated stories. A more general objective is to elucidate negotiations of belonging and positioning practices in an increasingly complex society. The empirical part of the study is based on content analysis and qualitative close reading of 1,782 articles in five newspapers (Hufvudstadsbladet, Vasabladet, Helsingin Sanomat, Iltalehti and Ilta-Sanomat) during various research periods between 1999 and 2007. Four case studies on print media content are followed up by a focus group study involving 33 newspaper readers of Bosnian, Somalian, Russian, and 'native' Finnish backgrounds. The study draws from different academic and intellectual traditions, mainly media and communication studies, sociology and social psychology. The main theoretical framework employed is positioning theory, as developed by Rom Harré and others. Building on this perspective, situational self-positioning, positioning by others, and media positioning are seen as central practices in the negotiation of belonging. In line with contemporary developments in the social sciences, some of these negotiations are seen as occurring in a network type of communicative space. In this space, the media form one of the most powerful institutions in constructing, distributing and legitimising values and ideas of who belongs to 'us', and who does not. The notion of positioning always involves an exclusionary potential. This thesis joins scholars who assert that, in order to understand inclusionary and exclusionary mechanisms, the theoretical starting point must be the recognition of a decent and non-humiliating society. When key insights are distilled from the five empirical cases and related to the main theories, one of the major arguments put forward is that the media were first and foremost concerned with a minority actor's rightful or unlawful belonging to the Finnish welfare system. However, in some cases persistent stereotypes concerning some immigrant groups' motivation to work, pay taxes and therefore contribute are so strong that a general idea of individualism is forgotten in favour of racialised and stagnated views. Discussants of immigrant background also claim that the positions provided for minority actors in the media are not easy to identify with: categories are too narrow, journalists are biased, and the reporting is simplistic and carries labelling potential. Hence, although the will for a more diverse and inclusive communicative space exists, and has in many cases been articulated in charters, acts and codes, the positioning of ethnic minorities and immigrants differs significantly from the ideal.

Relevance: 20.00%

Abstract:

This is a study of the changing practices of kinship in Northern India. The change in kinship arrangements, and particularly in intermarriage processes, is traced by analysing the reception of Hindi popular cinema. Films, and their role and meaning in people's lives in India, were the object of my research. Films also provided me with a methodology for approaching my other subject matters: family, marriage and love. Through my discussion of cultural change, the persistence of family as a core value and locus of identity, and the movie discourses depicting this dialogue, I have looked for a possibility of compromise and reconciliation in an Indian context. As the primary form of Indian public culture, cinema has the ability to take part in discourses about Indian identity and cultural change, and to alleviate the conflicts that emerge within these discourses. Hindi popular films do this, I argue, by incorporating different familiar cultural narratives in a resourceful way, thus creating something new out of old elements. The final word, however, is that of the spectator. The “new” must come from within the culture; Indian modernity must be imaginable and distinctively Indian. The social imagination is not a “Wild West” where new ideas enter a void and start living a life of their own. The way the young women in Dehra Dun interpreted family dramas and romantic movies highlights the importance of family and continuity in kinship arrangements. The institution of arranged marriage has changed its appearance and gained new alternative modes, such as love-cum-arranged marriage. It nevertheless remains arranged by the parents. In my thesis I have offered a social description of a cultural reality in which movies are a built-in part. Movies do not operate as a distinct realm, but intertwine with the social realities of people as part of a continuum. The social imagination is rooted in the everyday realities of people, as are the movies, in an ontological and categorical sense. According to my research, the links between imagination and social life were not so much global and deterritorialised, as Arjun Appadurai would have it, but local and conventional.

Relevance: 20.00%

Abstract:

Goals. Specific language impairment (SLI) has a negative impact on a child's speech and language development and on interaction. The disorder may be associated with a wide range of comorbid problems. In clinical speech therapy it is important to see the child as a whole so that rehabilitation can be targeted properly. The aim of this study was to describe the linguistic-cognitive and comorbid symptoms of children with SLI at the age of five, as well as to provide an overview of developmental disorders in their families. The study is part of a larger research project, which will examine the developmental paths and quality of life of children with SLI as young adults.

Methods. The data consisted of the patient documents of 100 five-year-old children who were examined at Lastenlinna, mainly in 1998. The majority of the subjects were boys, and the children's primary diagnosis was either F80.1 or F80.2, the latter being the most common, or both. The diagnoses, the information about linguistic-cognitive status and comorbid symptoms, and mentions related to familiality were collected from the reports of medical doctors and experts in other fields. Linguistic-cognitive symptoms were divided into the subclasses of speech motor functions, processing of language, comprehension of language and use of language. Comorbid symptoms were divided into the subclasses of interaction, activity and attention, emotional and behavioural problems, and neurological problems. Statistical analyses were based mainly on Pearson's chi-square test.

Results and conclusions. Of the linguistic-cognitive symptoms, problems in language processing and speech motor functions were the most common. Most of the children had symptoms from two or three symptom classes, and girls seemed to have more symptoms than boys. Usually the children had no comorbid symptoms, or had them from one to three symptom classes. Of the comorbid symptoms, the most prevalent were problems in activity and attention and neurological symptoms, which consisted mostly of motor and visuomotor symptoms. The most common comorbid diagnosis was F82, specific developmental disorder of motor function. According to the literature, children with SLI may have mental health problems, but the results of this study did not confirm this. Children with diagnosis F80.2 had more linguistic-cognitive and comorbid symptoms than children with diagnosis F80.1. A cluster analysis based on all the symptoms revealed four subgroups of subjects. Of the subjects, 85 percent had a positive family history of developmental disorders, and the most prevalent problem in the families was delayed speech development. This study outlined the symptom profile of children with SLI and laid a foundation for a future longitudinal study. The results suggest that there are differences between the linguistic-cognitive symptoms of boys and girls, which is important to notice especially when assessing and diagnosing children with SLI.
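
The group comparisons mentioned above rest mainly on Pearson's chi-square test of independence. The minimal sketch below shows such a test; the 2x2 table is invented for illustration and is not the study's data.

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = diagnosis group (F80.1, F80.2),
# columns = comorbid symptoms (absent, present). Counts are invented.
table = [[30, 15],
         [20, 35]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

A small p-value would indicate that the presence of comorbid symptoms is not independent of the diagnosis group.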

Relevance: 20.00%

Abstract:

Periglacial processes act in cold, non-glacial regions where landscape development is mainly controlled by frost activity. Circa 25 percent of the Earth's surface can be considered periglacial. Geographical information systems combined with advanced statistical modeling methods provide an efficient tool and a new theoretical perspective for the study of cold environments. The aims of this study were to: 1) model and predict the abundance of periglacial phenomena in a subarctic environment with statistical modeling, 2) investigate the most important factors affecting the occurrence of these phenomena with hierarchical partitioning, 3) compare two widely used statistical modeling methods, Generalized Linear Models (GLM) and Generalized Additive Models (GAM), 4) study the effect of modeling resolution on prediction, and 5) study how a spatially continuous prediction can be obtained from point data. The observational data of this study consist of 369 points collected during the summers of 2009 and 2010 in the study area at Kilpisjärvi, northern Lapland. The periglacial phenomena of interest were cryoturbation, slope processes, weathering, deflation, nivation and fluvial processes. The features were modeled using GLMs and GAMs with Poisson errors. The abundance of periglacial features was predicted from these models onto a spatial grid with a resolution of one hectare. The most important environmental factors were examined with hierarchical partitioning. The effect of modeling resolution was investigated in a small independent study area at a spatial resolution of 0.01 hectare. The models explained 45-70% of the occurrence of periglacial phenomena. When spatial variables were added to the models, the amount of explained deviance was considerably higher, which signalled a geographical trend structure. The ability of the models to predict periglacial phenomena was assessed with independent evaluation data; Spearman's correlation between observed and predicted values varied from 0.258 to 0.754. Based on the explained deviance and the results of hierarchical partitioning, the most important environmental variables were mean altitude, vegetation and mean slope angle. The effect of modeling resolution was clear: too coarse a resolution caused a loss of information, while a finer resolution brought out more localized variation. The models' ability to explain and predict periglacial phenomena in the study area was mostly good and moderate, respectively. Differences between the modeling methods were small, although the explained deviance was higher with GLMs than with GAMs; in turn, GAMs produced more realistic spatial predictions. The single most important environmental variable controlling the occurrence of periglacial phenomena was mean altitude, which correlated strongly with many other explanatory variables. Ongoing global warming will have a great impact especially on cold environments at high latitudes, and for this reason an important research topic in the near future will be the response of periglacial environments to a warming climate.
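
As an illustration of the modeling approach, the sketch below fits a Poisson-error GLM of feature abundance and computes the two quantities the abstract reports: explained deviance and Spearman's rank correlation between observed and predicted values. The data, variable names and effect sizes are synthetic stand-ins, not the study's observations.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import spearmanr

# Synthetic stand-in for the 369 field observations.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "mean_altitude": rng.uniform(500.0, 1000.0, 369),
    "mean_slope": rng.uniform(0.0, 30.0, 369),
})
lam = np.exp(-4 + 0.005 * df["mean_altitude"] + 0.02 * df["mean_slope"])
df["feature_count"] = rng.poisson(lam)

# Poisson-error GLM of periglacial feature abundance.
X = sm.add_constant(df[["mean_altitude", "mean_slope"]])
fit = sm.GLM(df["feature_count"], X, family=sm.families.Poisson()).fit()

# Explained deviance, the fit measure (45-70%) quoted in the abstract.
explained = 1 - fit.deviance / fit.null_deviance
print(f"explained deviance: {explained:.2f}")

# Predictive ability via Spearman's rank correlation (here in-sample;
# the study evaluated against independent data).
rho, p = spearmanr(df["feature_count"], fit.fittedvalues)
print(f"Spearman rho = {rho:.3f} (p = {p:.3g})")
```

A GAM would replace the linear terms with smooth functions of the predictors, which is what allows the more flexible, and in this study more realistic, spatial response surfaces.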