26 results for explanatory variables


Relevance: 60.00%

Abstract:

Is oral health becoming a part of the global health culture? Oral health appears to be becoming part of the global health culture, according to the findings of a thesis research at the Institute of Dentistry, University of Helsinki. The thesis is entitled “Preadolescents and Their Mothers as Oral Health-Promoting Actors: Non-biologic Determinants of Oral Health among Turkish and Finnish Preadolescents.” The research was supervised by Prof. Murtomaa and led by Dr. A. Basak Cinar. It was conducted as a cross-sectional study of 611 Turkish and 223 Finnish preadolescents in Istanbul and Helsinki, drawn from the fourth, fifth, and sixth grades and aged 10 to 12, based on self-administered, pre-tested health behavior questionnaires for the children and their mothers, as well as the children's oral health records. The clinically assessed dental status (DMFT) and self-reported oral health of the Turkish preadolescents were significantly poorer than those of the Finns. A similar association occurred for well-being measures (height and weight, self-esteem), but not for school performance. Turkish preadolescents were more dentally anxious and reported lower mean values of toothbrushing self-efficacy and dietary self-efficacy than did the Finns. The Turks reported recommended oral health behaviors (toothbrushing twice daily or more, sweet consumption on two days or fewer per week, decreased between-meal sweet consumption) less frequently than did the Finns. Turkish mothers less frequently reported their dental health as above average, recommended oral health behaviors, and regular dental visits. Their mean value for dental anxiety was higher, and their self-efficacy in implementing twice-daily toothbrushing was lower, than those of the Finnish mothers. 
Despite these differences between the Turks and Finns, the following associations, assessed for the first time in a holistic framework, held for all preadolescents regardless of cultural differences and different oral health care systems. Oral health and general well-being (body height and weight, school performance, and self-esteem) appear to be interrelated among preadolescents:
• Body height was an explanatory factor for dental health, underlining possible common life-course factors for dental health and general well-being.
• Better school performance and high levels of self-esteem and self-efficacy were interrelated, and they contributed to good oral health.
• Good school performance was a common predictor of twice-daily toothbrushing. Self-efficacy and maternal modelling play a significant role in the maintenance and improvement of both oral- and general-health-related behaviors. In addition, there is a need to integrate self-efficacy-based approaches to promote better oral health.
• Preadolescents with high levels of self-efficacy were more likely to report frequent twice-daily toothbrushing and less frequent sweet consumption.
• Preadolescents were likely to imitate the toothbrushing and sweet-consumption behaviors of their mothers.
• High levels of self-efficacy contributed to low dental anxiety, in various patterns, in both groups.
In conclusion:
• Many health-detrimental behaviors arise in the school-age years and are unlikely to change later. Schools have a powerful influence on children's development and well-being. Therefore, oral health promotion in schools should be integrated into general health promotion, school curricula, and other activities.
• Health promotion messages should be reinforced in schools, enabling children and their families to develop lifelong, sustainable, positive health-related skills (self-esteem, self-efficacy) and behaviors.
• Placing more emphasis on behavioral sciences, preventive approaches, and community-based education during undergraduate studies should encourage social responsibility and health-promoting roles among dentists. Attempts to increase general well-being and reduce oral health inequalities among preadolescents will remain unsuccessful unless individual factors, as well as maternal and societal influences, are considered through psycho-social holistic approaches.

Relevance: 60.00%

Abstract:

This research discusses the decoupling of CAP (Common Agricultural Policy) support and the impacts it may have on grain cultivation area and the supply of beef and pork in Finland. The study presents definitions of and studies on decoupled agricultural subsidies, the development of the supply of grain, beef and pork in Finland, and changes in the leading factors affecting supply between 1970 and 2005. Decoupling agricultural subsidies means that the linkage between subsidies and production levels is disconnected: subsidies do not affect the amount produced. The hypothesis is that decoupling will decrease the amounts produced in agriculture substantially. In supply research, econometric models representing the supply of agricultural products are estimated from data on prices and quantities produced. With estimated supply models, the impacts of changes in prices and public policies on the supply of agricultural products can be forecast. In this study, three regression models are estimated, describing the combined cultivation area of rye, wheat, oats and barley, and the supply of beef and pork. The grain cultivation area and the supply of beef are estimated from data for 1970 to 2005, and the supply of pork from data for 1995 to 2005. The dependencies in the models are postulated to be linear. The explanatory variables in the grain model were the average return per hectare, agricultural subsidies, the grain cultivation area in the previous year and the cost of fertilization. The explanatory variables in the beef model were the total return from markets and subsidies and the amount of beef production in the previous year. In the pork model the explanatory variables were the total return, the price of piglets, investment subsidies, a trend of increasing productivity and a dummy variable for the last quarter of the year. The R-squared was 0.81 for the grain cultivation area model, 0.77 for the beef supply model and 0.82 for the pork supply model. 
The development of the grain cultivation area and the supply of beef and pork for 2006-2013 was estimated with these regression models. In the basic scenario, the explanatory variables in 2006-2013 were postulated to develop at their 1995-2005 averages. After the basic scenario, the impacts of decoupling CAP subsidies and domestic subsidies on cultivation area and supply were simulated. According to the results of the CAP-decoupling scenario, the grain cultivation area decreases from 1.12 million hectares in 2005 to 1.0 million hectares in 2013, and the supply of beef from 88.8 million kilos in 2005 to 67.7 million kilos in 2013. Decoupling domestic and investment subsidies will decrease the supply of pork from 194 million kilos in 2005 to 187 million kilos in 2006. By 2013 the supply of pork grows to 203 million kilos.
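Supply equations of the kind described above are linear regressions with a lagged dependent variable, summarized by an R-squared. As a rough, hedged illustration (synthetic numbers, not the thesis's data or exact specification), a minimal OLS fit with an R-squared computation might look like this:

```python
import numpy as np

def fit_ols(y, X):
    """Fit y = b0 + X @ b by ordinary least squares and return (beta, R^2)."""
    X = np.column_stack([np.ones(len(y)), X])      # add intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    return beta, r2

# Hypothetical stand-ins for the grain model's regressors: return per hectare,
# subsidies, and the previous year's cultivation area
rng = np.random.default_rng(0)
ret, sub = rng.normal(size=50), rng.normal(size=50)
area = 1.0 + 0.4 * ret + 0.3 * sub + rng.normal(scale=0.1, size=50)
lagged = np.roll(area, 1)[1:]                      # previous year's area
beta, r2 = fit_ols(area[1:], np.column_stack([ret[1:], sub[1:], lagged]))
```

With estimated coefficients in hand, a scenario forecast like the one above simply feeds assumed paths of the explanatory variables through the fitted equation year by year.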

Relevance: 60.00%

Abstract:

Determining the environmental factors controlling earth surface processes and landform patterns is one of the central themes in physical geography. However, identifying the main drivers of geomorphological phenomena is often challenging. Novel spatial analysis and modelling methods could provide new insights into process-environment relationships. The objective of this research was to map and quantitatively analyse the occurrence of cryogenic phenomena in subarctic Finland. More precisely, utilising a grid-based approach, the distribution and abundance of periglacial landforms were modelled to identify important landscape-scale environmental factors. The study was performed using a comprehensive empirical data set of periglacial landforms from an area of 600 km² at a 25-ha resolution. The statistical methods utilised were generalized linear modelling (GLM) and hierarchical partitioning (HP). GLMs were used to produce distribution and abundance models, and HP to reveal independently the most likely causal variables. The GLM models were assessed utilising statistical evaluation measures, prediction maps, field observations and the results of the HP analyses. A total of 40 different landform types and subtypes were identified. Topographical, soil property and vegetation variables were the primary correlates of the occurrence and cover of active periglacial landforms on the landscape scale. In the model evaluation, most of the GLMs were shown to be robust, although the explanatory power, prediction ability and selected explanatory variables varied between the models. This study demonstrated the great potential of combining a spatial grid system, terrain data and novel statistical techniques to map the occurrence of periglacial landforms. 
GLM proved to be a useful modelling framework for testing the shapes of the response functions and the significance of the environmental variables, and the HP method helped to draw better conclusions about the important factors behind earth surface processes. Hence, the numerical approach presented in this study can be a useful addition to the current range of techniques available to researchers for mapping and monitoring different geographical phenomena.
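Hierarchical partitioning estimates each predictor's independent contribution to model fit by averaging the improvement gained from adding it over every order in which the predictors can enter the model. A compact sketch of an R-squared-based version (a generic illustration on synthetic data, not the study's implementation) could be:

```python
import itertools

import numpy as np

def r2(y, X):
    """R-squared of an OLS fit of y on the columns of X (with intercept)."""
    if X.shape[1] == 0:
        return 0.0
    Xc = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def hierarchical_partitioning(y, X):
    """Independent contribution of each predictor: the gain in R-squared from
    adding it, averaged over every possible entry order of the predictors."""
    k = X.shape[1]
    contrib = np.zeros(k)
    perms = list(itertools.permutations(range(k)))
    for order in perms:
        used = []
        for j in order:
            before = r2(y, X[:, used])
            used.append(j)
            contrib[j] += r2(y, X[:, used]) - before
    return contrib / len(perms)

# Synthetic example: x0 matters most, x1 a little, x2 not at all
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)
contrib = hierarchical_partitioning(y, X)
```

A useful property of this decomposition is that the contributions sum exactly to the full-model R-squared, which is what makes the ranking of "most likely causal variables" comparable across predictors.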

Relevance: 60.00%

Abstract:

This thesis examines the role of deposit insurance schemes and the central bank (CB) in keeping the banking system safe. The thesis also studies the factors associated with long-lasting banking crises. The first essay analyzes the effect of using an explicit deposit insurance scheme (EDIS), instead of an implicit deposit insurance scheme (IDIS), on banking crises. The panel data for the period 1980-2003 include all countries for which data on EDIS or IDIS exist. 70% of the countries in the sample are less developed countries (LDCs), and about 55% of the countries adopting EDIS also come from LDCs. The major finding is that the use of EDIS increases the crisis probability at a strong significance level. This probability is greater if the EDIS is inefficiently designed, allowing greater scope for the moral hazard problem. Specifically, the probability is greater if the EDIS provides higher coverage of deposits and if it is less powerful from the legal point of view. This study also finds that the less prepared a country is to handle EDIS, the higher the chance of a banking crisis. Once the underdevelopment of an economy handling the EDIS is controlled for, the EDIS by itself is no longer a significant factor in banking crises. The second essay aims at determining whether a country's powerful CB can lessen the instability of the banking sector by minimizing the likelihood of a banking crisis. The data used include indicators of CB autonomy for a set of countries over the period 1980-89. The study finds that, in aggregate, a more powerful CB lessens the probability of a banking crisis. When the CB's authority is disentangled with respect to its responsibilities, the study finds that a longer tenure for the CB's chief executive officer and greater CB power in setting the interest rate on government loans are necessary for reducing the probability of a banking crisis. 
The study also finds that the probability of crisis falls further if an autonomous CB can perform its duties in a country with a stronger law-and-order tradition. The costs of long-lasting banking crises are high because both depositors and investors lose confidence in the banking system. For a rapid recovery from a crisis, the government very often undertakes one or more crisis resolution policy (CRP) measures. The third essay examines the CRP and other explanatory variables correlated with the durations of banking crises. The major finding is that the CRP measure allowing regulatory forbearance to keep insolvent banks operating, and the public debt relief program, are respectively strongly and weakly significant in increasing the durations of crises. Some other explanatory variables, found by previous studies to be related to the probability of crisis occurrence, are also correlated with the durations of crises.
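Crisis-probability studies of this kind typically rest on a binary-outcome (logit or probit) model. As a hedged sketch, with invented variable names (`cover` for deposit coverage, `legal` for the legal power of the scheme) and synthetic data rather than the essay's actual panel, a logit fitted by Newton-Raphson looks like this:

```python
import numpy as np

def fit_logit(y, X, iters=25):
    """Logistic regression by Newton-Raphson: P(y=1) = 1/(1 + exp(-X @ b))."""
    X = np.column_stack([np.ones(len(y)), X])      # intercept
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        grad = X.T @ (y - p)                       # score vector
        hess = X.T @ (X * (p * (1.0 - p))[:, None])  # observed information
        b += np.linalg.solve(hess, grad)
    return b

# Synthetic data: higher coverage raises crisis risk, legal power lowers it
rng = np.random.default_rng(1)
cover, legal = rng.normal(size=(2, 400))
p_true = 1.0 / (1.0 + np.exp(-(-0.5 + 1.2 * cover - 0.8 * legal)))
crisis = (rng.random(400) < p_true).astype(float)
b = fit_logit(crisis, np.column_stack([cover, legal]))
```

The signs of the fitted coefficients are what carry the substantive conclusions in studies like the one summarized above.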

Relevance: 60.00%

Abstract:

Detecting Earnings Management Using Neural Networks. In trying to balance relevant and reliable accounting data, generally accepted accounting principles (GAAP) allow, to some extent, company management to use their judgment and make subjective assessments when preparing financial statements. The opportunistic use of this discretion in financial reporting is called earnings management. A considerable number of methods for detecting accrual-based earnings management have been suggested. A majority of these methods are based on linear regression. The problem with using linear regression is that a linear relationship between the dependent variable and the independent variables must be assumed. However, previous research has shown that the relationship between accruals and some of the explanatory variables, such as company performance, is non-linear. An alternative to linear regression, which can handle non-linear relationships, is neural networks. The type of neural network used in this study is the feed-forward back-propagation neural network. Three neural network-based models are compared with four commonly used linear regression-based earnings management detection models. All seven models are based on the earnings management detection model presented by Jones (1991). The performance of the models is assessed in three steps. First, a random data set of companies is used. Second, the discretionary accruals from the random data set are ranked according to six different variables, and the discretionary accruals in the highest and lowest quartiles for these six variables are compared. Third, a data set containing simulated earnings management is used, with both expense and revenue manipulation ranging between -5% and 5% of lagged total assets. Furthermore, two neural network-based models and two linear regression-based models are applied to a data set containing financial statement data from 110 failed companies. 
Overall, the results show that the linear regression-based models, except for the model using a piecewise linear approach, produce biased estimates of discretionary accruals. The neural network-based model with the original Jones model variables and the neural network-based model augmented with ROA as an independent variable, however, perform well in all three steps. Especially in the second step, where the highest and lowest quartiles of ranked discretionary accruals are examined, the neural network-based model augmented with ROA as an independent variable outperforms the other models.
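The Jones (1991) model underlying all seven detection models regresses total accruals (scaled by lagged assets) on the inverse of lagged assets, the change in revenues, and property, plant and equipment, with no intercept, and treats the residuals as discretionary accruals. A minimal sketch on synthetic data (not the thesis's data or its neural-network extensions) might be:

```python
import numpy as np

def jones_discretionary_accruals(ta, inv_assets, d_rev, ppe):
    """Jones (1991) model: regress total accruals on 1/lagged assets, the
    change in revenues and PPE (all scaled by lagged assets, no intercept);
    the residuals estimate discretionary accruals."""
    X = np.column_stack([inv_assets, d_rev, ppe])
    beta, *_ = np.linalg.lstsq(X, ta, rcond=None)
    return ta - X @ beta                 # residuals = discretionary part

# Synthetic firm-year data standing in for an estimation sample
rng = np.random.default_rng(2)
inv_a, drev, ppe = rng.normal(size=(3, 200))
ta = 0.02 * inv_a + 0.05 * drev - 0.04 * ppe + rng.normal(scale=0.01, size=200)
da = jones_discretionary_accruals(ta, inv_a, drev, ppe)
```

The neural-network variants in the thesis replace this linear mapping with a learned non-linear one; the residual-based logic for extracting discretionary accruals stays the same.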

Relevance: 60.00%

Abstract:

Mikael Juselius’ doctoral dissertation covers a range of significant issues in modern macroeconomics by empirically testing a number of important theoretical hypotheses. The first essay presents indirect evidence, within the framework of the cointegrated VAR model, on the elasticity of substitution between capital and labor by using Finnish manufacturing data. Instead of estimating the elasticity of substitution by using the first-order conditions, he develops a new approach that utilizes a CES production function in a model with a three-stage decision process: investment in the long run, wage bargaining in the medium run, and price and employment decisions in the short run. He estimates the elasticity of substitution to be below one. The second essay tests the restrictions implied by the core equations of the New Keynesian Model (NKM) in a vector autoregressive model (VAR) by using both Euro area and U.S. data. Both the New Keynesian Phillips curve and the aggregate demand curve are estimated and tested. The restrictions implied by the core equations of the NKM are rejected on both U.S. and Euro area data. These results are important for further research. The third essay is methodologically similar to the second, but it concentrates on Finnish macro data by adopting the theoretical framework of an open economy. Juselius’ results suggest that the open economy NKM framework is too stylized to provide an adequate explanation for Finnish inflation. The final essay provides a macroeconometric model of Finnish inflation and associated explanatory variables, and it estimates the relative importance of different inflation theories. His main finding is that Finnish inflation is primarily determined by excess demand in the product market and by changes in the long-term interest rate. This study is part of the research agenda carried out by the Research Unit of Economic Structure and Growth (RUESG). 
The aim of RUESG is to conduct theoretical and empirical research on important issues in industrial economics, real option theory, game theory, organization theory and the theory of financial systems, as well as to study problems in labor markets, macroeconomics, natural resources, taxation and time series econometrics. RUESG was established at the beginning of 1995 and is one of the National Centers of Excellence in research selected by the Academy of Finland. It is financed jointly by the Academy of Finland, the University of Helsinki, the Yrjö Jahnsson Foundation, the Bank of Finland and the Nokia Group. This support is gratefully acknowledged.

Relevance: 60.00%

Abstract:

Market microstructure is “the study of the trading mechanisms used for financial securities” (Hasbrouck (2007)). It seeks to understand the sources of value and reasons for trade, in a setting with different types of traders, and different private and public information sets. The actual mechanisms of trade are a continually changing object of study. These include continuous markets, auctions, limit order books, dealer markets, or combinations of these operating as a hybrid market. Microstructure also has to allow for the possibility of multiple prices. At any given time an investor may be faced with a multitude of different prices, depending on whether he or she is buying or selling, the quantity he or she wishes to trade, and the required speed for the trade. The price may also depend on the relationship that the trader has with potential counterparties. In this research, I touch upon all of the above issues. I do this by studying three specific areas, all of which have both practical and policy implications. First, I study the role of information in trading and pricing securities in markets with a heterogeneous population of traders, some of whom are informed and some not, and who trade for different private or public reasons. Second, I study the price discovery of stocks in a setting where they are simultaneously traded in more than one market. Third, I make a contribution to the ongoing discussion about market design, i.e. the question of which trading systems and ways of organizing trading are most efficient. A common characteristic throughout my thesis is the use of high frequency datasets, i.e. tick data. These datasets include all trades and quotes in a given security, rather than just the daily closing prices, as in traditional asset pricing literature. This thesis consists of four separate essays. In the first essay I study price discovery for European companies cross-listed in the United States. 
I also study explanatory variables for differences in price discovery. In my second essay I contribute to earlier research on two issues of broad interest in market microstructure: market transparency and informed trading. I examine the effects of a change to an anonymous market at the OMX Helsinki Stock Exchange. I broaden my focus slightly in the third essay, to include releases of macroeconomic data in the United States. I analyze the effect of these releases on European cross-listed stocks. The fourth and last essay examines the uses of standard methodologies of price discovery analysis in a novel way. Specifically, I study price discovery within one market, between local and foreign traders.

Relevance: 60.00%

Abstract:

A better understanding of stock price changes is important in guiding many economic activities. Since prices often do not change without good reasons, the search for related explanatory variables has attracted many researchers. This book seeks answers from prices per se by relating price changes to their conditional moments. This is based on the belief that prices are the products of a complex psychological and economic process and that their conditional moments derive ultimately from these psychological and economic shocks. Utilizing information about conditional moments hence makes this an attractive alternative to using other selective financial variables in explaining price changes. The first paper examines the relation between the conditional mean and the conditional variance using information about moments in three types of conditional distributions; it finds that the significance of the estimated mean-variance ratio can be affected by the assumed distributions and by time variation in skewness. The second paper decomposes conditional industry volatility into a concurrent market component and an industry-specific component; it finds that market volatility is on average responsible for a rather small share of total industry volatility — 6 to 9 percent in the UK and 2 to 3 percent in Germany. The third paper looks at the heteroskedasticity in stock returns through an ARCH process supplemented with a set of conditioning information variables; it finds that stock returns allow for several forms of heteroskedasticity, including deterministic changes in variances due to seasonal factors, random adjustments in variances due to market and macro factors, and ARCH processes with past information. 
The fourth paper examines the role of higher moments — especially skewness and kurtosis — in determining expected returns; it finds that total skewness and total kurtosis are more relevant non-beta risk measures and that they are costly to diversify away, owing either to the possible elimination of their desirable parts or to the unsustainability of diversification strategies based on them.
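The ARCH process mentioned in the third paper makes today's conditional variance a function of yesterday's squared return. A minimal ARCH(1) sketch (synthetic parameters, not the book's estimates) shows both the recursion and a simulated return series driven by it:

```python
import numpy as np

def arch1_variance(returns, a0, a1):
    """ARCH(1) conditional-variance recursion: sigma2_t = a0 + a1 * r_{t-1}^2,
    started at the unconditional variance a0 / (1 - a1) (requires a1 < 1)."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = a0 / (1.0 - a1)
    for t in range(1, len(returns)):
        sigma2[t] = a0 + a1 * returns[t - 1] ** 2
    return sigma2

# Simulate an ARCH(1) return series with the same parameters
rng = np.random.default_rng(3)
a0, a1 = 0.1, 0.5
r = np.empty(1000)
s2 = a0 / (1.0 - a1)
for t in range(1000):
    r[t] = np.sqrt(s2) * rng.standard_normal()   # conditionally normal shock
    s2 = a0 + a1 * r[t] ** 2                     # update conditional variance
sigma2 = arch1_variance(r, a0, a1)
```

The conditioning information variables described in the paper would enter as extra terms in the variance equation; the recursion itself is unchanged.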

Relevance: 60.00%

Abstract:

The amount and location of dead wood are of interest not only for habitat diversity but also for the storage of atmospheric carbon. The aim of this study was to develop an area-based model utilizing laser scanning data for locating dead-wood sites and estimating the amount of dead wood. At the same time, the change in the model's explanatory power was studied as the size of the modelled grid cell was increased. The study area was located in Sonkajärvi, eastern Finland, and consisted mainly of young, managed commercial forests. The study used low-pulse-density laser scanning data and field data on dead wood measured along strips. The data were divided so that one quarter was used for modelling and the rest was reserved for testing the finished models. Both parametric and non-parametric modelling methods were used to model dead wood. Logistic regression was used to predict the probability of dead-wood occurrence for grid cells of different sizes (0.04, 0.20, 0.32, 0.52 and 1.00 ha). The explanatory variables of the models were selected from among 80 laser features and their transformations, in three stages. First, the variables were examined visually by plotting them against the amount of dead wood. In the second stage, the explanatory power of the variables judged most suitable in the first stage was tested with single-variable models. In the final multivariable model, the criterion for including explanatory variables was statistical significance at the 5% risk level. The model created for the 0.20-hectare cell size was parameterized for the other cell sizes. In addition to the parametric modelling carried out with logistic regression, the data for the 0.04 and 1.0-hectare cell sizes were classified using non-parametric CART (Classification and Regression Trees) modelling. The CART method was used to search the data for hard-to-detect non-linear dependencies between the laser features and the amount of dead wood. 
The CART classification was performed both for dead-wood presence and for dead-wood volume. The CART classification achieved better results than logistic regression in classifying cells by dead-wood presence. The classification made with the logistic model improved as the cell size increased from 0.04 ha (kappa 0.19) up to 0.32 ha (kappa 0.38). At the 0.52-ha cell size the kappa value of the classification turned downward (kappa 0.32) and continued to fall up to the one-hectare cell size (kappa 0.26). The CART classification improved as the cell size grew, and its results were better than those of logistic modelling at both the 0.04-ha (kappa 0.24) and 1.0-ha (kappa 0.52) cell sizes. The relative RMSE of the cell-level dead-wood volumes determined with the CART models decreased as the cell size grew: at the 0.04-hectare cell size the relative RMSE of the dead-wood amount for the whole data set was 197.1%, whereas at the one-hectare cell size the corresponding figure was 120.3%. Based on the results of this study, the relationship between field-measured dead-wood amounts and the laser features used here is very weak at small cell sizes but strengthens somewhat as the cell size grows. As the cell size used in modelling grows, however, detecting small dead-wood concentrations becomes more difficult. In this study the dead-wood status of a site could be mapped reasonably well with a large cell size, but mapping small sites did not succeed with the methods used. Locating small sites by laser scanning requires further research, particularly on the use of high-pulse-density laser data in dead-wood inventories.
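CART builds its tree by repeatedly choosing the split that most reduces node impurity. As a generic illustration of one such step (invented feature values, not the study's actual laser features or data), here is a single Gini-minimizing split on a hypothetical laser feature against a binary dead-wood-presence label:

```python
import numpy as np

def best_split(x, y):
    """One CART step: exhaustively search for the threshold on feature x that
    minimizes the weighted Gini impurity of the binary label y."""
    def gini(labels):
        if len(labels) == 0:
            return 0.0
        p = labels.mean()                 # share of positives in the node
        return 2.0 * p * (1.0 - p)
    best_t, best_g = None, np.inf
    for t in np.unique(x):
        left, right = y[x <= t], y[x > t]
        g = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if g < best_g:
            best_t, best_g = t, g
    return best_t, best_g

# Hypothetical feature: low values with no dead wood, high values with dead wood
x = np.array([1.0, 2.0, 3.0, 4.0, 10.0, 11.0, 12.0, 13.0])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
t, g = best_split(x, y)
```

A full CART run applies this search recursively to each resulting node, over all candidate features, which is what lets it capture the non-linear dependencies mentioned above.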

Relevance: 60.00%

Abstract:

The aim of this thesis is to examine the potential demand for meat from Finnish indigenous cattle. A specialty market for indigenous-cattle meat could help keep the endangered native cattle breeds in production use, and such a market could thereby help preserve valuable Finnish animal genetic resources. Because the profitability of producing indigenous-cattle meat depends on the price premium obtained for the meat, the study also examines consumers' willingness to pay for indigenous-cattle meat compared with conventional meat. The research data were collected in spring 2010 with a survey designed by MTT Agrifood Research Finland and the National Consumer Research Centre. The study used contingent behaviour and contingent valuation methods, and the sample size is 1,623. Consumers' willingness to buy and the factors affecting it were studied with both binary and ordinal regression models. Consumers' willingness to pay for indigenous-cattle meat and the factors affecting it were studied with a grouped-data model. In addition to socioeconomic variables, the models used variables describing consumers' attitudes and behaviour as explanatory variables. According to the results, as many as 86% of respondents would buy indigenous-cattle meat if it were available in shops. Willingness to buy increases, among other things, if the respondent has children under 18 and values locally produced food and environmental friendliness. Men would be more likely to buy indigenous-cattle meat than women. Most respondents would buy indigenous-cattle meat if it were the same price as conventional meat, but about a quarter (23.5%) of respondents would be willing to pay a higher price for indigenous-cattle meat than for conventional meat. Willingness to pay was positively affected by, among other things, membership of an environmental organization and a high income level. 
A negative effect was found, for example, for the respondent being a woman. The average willingness to pay for indigenous-cattle meat was 6.25% higher than for conventional meat. Willingness to pay for indigenous-cattle meat was clearly related to how often the respondent would be willing to buy it, and it was highest among those respondents who would buy the meat regularly.

Relevance: 60.00%

Abstract:

Periglacial processes act in cold, non-glacial regions where landscape development is mainly controlled by frost activity. About 25 percent of the Earth's surface can be considered periglacial. Geographical information systems combined with advanced statistical modelling methods provide an efficient tool and a new theoretical perspective for the study of cold environments. The aims of this study were to: 1) model and predict the abundance of periglacial phenomena in a subarctic environment with statistical modelling, 2) investigate the most important factors affecting the occurrence of these phenomena with hierarchical partitioning, 3) compare two widely used statistical modelling methods, Generalized Linear Models and Generalized Additive Models, 4) study the effect of modelling resolution on prediction, and 5) study how a spatially continuous prediction can be obtained from point data. The observational data of this study consist of 369 points that were collected during the summers of 2009 and 2010 in the study area at Kilpisjärvi, northern Lapland. The periglacial phenomena of interest were cryoturbation, slope processes, weathering, deflation, nivation and fluvial processes. The features were modelled using Generalized Linear Models (GLM) and Generalized Additive Models (GAM) based on Poisson errors. The abundance of periglacial features was predicted from these models onto a spatial grid with a resolution of one hectare. The most important environmental factors were examined with hierarchical partitioning. The effect of modelling resolution was investigated in a small independent study area at a spatial resolution of 0.01 hectare. The models explained 45-70% of the occurrence of periglacial phenomena. When spatial variables were added to the models, the amount of explained deviance was considerably higher, which signalled a geographical trend structure. The ability of the models to predict periglacial phenomena was assessed with independent evaluation data. 
Spearman's correlation between the observed and predicted values varied between 0.258 and 0.754. Based on explained deviance and the results of hierarchical partitioning, the most important environmental variables were mean altitude, vegetation and mean slope angle. The effect of modelling resolution was clear: too coarse a resolution caused a loss of information, while a finer resolution brought out more localized variation. The ability of the models to explain and predict periglacial phenomena in the study area was mostly good and moderate, respectively. Differences between the modelling methods were small, although the explained deviance was higher with the GLMs than with the GAMs; in turn, the GAMs produced more realistic spatial predictions. The single most important environmental variable controlling the occurrence of periglacial phenomena was mean altitude, which had strong correlations with many other explanatory variables. Ongoing global warming will have a great impact especially on cold environments at high latitudes, and for this reason an important research topic in the near future will be the response of periglacial environments to a warming climate.
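A Poisson-error GLM of the kind used here models abundance counts through a log link. A bare-bones Newton-Raphson (IRLS) fit on synthetic data — with invented covariates `alt` and `slope` standing in for the study's variables — might look like this:

```python
import numpy as np

def fit_poisson_glm(y, X, iters=25):
    """Poisson GLM with a log link, fitted by Newton-Raphson (equivalently
    iteratively reweighted least squares): E[y] = exp(X @ b)."""
    X = np.column_stack([np.ones(len(y)), X])      # intercept
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ b)
        grad = X.T @ (y - mu)                      # score
        hess = X.T @ (X * mu[:, None])             # Fisher information
        b += np.linalg.solve(hess, grad)
    return b

# Hypothetical cell-level landform counts driven by altitude and slope
rng = np.random.default_rng(4)
alt, slope = rng.normal(size=(2, 500))
counts = rng.poisson(np.exp(0.2 + 0.5 * alt - 0.3 * slope)).astype(float)
b = fit_poisson_glm(counts, np.column_stack([alt, slope]))
```

A GAM replaces the linear terms `b1*alt + b2*slope` with smooth functions of the covariates, which is why it can produce the more flexible spatial predictions noted above.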

Relevance: 20.00%

Abstract:

Juvenile idiopathic arthritis (JIA) is a severe childhood disease usually characterized by long-term morbidity, an unpredictable course, pain, and limitations in daily activities and social participation. The disease affects not only the child but the whole family, which is expected to adhere to an often very laborious regimen over a long period of time. However, the parental role is incoherently conceptualized in the research field. Pain in JIA is of somatic origin, but psychosocial factors, such as mood and self-efficacy, are critical in the perception of pain and in its impact on functioning. This study examined the factors correlating with and possibly explaining pain in JIA, with a special emphasis on the mutual relations between parent- and patient-driven variables. In this patient series pain was not associated with disease activity, and the degree of pain was on average fairly low in children with JIA. When the children were clustered according to age, anxiety and depression, four distinguishable cluster groups significantly associated with pain emerged. One of the groups was described by the concept of vulnerability because of unfavorable variable associations. Parental depressive and anxiety symptoms, together with illness management, had predictive power in discriminating groups of children with varying distress levels. The parent's and child's perceptions of the child's functional capability, distress, and somatic self-efficacy had independent explanatory power in predicting the child's pain. Of special interest in the current study was self-efficacy, which refers to an individual's belief that he or she has the ability to engage in the behavior required for tackling the disease. In children with JIA, strong self-efficacy was related to lower levels of pain, depressive symptoms and trait anxiety. This suggests strengthening a child's sense of self-efficacy when helping the child to cope with his or her disease. 
Pain experienced by a child with JIA needs to be viewed in a multidimensional bio-psycho-social context that covers biological, environmental and cognitive behavioral mechanisms. The relations between the parent-child variables are complex and affect pain both directly and indirectly. Developing pain-treatment modalities that recognize the family as a system is also warranted.
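The clustering step mentioned above (grouping children by age, anxiety, and depression into four groups) can be sketched with a minimal k-means in plain Python. This is purely illustrative: the thesis does not state its clustering algorithm or data, so every value, variable name, and the choice of k-means here are assumptions.

```python
# Illustrative sketch only: minimal k-means over hypothetical
# (age, anxiety score, depression score) triples. Not the study's
# actual method or data.
import random

def dist2(a, b):
    # squared Euclidean distance between two points
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(points):
    # component-wise mean of a non-empty list of points
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def kmeans(points, k, iters=20, seed=1):
    rng = random.Random(seed)           # fixed seed for reproducibility
    centroids = rng.sample(points, k)   # initial centroids drawn from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda c: dist2(p, centroids[c]))
            clusters[idx].append(p)
        # recompute centroids; keep the old one if a cluster emptied out
        centroids = [mean(cl) if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids, clusters

# hypothetical children: (age, anxiety, depression)
children = [(10, 2, 1), (10, 3, 2), (12, 8, 7), (11, 9, 8),
            (12, 1, 1), (11, 2, 2), (10, 7, 9), (12, 8, 8)]
centroids, clusters = kmeans(children, k=2)
```

In a real analysis the scores would be standardized first, since age and questionnaire scores live on different scales, and k would be chosen by a validity criterion rather than fixed in advance.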

Relevance:

20.00%

Publisher:

Abstract:

Background: Patients may need massive volume-replacement therapy after cardiac surgery because of large perioperative fluid shifts and the use of cardiopulmonary bypass. Hemodynamic stability is better maintained with colloids than with crystalloids, but colloids have more adverse effects, such as coagulation disturbances and impairment of renal function. The present study examined the effects of modern hydroxyethyl starch (HES) and gelatin solutions on blood coagulation and hemodynamics. The mechanism by which colloids disturb blood coagulation was investigated by thromboelastometry (TEM) after cardiac surgery and in vitro with experimental hemodilution. Materials and methods: Ninety patients scheduled for elective primary cardiac surgery (Studies I, II, IV, V) and twelve healthy volunteers (Study III) were included. After admission to the cardiac surgical intensive care unit (ICU), patients were randomized to receive different doses of HES 130/0.4, HES 200/0.5, or 4% albumin solutions; Ringer's acetate or albumin solutions served as controls. Coagulation was assessed by TEM, and hemodynamic measurements were based on cardiac index (CI) measured by thermodilution. Results: HES and gelatin solutions impaired whole-blood coagulation similarly, as measured by TEM, even at a small dose of 7 mL/kg: they reduced clot strength and prolonged clot-formation time, and these effects became more pronounced with increasing doses of colloid. Neither albumin nor Ringer's acetate disturbed blood coagulation significantly. The coagulation disturbances after infusion of HES or gelatin were clinically slight, and postoperative blood loss was comparable with that after Ringer's acetate or albumin. Both single and multiple doses of all the colloids increased CI postoperatively in a dose-dependent manner, whereas Ringer's acetate had no effect on CI.
At a small dose (7 mL/kg), the effect of gelatin on CI was comparable with that of Ringer's acetate and significantly less than that of HES 130/0.4 (Study V). However, when the dose was increased to 14 and 21 mL/kg, the hemodynamic effect of gelatin increased and became comparable with that of HES 130/0.4. Conclusions: After cardiac surgery, HES and gelatin solutions impaired clot strength in a dose-dependent manner; the likely mechanisms were interference with fibrinogen and fibrin formation, resulting in decreased clot strength, together with hemodilution. Although HES and gelatin inhibited coagulation, bleeding by the first postoperative morning was similar in all study groups. A single dose of HES improved CI postoperatively more than did gelatin, albumin, or Ringer's acetate; however, with repeated administration (a cumulative dose of 14 mL/kg or more), no differences were evident between HES 130/0.4 and gelatin.
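Thermodilution CI measurement, mentioned in the methods, rests on the Stewart-Hamilton principle: cardiac output is inversely proportional to the area under the blood-temperature dilution curve after a cold injectate, and cardiac index is output normalized to body surface area. The sketch below is schematic, not a monitor's actual algorithm; the constant k and all numbers are placeholders.

```python
# Schematic sketch of the Stewart-Hamilton thermodilution principle.
# All numbers and the lumped constant k are illustrative placeholders.

def cardiac_output(v_injectate, t_blood, t_injectate, temp_curve, dt, k=1.0):
    """CO proportional to V_i * (T_blood - T_injectate) * k divided by the
    area under the temperature-change curve (rectangle-rule integral)."""
    area = sum(temp_curve) * dt  # integral of the dilution curve over time
    return v_injectate * (t_blood - t_injectate) * k / area

def cardiac_index(co_l_min, bsa_m2):
    # cardiac index = cardiac output normalized to body surface area
    return co_l_min / bsa_m2

# hypothetical injection: 10 mL injectate, blood 37.0 C, injectate 20.0 C,
# a flat toy dilution curve sampled once per second
co = cardiac_output(10.0, 37.0, 20.0, [0.5] * 12, 1.0)
ci = cardiac_index(co, 1.8)
```

The key behavior to note is the inverse relation: a larger dilution-curve area (slower washout) means lower computed output, which is why a doubled area halves the CO estimate.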

Relevance:

20.00%

Publisher:

Abstract:

The aim of the studies was to improve the diagnostic capability of electrocardiography (ECG) in detecting myocardial ischemic injury, with the future goal of an automatic screening and monitoring method for ischemic heart disease. The method of choice was body surface potential mapping (BSPM), which uses numerous leads, with the intention of finding the optimal recording sites and optimal ECG variables for diagnosing ischemia and myocardial infarction (MI). The studies included 144 patients with prior MI, 79 patients with evolving ischemia, 42 patients with left ventricular hypertrophy (LVH), and 84 healthy controls. Study I examined the depolarization wave in prior MI with respect to MI location. Studies II-V examined the depolarization and repolarization waves in prior-MI detection with respect to the Minnesota code and Q-wave status, and Study V also with respect to MI location. Study VI examined the depolarization and repolarization variables in 79 patients with evolving myocardial ischemia and ischemic injury. When analyzed from a single lead at any recording site, the repolarization variables proved superior to the depolarization variables and to the conventional 12-lead ECG methods, both in the detection of prior MI and of evolving ischemic injury. The QT integral, covering both depolarization and repolarization, appeared insensitive to Q-wave status, the time elapsed since MI, and the location of the MI or ischemia; in evolving ischemic injury, its performance was not hampered even by underlying LVH. The examined depolarization and repolarization variables were effective when recorded at a single site, in contrast to the conventional 12-lead ECG criteria. The inverse spatial correlation of the depolarization and repolarization waves in myocardial ischemia and injury could thus be reduced to the QT integral variable recorded at a single site on the left flank.
In conclusion, the QT integral, detectable in a single lead with an optimal recording site on the left flank, detected prior MI and evolving ischemic injury more effectively than the conventional ECG markers. Recorded from a single lead or a small number of leads, the QT integral offers potential for automated screening for ischemic heart disease, acute ischemia monitoring, guidance of therapy, and risk stratification.
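The QT integral described above is, in essence, the time integral of the ECG voltage from QRS onset to the end of the T wave, so it summarizes depolarization and repolarization in one number. A minimal numerical sketch follows; the signal, sampling rate, and fiducial indices are all hypothetical, and the thesis's exact computation may differ.

```python
# Illustrative sketch: "QT integral" as the trapezoidal-rule integral of
# the ECG voltage from QRS onset to T-wave end. Signal and fiducial
# indices below are hypothetical.

def qt_integral(signal_mv, fs_hz, qrs_onset, t_end):
    """Integrate signal_mv[qrs_onset .. t_end] over time; result in mV*s."""
    seg = signal_mv[qrs_onset:t_end + 1]
    dt = 1.0 / fs_hz  # seconds per sample
    return sum((a + b) / 2.0 * dt for a, b in zip(seg, seg[1:]))

# toy check: constant 1 mV across 4 samples at 4 Hz
# -> 3 intervals * 0.25 s * 1 mV = 0.75 mV*s
area = qt_integral([1.0, 1.0, 1.0, 1.0], 4.0, 0, 3)
```

In practice the hard part is not the integration but the fiducial-point detection (QRS onset and T-wave end), which real ECG pipelines handle with dedicated delineation algorithms.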