919 results for Linear network analysis
Abstract:
Increased renal resistive index (RRI) has recently been associated with target organ damage and cardiovascular or renal outcomes in patients with hypertension and diabetes mellitus. However, reference values in the general population and information on familial aggregation are largely lacking. We determined the distribution of RRI, associated factors, and heritability in a population-based study. Families of European ancestry were randomly selected in 3 Swiss cities. Anthropometric parameters and cardiovascular risk factors were assessed. A renal Doppler ultrasound was performed, and RRI was measured in 3 segmental arteries of both kidneys. We used multilevel linear regression analysis to explore the factors associated with RRI, adjusting for center and family relationships. Sex-specific reference values for RRI were generated according to age. Heritability was estimated by variance components using the ASSOC program (SAGE software). Four hundred women (mean age±SD, 44.9±16.7 years) and 326 men (42.1±16.8 years) with normal renal ultrasound had mean RRI of 0.64±0.05 and 0.62±0.05, respectively (P<0.001). In multivariable analyses, RRI was positively associated with female sex, age, systolic blood pressure, and body mass index. We observed an inverse correlation with diastolic blood pressure and heart rate. Age had a nonlinear association with RRI. We found no independent association of RRI with diabetes mellitus, hypertension treatment, smoking, cholesterol levels, or estimated glomerular filtration rate. The adjusted heritability estimate was 42±8% (P<0.001). In a population-based sample with normal renal ultrasound, normal RRI values depend on sex, age, blood pressure, heart rate, and body mass index. The significant heritability of RRI suggests that genes influence this phenotype.
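Heritability here was estimated by variance components (the ASSOC program of the SAGE package). As a rough illustration of the underlying idea — not of SAGE itself, and with covariate adjustment omitted — the sketch below estimates the between-family share of phenotypic variance (the intraclass correlation) from family-clustered values; the function names and data are hypothetical.

```python
# Variance-components sketch: share of a phenotype's variance attributable to
# family membership (intraclass correlation). Illustrative only -- not the
# SAGE/ASSOC implementation; adjustment for covariates is omitted.

def variance_components(families):
    """One-way ANOVA variance components; families = list of per-family value lists."""
    k = len(families)
    n_total = sum(len(f) for f in families)
    grand_mean = sum(sum(f) for f in families) / n_total
    ss_between = sum(len(f) * (sum(f) / len(f) - grand_mean) ** 2 for f in families)
    ss_within = sum(sum((x - sum(f) / len(f)) ** 2 for x in f) for f in families)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n_total - k)
    # Effective family size, correcting for unbalanced family sizes
    n0 = (n_total - sum(len(f) ** 2 for f in families) / n_total) / (k - 1)
    var_between = max((ms_between - ms_within) / n0, 0.0)
    return var_between, ms_within

def intraclass_correlation(families):
    """Fraction of total variance explained by family membership."""
    vb, vw = variance_components(families)
    return vb / (vb + vw)
```

For sibling data, twice the sibling correlation is a crude heritability estimate; the study's 42±8% figure comes from the full covariate-adjusted variance-components model.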
Abstract:
The aim of this work was to quantify low molecular weight organic acids in the rhizosphere of plants grown in a sewage sludge-treated medium, and to assess the correlation between the release of the acids and the concentrations of trace elements in the shoots of the plants. The species used in the experiment were cultivated in sand and in sewage sludge-treated sand. Acetic, citric, lactic, and oxalic acids were identified and quantified by high-performance liquid chromatography in samples collected from a hydroponic system. The averages obtained for each treatment — concentrations of trace elements in shoots and concentrations of organic acids in the rhizosphere — were compared by Tukey's test at the 5% probability level. Linear correlation analysis was applied to assess the association between the concentrations of organic acids and of trace elements. The average composition of organic acids for all plants was 43.2, 31.1, 20.4, and 5.3% for acetic, citric, lactic, and oxalic acids, respectively. All organic acids evaluated, except citric acid, correlated closely with the concentrations of Cd, Cu, Ni, and Zn found in the shoots. There is a positive relationship between the organic acids present in the rhizosphere and trace element phytoavailability.
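Treatment averages were compared with Tukey's test at the 5% level. A minimal sketch of Tukey's HSD for equal-sized groups follows; note that the critical value q_crit is not computed here — it must be read from a studentized-range table (the 3.77 used in the example assumes 3 groups and 12 within-group degrees of freedom at alpha = 0.05), and the data are invented.

```python
# Tukey HSD sketch for equal-sized treatment groups. The critical value
# q_crit is NOT computed here: it must come from a studentized-range table
# for (k groups, k*(n-1) within-group df) at the chosen alpha.
from math import sqrt

def tukey_hsd(groups, q_crit):
    """Return [(pair, mean difference, significant?)] for all group pairs."""
    k, n = len(groups), len(groups[0])
    means = [sum(g) / n for g in groups]
    ms_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means)) / (k * (n - 1))
    hsd = q_crit * sqrt(ms_within / n)  # honestly significant difference
    return [((i, j), abs(means[i] - means[j]), abs(means[i] - means[j]) > hsd)
            for i in range(k) for j in range(i + 1, k)]
```

Usage: `tukey_hsd([g0, g1, g2], q_crit=3.77)` flags every pair whose mean difference exceeds the HSD threshold.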
Abstract:
Studies of the structural basis of protein thermostability have produced a confusing picture. Small sets of proteins have been analyzed from a variety of thermophilic species, suggesting different structural features as responsible for protein thermostability. Taking advantage of recent advances in structural genomics, we have compiled a relatively large protein structure dataset, which was constructed very carefully and selectively; that is, the dataset contains only experimentally determined structures of proteins from one specific organism, the hyperthermophilic bacterium Thermotoga maritima, and those of close homologs from mesophilic bacteria. In contrast to the conclusions of previous studies, our analyses show that oligomerization order, hydrogen bonds, and secondary structure play minor roles in adaptation to hyperthermophily in bacteria. On the other hand, the data exhibit very significant increases in the density of salt bridges and in compactness for proteins from T. maritima. The latter effect can be measured by contact order or solvent accessibility, and network analysis shows a specific increase in highly connected residues in this thermophile. These features account for changes in 96% of the protein pairs studied. Our results provide a clear picture of protein thermostability in one species, and a framework for future studies of thermal adaptation.
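One of the compactness measures mentioned is contact order. For reference, relative contact order is the average sequence separation of residue pairs in spatial contact, normalised by chain length; the minimal sketch below uses a hypothetical contact list, not data from this study.

```python
# Relative contact order sketch: mean sequence separation of contacting
# residue pairs, normalised by chain length. The contact list passed in
# is hypothetical illustration data.

def relative_contact_order(n_residues, contacts):
    """contacts: iterable of (i, j) residue-index pairs in spatial contact."""
    contacts = list(contacts)
    total_separation = sum(abs(i - j) for i, j in contacts)
    return total_separation / (n_residues * len(contacts))
```

Higher values indicate that contacts tend to join residues far apart in sequence, one way of expressing the tighter packing reported for the thermophile.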
Abstract:
We tested the performance of transcutaneous oxygen monitoring (tcPO2) and pulse oximetry (tcSaO2) in detecting hypoxia in critically ill neonatal and pediatric patients. In 54 patients (178 data sets) with a mean age of 2.4 years (range 1 to 19 years), arterial saturation (SaO2) ranged from 9.5 to 100%, and arterial oxygen tension (PaO2) from 16.4 to 128 mmHg. Linear correlation analysis of pulse oximetry vs measured SaO2 revealed an r value of 0.95 (p < 0.001) with an equation of y = 21.1 + 0.749x, while PaO2 vs tcPO2 showed a correlation coefficient of r = 0.95 (p < 0.001) with an equation of y = -1.04 + 0.876x. The mean difference between measured SaO2 and tcSaO2 was -2.74 ± 7.69% (range +14 to -29%), and the mean difference between PaO2 and tcPO2 was +7.43 ± 8.57 mmHg (range -14 to +49 mmHg). Pulse oximetry was reliable at values above 65% but was inaccurate and overestimated the arterial SaO2 at lower values. TcPO2 tended to underestimate the arterial value with increasing PaO2. Pulse oximetry had the best sensitivity-to-specificity ratio for hypoxia between 65 and 90% SaO2; for tcPO2, the best results were obtained between 35 and 55 mmHg PaO2.
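The reported calibration lines (e.g. y = 21.1 + 0.749x) and mean differences (e.g. -2.74 ± 7.69%) come from ordinary least squares and a simple bias calculation on paired measurements. A self-contained sketch with made-up paired data:

```python
# Ordinary least-squares calibration line (y = a + b*x) and mean difference
# (bias) of a test method relative to a reference. Data in the test below
# are invented, not the study's measurements.

def linear_fit(xs, ys):
    """Return (intercept, slope) of the least-squares line y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def mean_difference(reference, test):
    """Bias of the test method relative to the reference measurements."""
    return sum(t - r for r, t in zip(reference, test)) / len(reference)
```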
Abstract:
OBJECTIVE: We assessed the association between birth weight, weight change, and current blood pressure (BP) across the entire age span of childhood and adolescence in large school-based cohorts in the Seychelles, an island state in the African region. METHODS: Three cohorts were analyzed: 1004 children examined at ages 5.5 and 9.1 years, 1886 children at ages 9.1 and 12.5 years, and 1575 children at ages 12.5 and 15.5 years, respectively. Birth and 1-year anthropometric data were gathered from medical files. The outcome was BP at age 5.5, 9.1, 12.5, or 15.5 years, respectively. Conditional linear regression analysis was used to estimate the relative contribution of changes in weight (expressed as z-scores) during different age periods to BP. All analyses were adjusted for height. RESULTS: At all ages, current BP was strongly associated with current weight. Birth weight was not significantly associated with current BP. Upon adjustment for current weight, the association between birth weight and current BP tended to become negative. Conditional linear regression analyses indicated that changes in weight during successive age periods since birth contributed substantially to current BP at all ages. The strength of the association between weight change and current BP increased throughout successive age periods. CONCLUSION: Weight changes during any age period since birth have a substantial impact on BP during childhood and adolescence, with BP being more responsive to recent than to earlier weight changes.
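In conditional regression of this kind, the weight change over a period is expressed as the part of the current z-score not predicted by the earlier z-score, so successive change terms are uncorrelated with earlier size. A minimal single-predictor sketch (all z-scores hypothetical):

```python
# Conditional change sketch: the "conditional" weight change is the residual
# of the current z-score regressed on the earlier z-score. The z-scores used
# in the test are hypothetical illustration data.

def conditional_change(z_earlier, z_current):
    """Residuals of an OLS regression of z_current on z_earlier."""
    n = len(z_earlier)
    mx, my = sum(z_earlier) / n, sum(z_current) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(z_earlier, z_current))
         / sum((x - mx) ** 2 for x in z_earlier))
    a = my - b * mx
    return [y - (a + b * x) for x, y in zip(z_earlier, z_current)]
```

By construction the residuals are uncorrelated with the earlier z-score, which is what lets each age period's contribution to BP be estimated separately.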
Abstract:
Internet governance is a recent issue in global politics. However, it gradually became a major political and economic issue. It recently became even more important and now appears regularly in the news. Against this background, this research outlines the history of Internet governance from its emergence as a political issue in the 1980s to the end of the World Summit on the Information Society (WSIS) in 2005.
Rather than focusing on one or the other institution involved in Internet governance, this research analyses the emergence and historical evolution of a space of struggle affecting a growing number of different actors. This evolution is described through the analysis of the dialectical relation between elites and non-elites and through the struggle around the definition of Internet governance. The thesis explores the question of how the relations among the elites of Internet governance and between these elites and non-elites explain the emergence, the evolution, and the structuration of a relatively autonomous field of world politics centred around Internet governance. Against dominant realist and liberal perspectives, this research draws upon a cross-fertilisation of heterodox international political economy and international political sociology. This approach focuses on concepts such as field, elites and hegemony. The concept of field, as developed by Bourdieu, is increasingly used in International Relations to build a differentiated analysis of globalisation and to describe the emergence of transnational spaces of struggle and domination. Elite sociology allows for a pragmatic actor-centred analysis of the issue of power in the globalisation process. This research particularly draws on Wright Mills's concept of the power elite in order to explore the unification of different elites around shared projects. Finally, this thesis uses the Neo-Gramscian concept of hegemony in order to study both the consensual dimension of domination and the prospect of change contained in any international order. Through the analysis of the documents produced within the analysed period, and through the creation of databases of networks of actors, this research focuses on the debates that followed the commercialisation of the Internet throughout the 1990s and during the WSIS.
The first time period led to the creation of the Internet Corporation for Assigned Names and Numbers (ICANN) in 1998. This creation resulted from the consensus-building between the dominant discourses of the time. It also resulted from the coalition of interests among an emerging power elite. However, this institutionalisation of Internet governance around the ICANN excluded a number of actors and discourses that resisted this mode of governance. The WSIS became the institutional framework within which the governance system was questioned by some excluded states, scholars, NGOs and intergovernmental organisations. The confrontation between the power elite and counter-elites during the WSIS triggered a reconfiguration of the power elite as well as a re-definition of the boundaries of the field. A new hegemonic project emerged around discursive elements such as the idea of multistakeholderism and institutional elements such as the Internet Governance Forum. The relative success of the hegemonic project allowed for a certain stability within the field and an acceptance by most non-elites of the new order. It is only recently that this order began to be questioned by the emerging powers of Internet governance. This research provides three main contributions to the scientific debate. On the theoretical level, it contributes to the emergence of a dialogue between International Political Economy and International Political Sociology perspectives in order to analyse both the structural trends of the globalisation process and the located practices of actors in a given issue-area. It notably stresses the contribution of concepts such as field and power elite and their compatibility with a Neo-Gramscian framework to analyse hegemony. On the methodological level, this perspective relies on the use of mixed methods, combining qualitative content analysis with social network analysis of actors and statements. 
Finally, on the empirical level, this research provides an original perspective on Internet governance. It stresses the historical dimension of current Internet governance arrangements. It also criticises the notion of multistakeholderism and focuses instead on power dynamics and the relation between Internet governance and globalisation.
Abstract:
Molecular monitoring of BCR/ABL transcripts by real-time quantitative reverse transcription PCR (qRT-PCR) is an essential technique for clinical management of patients with BCR/ABL-positive CML and ALL. Though quantitative BCR/ABL assays are performed in hundreds of laboratories worldwide, results among these laboratories cannot be reliably compared due to heterogeneity in test methods, data analysis, reporting, and the lack of quantitative standards. Recent efforts towards standardization have been limited in scope. Aliquots of RNA were sent to clinical test centers worldwide in order to evaluate methods and reporting for e1a2, b2a2, and b3a2 transcript levels using their own qRT-PCR assays. Total RNA was isolated from tissue culture cells that expressed each of the different BCR/ABL transcripts. Serial log dilutions, ranging from 10^0 to 10^-5, were prepared in RNA isolated from HL60 cells. Laboratories performed 5 independent qRT-PCR reactions for each sample type at each dilution. In addition, 15 qRT-PCR reactions of the 10^-3 b3a2 RNA dilution were run to assess reproducibility within and between laboratories. Participants were asked to run the samples following their standard protocols and to report cycle threshold (Ct), quantitative values for BCR/ABL and housekeeping genes, and ratios of BCR/ABL to housekeeping genes for each sample RNA. Thirty-seven (n=37) participants submitted qRT-PCR results for analysis (36, 37, and 34 labs generated data for b2a2, b3a2, and e1a2, respectively). The limit of detection for this study was defined as the lowest dilution at which a Ct value could be detected for all 5 replicates. For b2a2, 15, 16, 4, and 1 lab(s) showed a limit of detection at the 10^-5, 10^-4, 10^-3, and 10^-2 dilutions, respectively. For b3a2, 20, 13, and 4 labs showed a limit of detection at the 10^-5, 10^-4, and 10^-3 dilutions, respectively. For e1a2, 10, 21, 2, and 1 lab(s) showed a limit of detection at the 10^-5, 10^-4, 10^-3, and 10^-2 dilutions, respectively.
Log %BCR/ABL ratio values provided a method for comparing results between the different laboratories for each BCR/ABL dilution series. Linear regression analysis revealed concordance among the majority of participant data over the 10^-1 to 10^-4 dilutions. The overall slope values showed comparable results among the majority of b2a2 (mean=0.939; median=0.9627; range 0.399-1.1872), b3a2 (mean=0.925; median=0.922; range 0.625-1.140), and e1a2 (mean=0.897; median=0.909; range 0.5174-1.138) laboratory results (Figs. 1-3). Thirty-four (n=34) out of the 37 laboratories reported Ct values for all 15 replicates, and only those with a complete data set were included in the inter-lab calculations. Eleven laboratories either did not report their copy number data or used other reporting units such as nanograms or cell numbers; therefore, only 26 laboratories were included in the overall analysis of copy numbers. The median copy number was 348.4, with a range from 15.6 to 547,000 copies (approximately a 4.5 log difference); the median intra-lab %CV was 19.2%, with a range from 4.2% to 82.6%. While our international performance evaluation using serially diluted RNA samples has reinforced the fact that heterogeneity exists among clinical laboratories, it has also demonstrated that performance within a laboratory is overall very consistent. Accordingly, the availability of defined BCR/ABL RNAs may facilitate the validation of all phases of quantitative BCR/ABL analysis and may be extremely useful as a tool for monitoring assay performance. Ongoing analyses of these materials, along with the development of additional control materials, may solidify consensus around their application in routine laboratory testing and possible integration in worldwide efforts to standardize quantitative BCR/ABL testing.
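Two of the quantities used in this evaluation are straightforward to compute from a lab's raw output: the slope of log %BCR/ABL ratio against log dilution (near 1.0 for a linear assay) and the limit of detection (the lowest dilution at which every replicate yields a Ct value). A sketch with idealised, made-up data:

```python
# Sketch: assay linearity slope over a log dilution series, and limit of
# detection defined as the lowest dilution with all replicates detected.
# Dilutions, ratios, and Ct values in the test are idealised illustrations.
from math import log10

def log_linearity_slope(dilutions, ratios):
    """Least-squares slope of log10(ratio) vs log10(dilution); ~1 if linear."""
    xs, ys = [log10(d) for d in dilutions], [log10(r) for r in ratios]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def limit_of_detection(ct_by_dilution):
    """Lowest dilution whose replicates all produced a Ct (None = not detected)."""
    detected = [d for d, cts in ct_by_dilution.items()
                if all(c is not None for c in cts)]
    return min(detected) if detected else None
```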
Abstract:
BACKGROUND AND PURPOSE: Knowledge of cerebral blood flow (CBF) alterations in cases of acute stroke could be valuable in the early management of these cases. Among imaging techniques affording evaluation of cerebral perfusion, perfusion CT studies involve sequential acquisition of cerebral CT sections obtained in an axial mode during the IV administration of iodinated contrast material. They are thus very easy to perform in emergency settings. Perfusion CT values of CBF have proved to be accurate in animals, and perfusion CT affords plausible values in humans. The purpose of this study was to validate perfusion CT studies of CBF by comparison with the results provided by stable xenon CT, which have been reported to be accurate, and to evaluate acquisition and processing modalities of CT data, notably the possible deconvolution methods and the selection of the reference artery. METHODS: Twelve stable xenon CT and perfusion CT cerebral examinations were performed within an interval of a few minutes in patients with various cerebrovascular diseases. CBF maps were obtained from perfusion CT data by deconvolution using singular value decomposition and least mean square methods. The CBF values were compared with the stable xenon CT results in multiple regions of interest through linear regression analysis and bilateral t tests for matched variables. RESULTS: Linear regression analysis showed good correlation between perfusion CT and stable xenon CT CBF values (singular value decomposition method: R² = 0.79, slope = 0.87; least mean square method: R² = 0.67, slope = 0.83). Bilateral t tests for matched variables did not identify a significant difference between the two imaging methods (P > .1). Both deconvolution methods were equivalent (P > .1). The choice of the reference artery is a major concern and has a strong influence on the final perfusion CT CBF map.
CONCLUSION: Perfusion CT studies of CBF achieved with adequate acquisition parameters and processing lead to accurate and reliable results.
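The deconvolution step recovers the flow-scaled residue function from the tissue and arterial enhancement curves. The sketch below implements a simple truncated-SVD formulation with a lower-triangular convolution matrix; the 20% singular-value threshold used as the default is a common choice in the perfusion literature, not necessarily the one used in this study.

```python
# Truncated-SVD deconvolution sketch for perfusion CT: recover the flow-scaled
# residue function from tissue and arterial input (AIF) curves; CBF is
# proportional to the maximum of the recovered curve.
import numpy as np

def svd_deconvolve(aif, tissue, dt=1.0, threshold=0.2):
    n = len(aif)
    # Lower-triangular convolution matrix A so that tissue = A @ (F * R)
    A = np.array([[aif[i - j] * dt if j <= i else 0.0 for j in range(n)]
                  for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    # Zero out small singular values to stabilise the inversion
    s_inv = np.where(s > threshold * s.max(), 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ np.asarray(tissue)))
```

With a negligible threshold and noise-free curves this reduces to an exact inverse, which is what the test below checks.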
Abstract:
The aim of this work was to study how gas bubbles are distributed in a pulp suspension when process conditions are varied. Bubble size distributions were used to map how the gas bubbles break up and whether there is a threshold beyond which additional power input no longer breaks the bubbles in the pulp suspension into smaller ones; the distributions may eventually help in developing gas removal. The work investigated whether camera-based imaging can be used to determine bubble sizes in pulp stock. Opaque pulp provides a challenging environment for imaging, and no comparable method had previously been reported in the literature. Bubble diameters were calculated from the recorded material and examined statistically, and the statistical analysis revealed differences between measurement points. Based on the bubble diameters, the process variables affecting bubble size were modelled with linear regression analysis, which yielded the independent variables affecting the responses and their mathematical model equations. The results showed that bubble size distributions differ between different sides of the mixing tank. In the mixing tank, the relative share of large bubbles grows as gas content and consistency increase. The most important modelling result is that consistency and gas volume increase bubble size, whereas increasing the rotational speed decreases it. Visual information makes it easier to understand how the bubbles behave.
Abstract:
The aim of the present study is to determine the level of correlation between the 3-dimensional (3D) characteristics of trabecular bone microarchitecture, as evaluated using microcomputed tomography (μCT) reconstruction, and the trabecular bone score (TBS), as evaluated using 2D projection images directly derived from the 3D μCT reconstruction (TBSμCT). Moreover, we have evaluated the effects of image degradation (resolution and noise) and of the X-ray energy of projection on these correlations. Thirty human cadaveric vertebrae were acquired on a microscanner at an isotropic resolution of 93 μm. The 3D microarchitecture parameters were obtained using MicroView (GE Healthcare, Wauwatosa, MI). The 2D projections of these 3D models were generated using the Beer-Lambert law at different X-ray energies. Degradation of image resolution was simulated (from 93 to 1488 μm). Relationships between 3D microarchitecture parameters and TBSμCT at different resolutions were evaluated using linear regression analysis. Significant correlations were observed between TBSμCT and 3D microarchitecture parameters, regardless of the resolution. Correlations were strongly to intermediately positive for connectivity density (0.711 ≤ r² ≤ 0.752) and trabecular number (0.584 ≤ r² ≤ 0.648), and negative for trabecular separation (0.407 ≤ r² ≤ 0.491), up to a pixel size of 1023 μm. In addition, TBSμCT values were strongly correlated with each other (0.77 ≤ r² ≤ 0.96). Study results show that the correlations between TBSμCT at 93 μm and 3D microarchitecture parameters are weakly impacted by the degradation of image resolution and the presence of noise.
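The 2D projections were generated from the 3D reconstruction with the Beer-Lambert law, i.e. each projected pixel is I0·exp(-∫μ dz) along the projection axis. A minimal sketch (attenuation values and geometry are hypothetical, and the energy dependence of μ is not modelled):

```python
# Beer-Lambert projection sketch: collapse a 3D linear-attenuation volume
# into a 2D radiographic projection along axis 0. Values are hypothetical;
# the energy dependence of the attenuation coefficient is omitted.
import numpy as np

def project_beer_lambert(mu_volume, dz, i0=1.0):
    """I = I0 * exp(-sum(mu * dz)) along the projection axis (axis 0)."""
    return i0 * np.exp(-mu_volume.sum(axis=0) * dz)
```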
Abstract:
We assessed the association between several cardiometabolic risk factors (CRFs) (blood pressure, LDL-cholesterol, HDL-cholesterol, triglycerides, uric acid, and glucose) in 390 young adults aged 19-20 years in Seychelles (Indian Ocean, Africa) and body mass index (BMI) measured either at the same time (cross-sectional analysis) or at the age of 12-15 years (longitudinal analysis). BMI tracked markedly between age of 12-15 and age of 19-20. BMI was strongly associated with all considered CRFs in both cross-sectional and longitudinal analyses, with some exceptions. Comparing overweight participants with those having a BMI below the age-specific median, the odds ratios for high blood pressure were 5.4/4.7 (male/female) cross-sectionally and 2.5/3.9 longitudinally (P < 0.05). Significant associations were also found for most other CRFs, with some exceptions. In linear regression analysis including both BMI at age of 12-15 and BMI at age of 19-20, only BMI at age of 19-20 remained significantly associated with most CRFs. We conclude that CRFs are predicted strongly by either current or past BMI levels in adolescents and young adults in this population. The observation that only current BMI remained associated with CRFs when including past and current levels together suggests that weight control at a later age may be effective in reducing CRFs in overweight children irrespective of past weight status.
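The reported odds ratios (e.g. 5.4/4.7 for high blood pressure) come from regression models; for intuition, the unadjusted odds ratio from a 2×2 exposure-outcome table is simply the cross-product ratio. A sketch with invented counts:

```python
# Unadjusted odds ratio from a 2x2 table. The counts in the test are invented
# for illustration; the abstract's odds ratios come from adjusted models.

def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Cross-product ratio (a*d)/(b*c) of a 2x2 exposure-outcome table."""
    return (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)
```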
Abstract:
The aim of this Master's thesis is to operationalize the process-like nature of R&D collaboration, that is, to analyse the formation of and the motives for R&D collaboration relationships. The hypotheses were formed by analysing the learning goals of acquiring new knowledge and exploiting existing knowledge, which derive from the firm's technology strategy. Motivation for R&D collaboration arises from the opportunity to share the risks of R&D projects. The motives for R&D collaboration were analysed by assessing the sources of transaction and bureaucracy benefits, which are based on economies of scale and synergy. The hypotheses were tested on a sample of 276 Finnish industrial firms that had engaged in R&D activity. The sample is based on a survey conducted at the business school of Lappeenranta University of Technology in 2004. The hypotheses were tested with statistical methods: linear regression analysis and paired and independent samples t-tests. The likelihood of validity and multicollinearity problems was taken into account. The hypotheses were partially confirmed. Technological uncertainty and complexity have no direct effect on the intensity of R&D collaboration, while technological uncertainty has a partial effect on the choice of technology strategy. A firm's transaction and bureaucracy benefits depend on its technological capabilities, and only firms in high-technology industries also gain benefits from intensive coordination of R&D collaboration relationships. Differences related to technology intensity stem from the nature of technological knowledge in the industry. Cost minimisation according to transaction cost theory and strategizing according to competence-based theories complementarily explain the formation of R&D collaboration relationships and the determination of firm boundaries.
Abstract:
The aim of this thesis is to identify, using linear regression analysis on panel data, the factors affecting the capital structures of Finnish listed companies in 1999-2004, and, on the basis of these factors, to infer which capital structure theory or theories the companies follow. Capital structure theories can be divided into two classes according to whether or not they posit an optimal capital structure. The trade-off theory and the related agency theory assume an optimal capital structure: in the trade-off theory, the capital structure is chosen by weighing the benefits and costs of debt, while the agency theory is otherwise similar but additionally takes into account the agency costs of debt. The pecking order theory and the market timing theory do not assume an optimal capital structure. In the pecking order theory, financing is chosen according to a hierarchy (retained earnings, debt, mezzanine financing, equity); in the timing theory, the form of financing that is most advantageous to raise under the prevailing market conditions is chosen. According to the empirical results, leverage depends positively on risk, collateral, and intangible assets, and negatively on liquidity, stock returns, and profitability. Dividends have no effect on leverage. Among industries, industrial goods and services and basic materials show higher leverage than the other industries. The results mainly support the pecking order theory and to some extent the timing theory; the other theories receive only weak support.
Abstract:
The aim of the study is to determine whether family ownership, i.e. private ownership, is a more profitable ownership form than institutional ownership, and whether firm age and size affect the performance of family firms. Drawing on earlier research, the study first reviews the characteristics generally associated with family ownership and the performance of family firms compared with non-family firms. The empirical analysis of the effects of family ownership on firm profitability, and of the effects of firm age and size on family firm performance, is carried out with two samples of unlisted Norwegian small and medium-sized enterprises (SMEs): a random sample and a main-industry sample, in which unlisted SMEs were randomly selected from Norway's most important industries, are analysed separately using linear regression analysis. Although family firms do not appear more profitable than non-family firms in the random sample, the main-industry sample shows that among unlisted SMEs family, i.e. private, ownership is a significantly more profitable ownership form than institutional ownership. Young and small firms in particular account for the better profitability of family firms.
Abstract:
It is generally accepted that between 70 and 80% of manufacturing costs can be attributed to design. Nevertheless, it is difficult for the designer to estimate manufacturing costs accurately, especially when alternative constructions are compared at the conceptual design phase, because of the lack of cost information and appropriate tools. In general, previous reports concerning optimisation of a welded structure have used the mass of the product as the basis for the cost comparison. However, it can easily be shown using a simple example that the use of product mass as the sole manufacturing cost estimator is unsatisfactory. This study describes a method of formulating welding time models for cost calculation, and presents the results of the models for particular sections, based on typical costs in Finland. This was achieved by collecting information concerning welded products from different companies. The data included 71 different welded assemblies taken from the mechanical engineering and construction industries. The welded assemblies contained in total 1,589 welded parts, 4,257 separate welds, and a total welded length of 3,188 metres. The data were modelled for statistical calculations, and models of welding time were derived by using linear regression analysis. The models were tested by using appropriate statistical methods and were found to be accurate. General welding time models have been developed, valid for welding in Finland, as well as specific, more accurate models for particular companies. The models are presented in such a form that they can be used easily by a designer, enabling the cost calculation to be automated.
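The welding time models are linear regressions of recorded welding time on product features. A least-squares sketch follows; the regressors used below (total weld length and number of welds) are illustrative assumptions, and the thesis' actual model variables and coefficients are not reproduced.

```python
# Least-squares welding-time model sketch: t = b0 + b1*x1 + b2*x2 + ...
# The regressors and data in the test are illustrative, not the thesis' model.
import numpy as np

def fit_time_model(features, times):
    """features: (n_samples, n_regressors) array; returns [b0, b1, b2, ...]."""
    X = np.column_stack([np.ones(len(features)), features])
    coef, *_ = np.linalg.lstsq(X, np.asarray(times), rcond=None)
    return coef
```

Once fitted, such a model can be embedded in a design tool so that a cost estimate follows automatically from the geometry of a proposed welded assembly.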