976 results for three-dimensional organization model
Identification-Commitment Inventory (ICI-Model): confirmatory factor analysis and construct validity
Abstract:
The aim of this study is to confirm the factorial structure of the Identification-Commitment Inventory (ICI) developed within the frame of the Human System Audit (HSA) (Quijano et al. in Revist Psicol Soc Apl 10(2):27-61, 2000; Pap Psicól Revist Col Of Psicó 29:92-106, 2008). Commitment and identification are understood by the HSA at an individual level as part of the quality of human processes and resources in an organization, and therefore as antecedents of important organizational outcomes such as personnel turnover intentions and organizational citizenship behavior (Meyer et al. in J Org Behav 27:665-683, 2006). The theoretical integrative model underlying the ICI (Quijano et al. 2000) was tested in a sample (N = 625) of workers in a Spanish public hospital. Confirmatory factor analysis was performed through structural equation modeling. The elliptical least squares solution was chosen as the estimation procedure on account of the non-normal distribution of the variables. The results confirm the goodness of fit of an integrative model in which Commitment and Identification are related but operationally distinct constructs.
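To make the estimation step concrete, the minimal sketch below fits a toy two-factor CFA (Identification, Commitment) by minimizing an unweighted least squares discrepancy with SciPy. The indicator structure, loadings, sample simulation, and the ULS criterion (used here in place of the elliptical estimator, which has no common off-the-shelf Python implementation) are illustrative assumptions, not the study's actual specification.

```python
import numpy as np
from scipy.optimize import minimize

p = 6  # observed indicators: three hypothetical items per factor

def implied_cov(theta):
    """Model-implied covariance for a two-factor model with simple structure."""
    lam = np.zeros((p, 2))
    lam[:3, 0] = theta[:3]                               # loadings on Identification
    lam[3:, 1] = theta[3:6]                              # loadings on Commitment
    phi = np.array([[1.0, theta[6]], [theta[6], 1.0]])   # factor correlation
    psi = np.diag(theta[7:13])                           # unique variances
    return lam @ phi @ lam.T + psi

def uls_discrepancy(theta, S):
    """Unweighted least squares fit function: sum of squared covariance residuals."""
    R = S - implied_cov(theta)
    return np.sum(R ** 2)

# Simulate data under the model so the sketch runs end to end (N = 625 as in the study).
rng = np.random.default_rng(0)
true = np.concatenate([np.full(6, 0.8), [0.6], np.full(6, 0.36)])
X = rng.multivariate_normal(np.zeros(p), implied_cov(true), size=625)
S = np.cov(X, rowvar=False)

x0 = np.concatenate([np.full(6, 0.5), [0.3], np.full(6, 0.5)])
fit = minimize(uls_discrepancy, x0, args=(S,), method="L-BFGS-B")
print("estimated factor correlation:", round(fit.x[6], 3))
```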
Abstract:
Elevated serum phosphorus, calcium, and fibroblast growth factor 23 (FGF23) levels are associated with cardiovascular disease in chronic renal disease. This study evaluated the effects of sucroferric oxyhydroxide (PA21), a new iron-based phosphate binder, versus lanthanum carbonate (La) and sevelamer carbonate (Se), on serum FGF23, phosphorus, calcium, and intact parathyroid hormone (iPTH) concentrations, and the development of vascular calcification in adenine-induced chronic renal failure (CRF) rats. After induction of CRF, renal function was significantly impaired in all groups: uremic rats developed severe hyperphosphatemia, and serum iPTH increased significantly. All uremic rats (except controls) then received phosphate binders for 4 weeks. Hyperphosphatemia and increased serum iPTH were controlled to a similar extent in all phosphate binder-treatment groups. Only sucroferric oxyhydroxide was associated with significantly decreased FGF23. Vascular calcifications of the thoracic aorta were decreased by all three phosphate binders. Calcifications were better prevented at the superior part of the thoracic and abdominal aorta in the PA21-treated rats. In adenine-induced CRF rats, sucroferric oxyhydroxide was as effective as La and Se in controlling hyperphosphatemia, secondary hyperparathyroidism, and vascular calcifications. The role of FGF23 in calcification remains to be confirmed.
Abstract:
The main objective of this thesis was to study how activity-based costing (ABC) projects can be managed more effectively in the future, and to find out how activity-based management (ABM) can support the strategic management of business in an increasingly networked business environment. A further objective was to define a framework for the management models of ABC projects and to compare ABC projects that have already been carried out. Activity-based costing and process management were examined through the literature and expert interviews. The data on the ABC projects in the empirical part of the study are based on interviews with people who were closely involved in the projects. The projects of three different companies were described, and on the basis of their analysis possible follow-up measures were proposed from the perspective of business management. Finally, the thesis also considered the future of business control and the needs for further research. The comparison of the different ABC projects revealed possible project-specific areas for development, such as the consistency of information systems, flexibility, strategic learning in the future, and the allocation of resources. In addition, the implemented projects were evaluated against interactive resource planning that supports business control. Through the management models of ABC projects, the thesis created frameworks for possible follow-up projects, emphasizing a process orientation and external networking. Finally, it is argued that before starting an ABC project, the current state of the company and its information systems must be analysed. The project must not be carried out according to a single tried-and-tested approach; a successful ABC project requires combining many different implementation and thinking models. ABC applications must support both operational activities and strategic business management. Effective networking ultimately enables the management of external processes together with the organization's stakeholders.
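As a minimal illustration of the costing mechanics behind such projects (not of any of the three companies' actual systems), the sketch below allocates hypothetical activity cost pools to products through cost-driver rates; all names and figures are invented.

```python
# Hypothetical cost pools, annual driver volumes, and product driver consumption.
pools = {"machining": 120_000.0, "setups": 40_000.0, "order_handling": 25_000.0}   # EUR
driver_volume = {"machining": 8_000, "setups": 200, "order_handling": 500}         # hours, setups, orders
consumption = {
    "product_A": {"machining": 3_000, "setups": 60, "order_handling": 150},
    "product_B": {"machining": 5_000, "setups": 140, "order_handling": 350},
}

# Cost per driver unit, then activity costs traced to each product.
rates = {a: pools[a] / driver_volume[a] for a in pools}
costs = {prod: sum(rates[a] * units for a, units in use.items())
         for prod, use in consumption.items()}
for prod, cost in costs.items():
    print(f"{prod}: {cost:,.0f} EUR")
```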
Abstract:
IT Services is a service organization whose goal is to provide its customers with functioning data communication connections and information systems that meet the needs of the forest industry. To guarantee undisturbed and continuous production, the support processes are developed continuously. The problem in a large corporation is divergent practices and the resulting differences in efficiency. The study examines what the productization of IT services means and describes how the service products of the IT sector are structured, managed, and used in a forest industry company. Productization of IT services is pursued to achieve internal efficiency and higher-quality results. By productizing the services, intellectual capital can be better managed and distributed. Performance management brings transparency to the services and makes development activities more effective. The goal of the productization project is to build global product descriptions of the forest industry company's IT services, which the units can then refine and deepen to the level they wish. Building the product descriptions requires a thorough review of the operating models, identification of the required competencies, and definition of the performance expectations for the services. As a result of the IT sector's productization project, the services are divided into three service groups: information system services, data communication services, and information security services. Information system services are further described as basic IT and system service products. Likewise, data communication services are divided into data transmission and voice transmission service products. The service products, that is, the product data, are managed and maintained in an information system that has an interface to the operational system.
Abstract:
The aim of this thesis was to develop a model that can predict the heat transfer, heat release distribution, and vertical gas-phase temperature profile in the furnace of a bubbling fluidized bed (BFB) boiler. The model is based on three separate components that handle heat transfer, heat release distribution, and mass and energy balance calculations, taking into account the boiler design and operating conditions. The model was successfully validated by solving the model parameters on the basis of test-run data from a commercial-size BFB boiler and by performing parametric studies with the model. Implementing the developed model in the Foster Wheeler BFB design procedures will require validation against the existing BFB database and possibly more detailed measurements at commercial-size BFB boilers.
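A minimal sketch of the kind of zone-wise gas-side energy balance such a model component performs is shown below; every value (gas mass flow, heat-release split, wall coefficient, areas, temperatures) is a made-up placeholder, not Foster Wheeler design data.

```python
# Toy vertical gas temperature profile: the furnace is split into zones, each
# receiving a share of the heat release and losing heat to the membrane walls.
m_dot = 150.0                        # flue-gas mass flow, kg/s (hypothetical)
cp = 1.25e3                          # flue-gas specific heat, J/(kg K)
T = 850.0 + 273.15                   # bed temperature at the furnace bottom, K
h_wall = 120.0                       # effective wall heat-transfer coefficient, W/(m2 K)
A_zone = 200.0                       # wall area per zone, m2
T_wall = 620.0                       # membrane-wall (saturation) temperature, K
release = [40e6, 25e6, 10e6, 5e6]    # heat release per zone, W (hypothetical split)

profile = []
for q_release in release:
    q_wall = h_wall * A_zone * (T - T_wall)      # heat transferred to the walls in this zone
    T += (q_release - q_wall) / (m_dot * cp)     # steady-state zone energy balance
    profile.append(T)

print("vertical profile, degC:", [round(t - 273.15) for t in profile])
```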
Abstract:
The Annonaceae includes cultivated species of economic interest and represents an important source of information for better understanding the evolution of tropical rainforests. In phylogenetic analyses of DNA sequence data that are used to address evolutionary questions, it is imperative to use appropriate statistical models. Annonaceae are a case in point: two sister clades, the subfamilies Annonoideae and Malmeoideae, contain the majority of Annonaceae species diversity. The Annonoideae generally show a greater degree of sequence divergence compared to the Malmeoideae, resulting in stark differences in branch lengths in phylogenetic trees. Uncertainty in how to interpret and analyse these differences has led to inconsistent results when estimating the ages of clades in Annonaceae using molecular dating techniques. We ask whether these differences may be attributed to inappropriate modelling assumptions in the phylogenetic analyses. Specifically, we test for (clade-specific) differences in rates of non-synonymous and synonymous substitutions. A high ratio of non-synonymous to synonymous substitutions may lead to similarity of DNA sequences due to convergence instead of common ancestry, and as a result confound phylogenetic analyses. We use a dataset of three chloroplast genes (rbcL, matK, ndhF) for 129 species representative of the family. We find that differences in branch lengths between major clades are not attributable to different rates of non-synonymous and synonymous substitutions. The differences in evolutionary rate between the major clades of Annonaceae pose a challenge for current molecular dating techniques, and they should be seen as a warning for the interpretation of such results in other organisms.
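To illustrate the kind of signal being tested (not the study's actual codon-model analysis), the sketch below computes a crude proxy for synonymous versus non-synonymous divergence by comparing third-codon-position differences with first/second-position differences between two short, hypothetical aligned rbcL fragments.

```python
# Crude proxy: third-codon-position differences are mostly synonymous, while
# first/second-position differences are mostly non-synonymous.
def position_differences(seq_a: str, seq_b: str):
    diff_third = diff_other = 0
    sites_third = sites_other = 0
    for i, (a, b) in enumerate(zip(seq_a, seq_b)):
        if a in "-Nn" or b in "-Nn":          # skip gaps and ambiguous bases
            continue
        if i % 3 == 2:                         # third codon position
            sites_third += 1
            diff_third += a != b
        else:                                  # first or second codon position
            sites_other += 1
            diff_other += a != b
    return diff_third / sites_third, diff_other / sites_other

# Hypothetical aligned fragments, in reading frame.
rbcl_taxon1 = "ATGTCACCACAAACAGAGACTAAAGCA"
rbcl_taxon2 = "ATGTCGCCACAAACGGAGACCAAAGCG"
p_third, p_other = position_differences(rbcl_taxon1, rbcl_taxon2)
print(f"third-position divergence {p_third:.2f}, first/second-position divergence {p_other:.2f}")
```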
Abstract:
The performance of a hydrologic model depends on the rainfall input data, both spatially and temporally. As the spatial distribution of rainfall exerts a great influence on both runoff volumes and peak flows, the use of a distributed hydrologic model can improve the results in the case of convective rainfall in a basin where the storm area is smaller than the basin area. The aim of this study was to perform a sensitivity analysis of the rainfall time resolution on the results of a distributed hydrologic model in a flash-flood prone basin. Within such a catchment, floods are produced by heavy rainfall events with a large convective component. A second objective of this paper is to propose a methodology that improves radar rainfall estimation at higher spatial and temporal resolution. Composite radar data from a network of three C-band radars with 6-min temporal and 2 × 2 km² spatial resolution were used to feed the RIBS distributed hydrological model. A modification of the Window Probability Matching Method (a gauge-adjustment method) was applied to four cases of heavy rainfall to correct the underestimation of observed rainfall by computing new Z/R relationships for both convective and stratiform reflectivities. An advection correction technique based on the cross-correlation between two consecutive images was introduced to obtain several time resolutions from 1 min to 30 min. The RIBS hydrologic model was calibrated using a probabilistic approach based on a multiobjective methodology for each time resolution. A sensitivity analysis of rainfall time resolution was conducted to find the resolution that best represents the hydrological basin behaviour.
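For reference, a minimal sketch of converting radar reflectivity to rain rate through a Z/R power law (the quantity that the modified Window Probability Matching Method re-estimates) is given below; the coefficient pairs and the 40 dBZ convective threshold are placeholder assumptions, not the values derived in the study.

```python
import numpy as np

def rain_rate(dbz, a, b):
    """Invert Z = a * R**b for R, with reflectivity given in dBZ."""
    z = 10.0 ** (dbz / 10.0)          # dBZ -> Z in mm^6 m^-3
    return (z / a) ** (1.0 / b)       # R in mm/h

dbz_field = np.array([[32.0, 45.0],                 # toy composite pixels (2 x 2 km)
                      [50.0, 28.0]])

# Hypothetical gauge-adjusted coefficients, one pair per precipitation type.
R_convective = rain_rate(dbz_field, a=300.0, b=1.4)
R_stratiform = rain_rate(dbz_field, a=200.0, b=1.6)

convective_mask = dbz_field >= 40.0                 # crude convective/stratiform split
R = np.where(convective_mask, R_convective, R_stratiform)
print(np.round(R, 1))                               # rain rate field, mm/h
```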
Abstract:
The present PhD dissertation consists of three papers, organized in chapters, in the field of behavioral economics. This discipline studies the economic behavior of individuals subject to limitations, such as bounded self-interest and bounded willpower. The behavior studied in the present thesis ranges from the complex decision to register as an organ donor, to decision-making in the presence of uncertainty, to the decision to give money to a charitable organization. The first chapter aims at testing the effectiveness of an active-decision (AD) mechanism on the decision to become an organ donor in Switzerland, using field experiments. We found that stimulating participants' reflection on the topic of organ donation had a negative effect on the decision to become an organ donor. Moreover, a non-binding commitment nudge reduces putting off the decision, but does not lead to donation rates higher than in the control group. The results suggest that AD may be far more limited than previously thought and raise doubts about the efficacy of engaging potential donors to reflect on the topic of organ donation. Beyond caring for others, behavioral economics also recognizes that individuals do not evaluate outcomes in absolute terms but rather by comparing them to some reference levels, called reference points. Above the reference points, economic outcomes are perceived as gains, while below these levels the same outcomes are felt as losses. The last two chapters analyze the importance of reference points in the evaluation of economic outcomes. Using a laboratory experiment where subjects played two consecutive lotteries, Chapter 2 studies the speed of adjustment of the reference point. We find that varying the probability of winning the first lottery has no effect on subjects' risk behavior regarding the second lottery. This result indicates a very fast adjustment of the reference point to the latest information. Chapter 3 investigates whether reference points are relevant for charitable preferences. Using actual donation decisions of participants in a laboratory experiment, the results suggest that reference points are not crucial for shaping charitable giving.
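A compact way to see the reference-point idea discussed above is a prospect-theory-style value function; the sketch below uses the standard Kahneman-Tversky functional form with textbook parameter values, which the experiments in this thesis do not themselves estimate.

```python
def value(outcome, reference, alpha=0.88, lam=2.25):
    """Outcomes above the reference point are coded as gains; below it, as amplified losses."""
    x = outcome - reference
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# The same payoff of 5 feels like a gain or a loss depending on the reference point.
for r in (0, 10):
    print(f"reference point {r}: perceived value {value(5, r):+.2f}")
```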
Abstract:
We consider robust parametric procedures for univariate discrete distributions, focusing on the negative binomial model. The procedures are based on three steps: first, a very robust, but possibly inefficient, estimate of the model parameters is computed; second, this initial model is used to identify outliers, which are then removed from the sample; third, a corrected maximum likelihood estimator is computed with the remaining observations. The final estimate inherits the breakdown point (bdp) of the initial one and its efficiency can be significantly higher. Analogous procedures were proposed in [1], [2], [5] for the continuous case. A comparison of the asymptotic bias of various estimates under point contamination points to the minimum Neyman's chi-squared disparity estimate as a good choice for the initial step. Various minimum disparity estimators were explored by Lindsay [4], who showed that the minimum Neyman's chi-squared estimate has a 50% bdp under point contamination; in addition, it is asymptotically fully efficient at the model. However, the finite sample efficiency of this estimate under the uncontaminated negative binomial model is usually much lower than 100% and the bias can be strong. We show that its performance can then be greatly improved using the three-step procedure outlined above. In addition, we compare the final estimate with the procedure described in
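A minimal sketch of the three-step procedure for the negative binomial case is given below, using SciPy. The simulated contamination pattern, the 0.995-quantile cut-off for flagging outliers, and the starting values are illustrative choices, not those of the paper.

```python
import numpy as np
from scipy.stats import nbinom
from scipy.optimize import minimize

# Negative binomial sample (size r, success prob p) plus a clump of gross outliers.
r_true, p_true = 3.0, 0.4
x = np.concatenate([nbinom.rvs(r_true, p_true, size=200, random_state=1),
                    np.full(10, 60)])                     # point contamination at 60

def neyman_chi2(theta):
    """Neyman's chi-squared disparity between empirical and model frequencies."""
    r, p = theta
    ks, counts = np.unique(x, return_counts=True)
    d = counts / x.size                                   # observed relative frequencies
    return np.sum((d - nbinom.pmf(ks, r, p)) ** 2 / d)

def neg_loglik(theta, data):
    r, p = theta
    return -np.sum(nbinom.logpmf(data, r, p))

bounds = [(1e-3, 50.0), (1e-3, 1 - 1e-3)]
# Step 1: very robust (possibly inefficient) initial estimate.
init = minimize(neyman_chi2, x0=[2.0, 0.5], bounds=bounds).x
# Step 2: flag observations that are implausible under the initial fit.
clean = x[x <= nbinom.ppf(0.995, *init)]
# Step 3: corrected maximum likelihood on the retained observations.
final = minimize(neg_loglik, x0=init, args=(clean,), bounds=bounds).x
print("initial estimate:", np.round(init, 2), "final estimate:", np.round(final, 2))
```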
Abstract:
The aim of this thesis was to examine the knowledge of the individual and of the organization and how it is increased. The purpose was to find factors that link individual and organizational knowledge. The research methodology was an empirical descriptive study, and the research method was qualitative, based on ten thematic interviews. The result of the study was that knowledge workers increase their own knowledge and the organization's knowledge by similar means, and little difference was perceived between the two. Gaining experience at work through trial and error was reported as the most important method of increasing one's own knowledge. Books were valued especially highly for increasing knowledge, surpassing even the internet as a source of information. Work colleagues were the third important source of knowledge. The two most important prerequisites for increasing knowledge were the knowledge base acquired during studies and one's own interest and motivation. In increasing the organization's knowledge, the most important factors were documentation, learning from mistakes, internal communication, information systems, and an open organizational culture. Based on the study, a three-stage model of the knowledge-growth cycle emerged, whose elements are prerequisites, sources, and methods. At the center of the cycle is learning at work, which is the most important factor in increasing knowledge. The case company's operation as a learning organization showed that the company should invest particularly in supporting and managing learning. The documentation of knowledge and its conversion into an explicit form, as well as the company's processes, also need clarification. The openness and atmosphere of the organization proved to be good, which in turn supports the growth of knowledge.
Abstract:
The purpose of this thesis was to model an inventory management system suitable for the case company. The study began with an analysis of the current state of the case company's inventory management, after which different areas of inventory management were examined. The areas covered included inventory types, motives, objectives, demand forecasting, and various inventory management tools. In addition, different inventory replenishment models were studied. The theoretical part also covered three different types of information systems: an enterprise resource planning (ERP) system, an e-commerce system, and a custom-built system. In the research plan, these three systems were defined as the alternatives from which the case company's inventory management system would be chosen. Based on the theory and the current-state analysis, a framework was created presenting the data and functionality requirements of an inventory management system. These requirements were prioritized into four classes according to their criticality. The system alternatives were evaluated against the criteria of the framework in terms of how easily each requirement could be implemented in each alternative. The results were calculated on the basis of these evaluations, and the analysis of the results showed that the ERP system would be the best fit for the case company's inventory management system.
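To make the evaluation step concrete, the sketch below computes a criticality-weighted feasibility score for the three system alternatives; the requirement names, weights, and scores are invented for illustration and are not the thesis's actual framework.

```python
# Criticality class per requirement (4 = must-have ... 1 = nice-to-have), hypothetical.
requirements = {
    "stock balance per item": 4,
    "demand forecasting": 3,
    "reorder point alerts": 3,
    "web order entry": 2,
    "custom reporting": 1,
}

# Feasibility scores (0-3: how easily the alternative realizes the requirement), hypothetical.
feasibility = {
    "ERP module":        {"stock balance per item": 3, "demand forecasting": 3,
                          "reorder point alerts": 3, "web order entry": 2, "custom reporting": 2},
    "e-commerce system": {"stock balance per item": 2, "demand forecasting": 1,
                          "reorder point alerts": 2, "web order entry": 3, "custom reporting": 1},
    "custom-built":      {"stock balance per item": 3, "demand forecasting": 2,
                          "reorder point alerts": 3, "web order entry": 2, "custom reporting": 3},
}

# Weighted total per alternative; the highest score indicates the best fit.
totals = {alt: sum(requirements[req] * score for req, score in scores.items())
          for alt, scores in feasibility.items()}
print(max(totals, key=totals.get), totals)
```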
Abstract:
Among unidentified gamma-ray sources in the galactic plane, there are some that present significant variability and have been proposed to be high-mass microquasars. To deepen the study of the possible association between variable low galactic latitude gamma-ray sources and microquasars, we have applied a leptonic jet model based on the microquasar scenario that reproduces the gamma-ray spectrum of three unidentified gamma-ray sources, 3EG J1735-1500, 3EG J1828+0142 and GRO J1411-64, and is consistent with the observational constraints at lower energies. We conclude that if these sources were generated by microquasars, the particle acceleration processes could not be as efficient as in other objects of this type that present harder gamma-ray spectra. Moreover, the dominant mechanism of high-energy emission should be synchrotron self-Compton (SSC) scattering, and the radio jets may only be observed at low frequencies. For each particular case, further predictions of jet physical conditions and variability generation mechanisms have been made in the context of the model. Although there might be other candidates able to explain the emission coming from these sources, microquasars cannot be excluded as counterparts. Observations performed by the next generation of gamma-ray instruments, like GLAST, are required to test the proposed model.
Abstract:
Anthropomorphic model observers are mathematical algorithms which are applied to images with the ultimate goal of predicting human signal detection and classification accuracy across varieties of backgrounds, image acquisitions and display conditions. A limitation of current channelized model observers is their inability to handle irregularly-shaped signals, which are common in clinical images, without a high number of directional channels. Here, we derive a new linear model observer based on convolution channels which we refer to as the "Filtered Channel observer" (FCO), as an extension of the channelized Hotelling observer (CHO) and the nonprewhitening with an eye filter (NPWE) observer. In analogy to the CHO, this linear model observer can take the form of a single template with an external noise term. To compare with human observers, we tested signals with irregular and asymmetrical shapes, spanning the sizes of lesions down to those of microcalcifications, in 4-AFC breast tomosynthesis detection tasks with three different contrasts for each case. Whereas humans uniformly outperformed conventional CHOs, the FCO outperformed humans for every signal with only one exception. Additive internal noise in the models allowed us to degrade model performance and match human performance. We could not match all the human performances with a model with a single internal noise component for all signal shape, size and contrast conditions. This suggests either that the internal noise might vary across signals or that the model cannot entirely capture the human detection strategy. However, the FCO model offers an efficient way to apprehend human observer performance for a non-symmetric signal.
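The sketch below illustrates the channelized Hotelling observer that the FCO extends: channel outputs are computed for signal-present and signal-absent images, and a Hotelling template in channel space yields a detectability index. The difference-of-Gaussians channels, white-noise backgrounds, and symmetric Gaussian signal are simplifying assumptions, not the tomosynthesis backgrounds or irregular signals used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32                                         # image side, pixels
yy, xx = np.mgrid[:n, :n] - n // 2
r2 = xx ** 2 + yy ** 2

def gaussian(sigma):
    g = np.exp(-r2 / (2 * sigma ** 2))
    return g / g.sum()

# Difference-of-Gaussians channels: a simple stand-in for the study's channel sets.
channels = np.stack([(gaussian(s) - gaussian(2 * s)).ravel() for s in (1, 2, 4, 8)], axis=1)

signal = 1.5 * np.exp(-r2 / (2 * 2.5 ** 2)).ravel()        # symmetric toy signal profile

def images(with_signal, m=400):
    g = rng.normal(0.0, 1.0, size=(m, n * n))              # white-noise backgrounds
    return g + signal if with_signal else g

v1 = images(True) @ channels                               # channel outputs, signal present
v0 = images(False) @ channels                              # channel outputs, signal absent
S = 0.5 * (np.cov(v1, rowvar=False) + np.cov(v0, rowvar=False))
w = np.linalg.solve(S, v1.mean(axis=0) - v0.mean(axis=0))  # Hotelling template in channel space
t1, t0 = v1 @ w, v0 @ w                                    # decision variables
d_prime = (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t1.var() + t0.var()))
print(f"CHO detectability d' = {d_prime:.2f}")
```

Additive internal noise of the kind mentioned in the abstract would be injected into the decision variables t1 and t0 to degrade the model toward human performance.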
Abstract:
Background: The public health burden of coronary artery disease (CAD) is important. Perfusion cardiac magnetic resonance (CMR) is generally accepted to detect and monitor CAD. Few studies have so far addressed its costs and cost-effectiveness. Objectives: To compare, in a large CMR registry, the costs of a CMR-guided strategy vs two hypothetical invasive strategies for the diagnosis and treatment of patients with suspected CAD. Methods: In 3,647 patients with suspected CAD included prospectively in the EuroCMR Registry (59 centers; 18 countries), costs were calculated for diagnostic examinations, revascularizations, and complication management over a 1-year follow-up. Patients with ischemia-positive CMR underwent invasive X-ray coronary angiography (CXA) and revascularization at the discretion of the treating physician (the CMR+CXA strategy). Ischemia was found in 20.9% of patients, and 17.4% of them were revascularized. In patients with ischemia-negative CMR, cardiac death and non-fatal myocardial infarction occurred in 0.38%/y. In a hypothetical invasive arm, the costs were calculated for an initial CXA followed by FFR testing in vessels with ≥50% diameter stenoses (the CXA+FFR strategy). To model this hypothetical arm, the same proportion of ischemic patients and the same outcomes were assumed as for the CMR+CXA strategy. The coronary stenosis-FFR relationship reported in the literature was used to derive the proportion of patients with ≥50% diameter stenoses (Psten) in the study cohort. The costs of a CXA-only strategy were also calculated. Calculations were performed from a third-payer perspective for the German, UK, Swiss, and US healthcare systems.
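A back-of-the-envelope sketch of how such per-strategy costs combine unit costs with event proportions is shown below; every figure (unit costs, the stenosis proportion Psten, and the way the revascularization rate is applied) is a placeholder assumption, not a number from the EuroCMR Registry analysis.

```python
# Hypothetical unit costs (EUR) and event proportions for a cohort of 1000 patients.
costs = {"CMR": 600.0, "CXA": 1500.0, "FFR": 700.0, "revasc": 6000.0, "complication": 12000.0}
p_ischemia, p_revasc, p_sten, p_event = 0.209, 0.174, 0.40, 0.0038

def cmr_cxa(n=1000):
    # All patients: CMR; ischemia-positive patients: CXA; a fraction of those revascularized.
    return n * (costs["CMR"]
                + p_ischemia * (costs["CXA"] + p_revasc * costs["revasc"])
                + p_event * costs["complication"])

def cxa_ffr(n=1000):
    # All patients: CXA; patients with >=50% stenoses: FFR; same downstream outcomes assumed.
    return n * (costs["CXA"] + p_sten * costs["FFR"]
                + p_ischemia * p_revasc * costs["revasc"]
                + p_event * costs["complication"])

print(f"CMR+CXA: {cmr_cxa():,.0f} EUR   CXA+FFR: {cxa_ffr():,.0f} EUR per 1000 patients")
```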
Abstract:
In this thesis, a model for managing product data in a product transfer project was created for ABB Machines. This model was then applied to an ongoing product transfer project during its planning phase. Detailed information about the demands and challenges in product transfer projects was acquired by analyzing previous product transfer projects in the participating organizations. This analysis and the ABB Gate Model were then used as the basis for creating the model for managing product data in a product transfer project. The created model shows the main tasks during each phase of the project, their sub-tasks, and their interrelations at a general level. Furthermore, the model emphasizes the need for a detailed analysis of the situation during the project planning phase. The created model was applied to two main areas of the ongoing project: manufacturing instructions and production item data. The results showed that the greatest challenge for the product transfer project in these areas is the current state of the product data. Based on the findings, process and resource proposals were given for both the ongoing product transfer project and BU Machines. For manufacturing instructions, detailed process instructions need to be created in the receiving organization's own language for each department, so that the manufacturing instructions can be used as training material during the training in the sending organization. For production item data, the English version of the bill of materials needs to be completed so that it is entirely in English. In addition, it must be ensured that the bill of materials is updated and these changes are implemented before the training in the sending organization begins.