960 results for Misspecification, Sign restrictions, Shock identification, Model validation.
Abstract:
Members of the Sandwich Generation (SG) play a pivotal role in society: while holding a job, they care for their children or grandchildren and provide help to their parents or parents-in-law (P/PIL) made frail by aging. The coexisting workloads generated by these activities represent a potential, and possibly growing, risk to their health. Knowledge about the SG and its perceived health is, however, insufficient for occupational health nurses to develop evidence-based preventive interventions. Most existing research has treated the coexistence of these loads as pathogenic a priori. Most studies have examined the association with health of only one or two of these three activities, and very few have used a nursing theoretical framework. This thesis aimed to develop knowledge about SG members and their perceived health. To this end, we adopted one of the existing strategies described by Meleis (2012) for developing theories in nursing science, and thus nursing knowledge for intervention: the "Theory to Research to Theory" strategy. First, a salutogenic nursing frame of reference was constructed. It was based on the Neuman Systems Model, on concepts drawn from Siegrist's Effort-Reward Imbalance theory, and on an integrative literature review. It links SG loads to perceived health and suggests the existence of health-protecting factors. Second, an exploratory descriptive correlational research design was set up to confront the framework's two relational propositions with the empirical world. Data were collected by means of an electronic questionnaire completed by 826 employees of a public administration (aged 45-65 years). Upon examination, 23.5% of the sample belonged to the SG. The probability of belonging to the SG increased with the advancing age of the P/PIL, with co-residence, and with the presence of a child in the household; sex, however, did not influence this probability. The analyses revealed no relationship between total load and women's physical or mental health. There was, however, a negative relationship between this load and men's physical health, and a negative relationship that approached but did not reach statistical significance between this load and their mental health. These last two relationships were mainly driven by the domestic and family workload. Five theoretically identified factors did protect SG members' health from their coexisting loads: the absence of overcommitment at work and high decision latitude in helping the P/PIL protected women's mental health; high decision latitude in domestic and family activity protected men's mental health; the absence of overcommitment in helping the P/PIL and good-quality relationships at work protected men's physical health. Building on these health-protecting factors, this thesis proposed avenues for developing gender-transformative primary prevention interventions in occupational health, that is, interventions concerned with favourably changing gender inequalities. These avenues concern not only SG members and the P/PIL, but also employers.
Third, since the two relational propositions withstood the confrontation with the empirical world rather well, this thesis offers suggestions for continuing to develop its theoretical framework and moving toward the creation of a middle-range theory in nursing science.
Abstract:
Background: Ethical conflicts are arising as a result of the growing complexity of clinical care, coupled with technological advances. Most studies that have developed instruments for measuring ethical conflict base their measures on the variables "frequency" and "degree of conflict". In our view, however, these variables are insufficient for explaining the root of ethical conflicts. Consequently, the present study formulates a conceptual model that also includes the variable "exposure to conflict", as well as considering six "types of ethical conflict". An instrument was then designed to measure the ethical conflicts experienced by nurses who work with critical care patients. The paper describes the development process and validation of this instrument, the Ethical Conflict in Nursing Questionnaire Critical Care Version (ECNQ-CCV). Methods: The sample comprised 205 nursing professionals from the critical care units of two hospitals in Barcelona (Spain). The ECNQ-CCV presents 19 nursing scenarios with the potential to produce ethical conflict in the critical care setting. Exposure to ethical conflict was assessed by means of the Index of Exposure to Ethical Conflict (IEEC), a specific index developed to provide a reference value for each respondent by combining the intensity and frequency of occurrence of each scenario featured in the ECNQ-CCV. Following content validation, construct validity was assessed by means of Exploratory Factor Analysis (EFA), while Cronbach's alpha was used to evaluate the instrument's reliability. All analyses were performed using the statistical software PASW v19. Results: Cronbach's alpha for the ECNQ-CCV as a whole was 0.882, which is higher than the values reported for certain other related instruments. The EFA suggested a unidimensional structure, with one component accounting for 33.41% of the explained variance. Conclusions: The ECNQ-CCV is shown to be a valid and reliable instrument for use in critical care units. Its structure is such that the four variables on which our model of ethical conflict is based may be studied separately or in combination. The critical care nurses in this sample present moderate levels of exposure to ethical conflict. This study represents the first evaluation of the ECNQ-CCV.
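For readers who want to see how the reliability and dimensionality figures reported above are obtained, a minimal sketch follows. It is not the authors' analysis (which used EFA in PASW v19): it assumes a hypothetical respondent-by-item matrix of ECNQ-CCV-style scores, computes Cronbach's alpha from its definition, and uses the variance share of the first eigenvalue of the item correlation matrix as a rough stand-in for the unidimensionality check.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Hypothetical data: 205 respondents x 19 scenarios, each scored as an
# intensity-times-frequency product (an IEEC-style exposure value).
# (Independent random items, so alpha will be low; real scale items correlate.)
rng = np.random.default_rng(0)
scores = rng.integers(0, 25, size=(205, 19)).astype(float)

alpha = cronbach_alpha(scores)

# Rough unidimensionality check: variance share of the first eigenvalue
# of the item correlation matrix (the study itself used EFA).
eigvals = np.sort(np.linalg.eigvalsh(np.corrcoef(scores, rowvar=False)))[::-1]
first_share = eigvals[0] / eigvals.sum()

print(f"alpha = {alpha:.3f}, first component share = {first_share:.1%}")
```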
Abstract:
Matrix metalloproteinases (MMPs) are major executors of extracellular matrix remodeling and, consequently, play key roles in the response of cells to their microenvironment. The experimentally accessible stem cell population and the robust regenerative capabilities of planarians offer an ideal model to study how modulation of the proteolytic system in the extracellular environment affects cell behavior in vivo. Genome-wide identification of Schmidtea mediterranea MMPs reveals that planarians possess four mmp-like genes. Two of them (mmp1 and mmp2) are strongly expressed in a subset of secretory cells and encode putative matrilysins. The other genes (mt-mmpA and mt-mmpB) are widely expressed in postmitotic cells and appear structurally related to membrane-type MMPs. These genes are conserved in the planarian Dugesia japonica. Here we explore the role of the planarian mmp genes by RNA interference (RNAi) during tissue homeostasis and regeneration. Our analyses identify essential functions for two of them. Following inhibition of mmp1, planarians display dramatic disruption of tissue architecture and a significant decrease in cell death. These results suggest that mmp1 controls tissue turnover, modulating survival of postmitotic cells. Unexpectedly, the ability to regenerate is unaffected by mmp1(RNAi). Silencing of mt-mmpA alters tissue integrity and delays blastema growth, without affecting proliferation of stem cells. Our data support the possibility that the activity of this protease modulates cell migration and regulates anoikis, with a consequent pivotal role in tissue homeostasis and regeneration. Our data provide evidence of the involvement of specific MMPs in tissue homeostasis and regeneration and demonstrate that the behavior of planarian stem cells is critically dependent on the microenvironment surrounding these cells. Studying MMP function in the planarian model provides evidence on how individual proteases work in vivo in adult tissues. These results have high potential to generate significant information for the development of regenerative and anticancer therapies.
Abstract:
As the development of integrated circuit technology continues to follow Moore's law, the complexity of circuits increases exponentially. Traditional hardware description languages such as VHDL and Verilog are no longer powerful enough to cope with this level of complexity and do not provide facilities for hardware/software codesign. Languages such as SystemC are intended to solve these problems by combining the expressive power of high-level programming languages with the hardware-oriented facilities of hardware description languages. To fully replace older languages in the design flow of digital systems, SystemC should also be synthesizable. The devices required by modern high-speed networks often share the same tight constraints on, e.g., size, power consumption and price with embedded systems, but they also have very demanding real-time and quality-of-service requirements that are difficult to satisfy with general-purpose processors. Dedicated hardware blocks of an application-specific instruction set processor are one way to combine fast processing speed, energy efficiency, flexibility and relatively low time-to-market. Common features can be identified in the network processing domain, making it possible to develop specialized but configurable processor architectures. One such architecture is TACO, which is based on the transport triggered architecture. The architecture offers a high degree of parallelism and modularity and greatly simplified instruction decoding. For this M.Sc. (Tech.) thesis, a simulation environment for the TACO architecture was developed with SystemC 2.2, using an old version written in SystemC 1.0 as a starting point. The environment enables rapid design space exploration by providing facilities for hardware/software codesign and simulation and an extendable library of automatically configured reusable hardware blocks. Other topics covered are the differences between SystemC 1.0 and 2.2 from the viewpoint of hardware modeling, and compilation of a SystemC model into synthesizable VHDL with the Celoxica Agility SystemC Compiler. A simulation model of a processor for TCP/IP packet validation was designed and tested as a test case for the environment.
Abstract:
BACKGROUND: The purpose of this study was to confirm the prognostic value of pancreatic stone protein (PSP) in patients with severe infections requiring ICU management and to develop and validate a model to enhance mortality prediction by combining severity scores with biomarkers. METHODS: We prospectively enrolled patients with severe sepsis or septic shock in mixed tertiary ICUs in Switzerland (derivation cohort) and Brazil (validation cohort). Severity scores (APACHE [Acute Physiology and Chronic Health Evaluation] II or Simplified Acute Physiology Score [SAPS] II) were combined with biomarkers obtained at the time of diagnosis of sepsis, including C-reactive protein, procalcitonin (PCT), and PSP. Logistic regression models with the lowest prediction errors were selected to predict in-hospital mortality. RESULTS: Mortality rates of patients with septic shock enrolled in the derivation cohort (103 out of 158) and the validation cohort (53 out of 91) were 37% and 57%, respectively. APACHE II and PSP were significantly higher in dying patients. In the derivation cohort, the models combining either APACHE II, PCT, and PSP (area under the receiver operating characteristic curve [AUC], 0.721; 95% CI, 0.632-0.812) or SAPS II, PCT, and PSP (AUC, 0.710; 95% CI, 0.617-0.802) performed better than each individual biomarker (AUC PCT, 0.534; 95% CI, 0.433-0.636; AUC PSP, 0.665; 95% CI, 0.572-0.758) or severity score (AUC APACHE II, 0.638; 95% CI, 0.543-0.733; AUC SAPS II, 0.598; 95% CI, 0.499-0.698). These models were externally confirmed in the independent validation cohort. CONCLUSIONS: We confirmed the prognostic value of PSP in patients with severe sepsis and septic shock requiring ICU management. A model combining severity scores with PCT and PSP improves mortality prediction in these patients.
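The model-building step described above (a severity score combined with biomarkers in a logistic regression, judged by the AUC) can be sketched in a few lines. The snippet below is only an illustration under assumed data: the cohort, the variable names (apache2, pct, psp) and the coefficients used to simulate outcomes are all hypothetical, and the biomarkers are log-transformed as one plausible choice rather than the authors' specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical derivation cohort: severity score plus biomarkers measured
# at sepsis diagnosis, and in-hospital death as the binary outcome.
rng = np.random.default_rng(1)
n = 158
apache2 = rng.normal(22, 7, n)           # APACHE II score
pct = rng.lognormal(1.0, 1.0, n)         # procalcitonin
psp = rng.lognormal(3.0, 0.8, n)         # pancreatic stone protein
true_logit = -4 + 0.10 * apache2 + 0.015 * psp
death = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

# Combined model: APACHE II + log(PCT) + log(PSP).
X = np.column_stack([apache2, np.log(pct), np.log(psp)])
model = LogisticRegression(max_iter=1000).fit(X, death)

# Discrimination of the combined model (the study reports AUCs with 95% CIs).
auc = roc_auc_score(death, model.predict_proba(X)[:, 1])
print(f"combined model AUC: {auc:.3f}")
```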
Abstract:
RATIONALE: Patients with acute symptomatic pulmonary embolism (PE) deemed to be at low risk for early complications might be candidates for partial or complete outpatient treatment. OBJECTIVES: To develop and validate a clinical prediction rule that accurately identifies patients with PE at low risk of short-term complications, and to compare its prognostic ability with that of two previously validated models (i.e., the Pulmonary Embolism Severity Index [PESI] and the Simplified PESI [sPESI]). METHODS: Multivariable logistic regression of a large international cohort of patients with PE prospectively enrolled in the RIETE (Registro Informatizado de la Enfermedad TromboEmbólica) registry. MEASUREMENTS AND MAIN RESULTS: All-cause mortality, recurrent PE, and major bleeding up to 10 days after PE diagnosis were determined. Of 18,707 eligible patients with acute symptomatic PE, 46 (0.25%) developed recurrent PE, 203 (1.09%) bled, and 471 (2.51%) died. Predictors included in the final model were chronic heart failure, recent immobilization, recent major bleeding, cancer, hypotension, tachycardia, hypoxemia, renal insufficiency, and abnormal platelet count. The area under the receiver operating characteristic curve was 0.77 (95% confidence interval [CI], 0.75-0.78) for the RIETE score, 0.72 (95% CI, 0.70-0.73) for PESI (P < 0.05), and 0.71 (95% CI, 0.69-0.73) for sPESI (P < 0.05). The RIETE score also outperformed PESI in terms of net reclassification improvement (P < 0.001) and integrated discrimination improvement (P < 0.001), and likewise outperformed sPESI (net reclassification improvement, P < 0.001; integrated discrimination improvement, P < 0.001). CONCLUSIONS: We built a new score, based on widely available variables, that can be used to identify patients with PE at low risk of short-term complications, assisting in triage and potentially shortening the duration of hospital stay.
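Net reclassification improvement, used above to compare the RIETE score with PESI and sPESI, is easy to misread from prose alone, so a small worked sketch may help. Everything in it is hypothetical: the risk categories, the simulated outcomes, and the two sets of predicted risks merely stand in for the published models.

```python
import numpy as np

def categorical_nri(y, risk_old, risk_new, cutoffs=(0.01, 0.05)):
    """Categorical net reclassification improvement between two risk models.
    `cutoffs` define shared low/intermediate/high risk bands (hypothetical)."""
    cat_old = np.digitize(risk_old, cutoffs)
    cat_new = np.digitize(risk_new, cutoffs)
    up, down = cat_new > cat_old, cat_new < cat_old

    events, nonevents = y == 1, y == 0
    nri_events = up[events].mean() - down[events].mean()
    nri_nonevents = down[nonevents].mean() - up[nonevents].mean()
    return nri_events + nri_nonevents

# Hypothetical cohort: a rare 10-day complication and predicted risks from
# an sPESI-like baseline model versus a RIETE-like candidate model.
rng = np.random.default_rng(2)
y = rng.binomial(1, 0.03, 20000)
risk_old = np.clip(rng.beta(1, 30, 20000) + 0.04 * y, 0, 1)
risk_new = np.clip(risk_old + 0.02 * y - 0.005 * (1 - y), 0, 1)

print(f"NRI (new vs old model): {categorical_nri(y, risk_old, risk_new):+.3f}")
```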
Abstract:
The main purpose of this study was to validate both the French and German versions of a Perceived Neighborhood Social Cohesion Questionnaire. The sample comprised 5065 Swiss men from the "Cohort Study on Substance Use Risk Factors." Multigroup confirmatory factor analysis showed that a three-factor model fits the data well, which substantiates the generalizability of the Perceived Neighborhood Social Cohesion Questionnaire factor structure regardless of the language. The Perceived Neighborhood Social Cohesion Questionnaire demonstrated excellent homogeneity (α = .95) and split-half reliability (r = .96). The Perceived Neighborhood Social Cohesion Questionnaire was sensitive to community size and participants' financial situation, confirming that it also measures real social conditions. Finally, weak but frequent correlations between the Perceived Neighborhood Social Cohesion Questionnaire and alcohol, cigarette, and cannabis dependence were measured.
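The split-half reliability quoted above has a simple computational recipe: split the items into two halves, correlate the half scores, and apply the Spearman-Brown correction. The sketch below is illustrative only; the number of items and the simulated responses are assumptions, and the multigroup confirmatory factor analysis itself would require a dedicated SEM package.

```python
import numpy as np

def split_half_reliability(items: np.ndarray, seed: int = 0) -> float:
    """Split-half reliability (Spearman-Brown corrected) for an
    (n_respondents, n_items) matrix of questionnaire scores."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(items.shape[1])
    half_a = items[:, order[::2]].sum(axis=1)
    half_b = items[:, order[1::2]].sum(axis=1)
    r = np.corrcoef(half_a, half_b)[0, 1]
    return 2 * r / (1 + r)          # Spearman-Brown prophecy formula

# Hypothetical responses: 5065 respondents, 12 items loading on a common
# latent cohesion factor (item count and loadings are assumptions).
rng = np.random.default_rng(3)
latent = rng.normal(size=(5065, 1))
scores = latent + rng.normal(scale=0.4, size=(5065, 12))

print(f"split-half reliability ≈ {split_half_reliability(scores):.2f}")
```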
Abstract:
We have designed and validated a novel generic platform for production of tetravalent IgG1-like chimeric bispecific Abs. The VH-CH1-hinge domains of mAb2 are fused through a peptidic linker to the N terminus of the mAb1 H chain, and paired mutations at the CH1-CL interface of mAb1 are introduced that force the correct pairing of the two different free L chains. Two different sets of these CH1-CL interface mutations, called CR3 and MUT4, were designed and tested, and prototypic bispecific Abs directed against CD5 and HLA-DR were produced (CD5xDR). Two different hinge sequences between mAb1 and mAb2 were also tested in the CD5xDR-CR3 or -MUT4 background, leading to bispecific Abs (BsAbs) with a more rigid or flexible structure. All four Abs produced bound with good specificity and affinity to CD5 and HLA-DR present either on the same target or on different cells. Indeed, the BsAbs were able to efficiently redirect killing of HLA-DR(+) leukemic cells by human CD5(+) cytokine-induced killer T cells. Finally, all BsAbs had a functional Fc, as shown by their capacity to activate human complement and NK cells and to mediate phagocytosis. CD5xDR-CR3 was chosen as the best format because it had overall the highest functional activity and was very stable in vitro in both neutral buffer and serum. In vivo, CD5xDR-CR3 was shown to have significant therapeutic activity in a xenograft model of human leukemia.
Abstract:
Background: Dermatophytes are specialized parasitic filamentous fungi that degrade keratinized tissues. They are responsible for most fungal infections of the skin, the scalp and hair, and the nails. The choice of treatment for dermatophytosis depends on the symptoms and on the dermatophyte involved, among some fifteen possible species. Dermatophytes are generally identified on the basis of the macroscopic and microscopic characteristics of cultures. Identification is sometimes difficult or remains uncertain, because there can be variation from one isolate to another within the same species. Species are, however, easily identified on the basis of DNA sequences; in practice, sufficiently polymorphic ribosomal DNA sequences are most often used to discriminate dermatophyte species. Specialized and sophisticated methods such as DNA sequencing and mass spectrometry are increasingly proposed in the literature for identifying dermatophytes. However, these methods cannot be used directly by a physician in a medical practice. Simpler methods based on the observation of phenotypic characteristics of fungi in culture should therefore not be abandoned. Objective: To establish a dichotomous identification key based on macroscopic and microscopic characteristics that allows reliable identification of dermatophytes from culture. Species identification keys will be developed and tested for validation in parallel with identification by molecular biology methods. The aim is to create a simple tool that can be used in the laboratory by physicians or biologists not specialized in mycology to identify dermatophytes without resorting to sophisticated technology. Methods: Inventory of the species isolated from 2001 to 2012 at the CHUV dermatology laboratory. Inventory of the phenotypic characteristics that characterize each species. Creation of a dichotomous system based on phenotypic characteristics to separate and identify the species (species identification key). Expected results: The expected results are defined by the objectives. The tool must be accessible to inexperienced users, who will then be able to identify dermatophytes. Added value: Dermatophytoses are frequently diagnosed. This tool is intended for all practising dermatologists and for laboratory staff who are not necessarily specialists in the field.
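Since a dichotomous key is, structurally, a binary decision tree over observable characters, a small sketch may make the intended tool concrete. The characters and species names below are placeholders, not the key developed in this project.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Node:
    question: str                   # phenotypic character to observe
    if_yes: Union["Node", str]      # next question, or a species name (leaf)
    if_no: Union["Node", str]

# Placeholder key: two levels of yes/no questions leading to four "species".
key = Node(
    "Colony reverse pigmented?",
    if_yes=Node("Microconidia abundant?", "Species A", "Species B"),
    if_no=Node("Macroconidia spindle-shaped?", "Species C", "Species D"),
)

def identify(node: Union[Node, str], answers: dict) -> str:
    """Walk the key using observed answers (question -> True/False)."""
    while isinstance(node, Node):
        node = node.if_yes if answers[node.question] else node.if_no
    return node

observed = {
    "Colony reverse pigmented?": False,
    "Macroconidia spindle-shaped?": True,
}
print(identify(key, observed))      # -> Species C
```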
Abstract:
The identification of biomarkers of vascular cognitive impairment is urgent for its early diagnosis. The aim of this study was to detect and monitor changes in brain structure and connectivity, and to correlate them with the decline in executive function. We examined the feasibility of early diagnostic magnetic resonance imaging (MRI) to predict cognitive impairment before onset in an animal model of chronic hypertension: Spontaneously Hypertensive Rats. Cognitive performance was tested in an operant conditioning paradigm that evaluated learning, memory, and behavioral flexibility skills. Behavioral tests were coupled with longitudinal diffusion weighted imaging acquired with 126 diffusion gradient directions and 0.3 mm(3) isometric resolution at 10, 14, 18, 22, 26, and 40 weeks after birth. Diffusion weighted imaging was analyzed in two different ways: by regional characterization of diffusion tensor imaging (DTI) indices, and by assessing changes in structural brain network organization based on Q-Ball tractography. Already at the earliest evaluated time points, DTI scalar maps revealed significant differences in many regions, suggesting loss of integrity in the white and gray matter of spontaneously hypertensive rats when compared to normotensive control rats. In addition, graph theory analysis of the structural brain network demonstrated a significant decrease in hierarchical modularity and in global and local efficiency, with predictive value as shown by a regional three-fold cross-validation study. Moreover, these decreases were significantly correlated with the behavioral performance deficits observed at subsequent time points, suggesting that diffusion weighted imaging and connectivity studies can unravel neuroimaging alterations even before overt signs of cognitive impairment become apparent.
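The graph-theory step described above can be reproduced with standard tools once a connectome matrix is available. The sketch below is illustrative: a random graph stands in for the tractography-derived network, and greedy modularity maximization stands in for whichever hierarchical community method the study actually used.

```python
import networkx as nx
from networkx.algorithms import community

# Stand-in structural brain network: in practice, nodes would be atlas
# regions and weighted edges streamline counts from Q-Ball tractography.
G = nx.erdos_renyi_graph(n=90, p=0.1, seed=42)

global_eff = nx.global_efficiency(G)   # network-wide integration
local_eff = nx.local_efficiency(G)     # neighbourhood-level segregation

# Modularity of a community partition (greedy maximization as a stand-in).
parts = community.greedy_modularity_communities(G)
Q = community.modularity(G, parts)

print(f"global efficiency {global_eff:.3f}, "
      f"local efficiency {local_eff:.3f}, modularity {Q:.3f}")
```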
Abstract:
Validation and verification operations encounter various challenges in the product development process. Requirements to increase the pace of the development cycle place new demands on the component development process. Verification and validation usually represent the largest activities, consuming up to 40-50% of R&D resources. This research studies validation and verification as part of the case company's component development process. The target is to define a framework that can be used to improve the evaluation and development of validation and verification capability in display module development projects. The definition and background of validation and verification are studied, along with theories of project management, systems, organisational learning and causality. The framework and the key findings of the research are presented, and a feedback system based on the framework is defined and implemented at the case company. The research is divided into a theory part, conducted as a literature review, and an empirical part, conducted as a case study using the constructive and design research methods. A framework for capability evaluation and development was defined and developed as the result of this research. A key finding was that a double-loop learning approach combined with the validation and verification V+ model enables the definition of a feedback reporting solution; as an additional result, some minor changes to the validation and verification process were proposed. A few concerns are expressed about the validity and reliability of the results, the most important being the selected research method and the selected model itself: the final state can be normative, the researcher may set the study results before the actual study, and in the initial state the researcher may describe expectations for the study. Finally, the reliability and validity of this work are discussed.
Abstract:
Synchronous motors are used mainly in large drives, for example in ship propulsion systems and in the rolling mills of steel factories, because of their high efficiency, high overload capacity and good performance in the field weakening range. This, however, requires an extremely good torque control system: a fast torque response and high torque accuracy are basic requirements for such a drive. For high-power, high-dynamic-performance drives the well-known principle of field oriented vector control has hitherto been used exclusively, but it is no longer the only way to implement such a drive; a new control method, Direct Torque Control (DTC), has also emerged. The performance of a high-quality torque control such as DTC in dynamically demanding industrial applications is mainly based on accurate estimates of the various flux linkage space vectors. Industrial motor control systems are nowadays real-time applications with restricted calculation capacity, yet the control system requires a simple, quickly computable and reasonably accurate motor model. In this work a method to handle these problems in a Direct Torque Controlled (DTC) salient pole synchronous motor drive is proposed. A motor model is presented that combines the induction-law-based "voltage model" with a "current model" based on the motor inductance parameters. The voltage model operates as the main model and is calculated at a very fast sampling rate (for example 40 kHz). The stator flux linkage calculated by integrating the stator voltages is corrected using the stator flux linkage computed from the current model. The current model acts as a supervisor that merely prevents the stator flux linkage from drifting erroneously over longer time intervals. At very low speeds the role of the current model is emphasised, but the voltage model nevertheless always remains the main model. At higher speeds the function of the current model correction is to act as a stabiliser of the control system. The current model contains a set of inductance parameters which must be known. The validity of the current model in steady state is not self-evident; it depends on the accuracy of the saturated values of the inductances. Parameter measurement of the motor model, in which the supply inverter is used as a measurement signal generator, is presented. This so-called identification run can be performed prior to delivery or during drive commissioning. A derivation method for the inductance models used to represent the saturation effects is proposed. The performance of the electrically excited synchronous motor supplied by the DTC inverter is proven with experimental results. It is shown that good static accuracy of the DTC torque controller can be obtained for an electrically excited synchronous motor. The dynamic response is fast and a new operating point is reached without oscillation. The operation is stable throughout the speed range. Modelling of the magnetising inductance saturation is essential, and cross-saturation has to be considered as well; its effect is very significant. A DTC inverter can be used as measuring equipment, and the parameters needed for the motor model can be determined by the inverter itself. The main advantage is that the parameters are measured under similar magnetic operating conditions, so no disagreement between the parameters will exist. The inductance models generated are adequate to meet the requirements of dynamically demanding drives.
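The combined voltage/current flux model lends itself to a compact numerical illustration. The snippet below is not the thesis implementation; with invented per-unit parameters and a crude placeholder for the current-model flux, it only shows the basic idea of integrating the stator voltage at a high sampling rate while pulling the estimate toward the current-model value so that integration drift stays bounded.

```python
import numpy as np

R_S = 0.02          # stator resistance (hypothetical per-unit value)
K_CORR = 0.005      # correction gain toward the current-model estimate
DT = 1.0 / 40e3     # voltage model sampled at 40 kHz
OMEGA = 2 * np.pi * 50

def voltage_model_step(psi, u, i, psi_current_model):
    """One step of the stator flux estimate in stator (alpha-beta) coordinates:
    d(psi)/dt = u - R_s * i, nudged toward the current-model flux."""
    psi_dot = u - R_S * i
    return psi + DT * psi_dot + K_CORR * (psi_current_model - psi)

# Toy run: sinusoidal stator voltage, placeholder current and current-model flux.
psi = np.zeros(2)
for t in np.arange(0.0, 0.1, DT):
    u = np.array([np.cos(OMEGA * t), np.sin(OMEGA * t)])
    i = 0.5 * u
    psi_cm = np.array([np.sin(OMEGA * t), -np.cos(OMEGA * t)]) / OMEGA
    psi = voltage_model_step(psi, u, i, psi_cm)

print("flux estimate after 0.1 s:", np.round(psi, 4))
```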
Abstract:
The aim of this study was to define the critical areas of the strategy process and the tasks of group management in that process, and, on this basis, to develop for a group-type company a normative strategy process model for managing the critical areas. A further aim was to increase understanding of strategic thinking and strategy processes by explaining their historical development, their conceptual apparatus, and the content of those concepts. The problem was approached both through the doctrine and by interpreting the problems arising in the strategy process and analysing their cause-and-effect relationships. This theoretical-practical study was carried out partly with an action-analytical research approach, supported by a case study and comparative analysis, and partly with a decision-methodological research approach. The theoretical part of the work was conducted as a literature study; it established the conceptual foundation of the strategy process and group management and the frame of reference for the study. With regard to the results, group management was generalized to cover other decentralized corporate organizations besides groups formed on a purely legal basis. The study first examined the schools of strategic thought with their different views, as well as the development trends of strategic thinking from the 1950s to the present, and likewise how strategy processes have developed over the same period. The focus of attention was found to have shifted to the human side of strategic management, with strategic leadership being emphasized and strategic thinking broadening at the same time. The empirical part was carried out as a case study, during which the key problem areas of the strategy process were mapped and the causes behind them analysed, so that the directions and focus areas for developing the strategy process could be defined. On the basis of the theoretical and empirical parts, the critical areas of the strategy process were defined at a general level. A critical area means a subject area or issue that must be in order for the strategy processes to work. These areas relate to the strategy process itself either directly or indirectly through other management work. In connection with defining the critical areas, the development directions of the strategy process were set, drawing on the doctrine and viewed from the perspective of group management. Based on these development directions and again on the doctrine, the substantive tasks of group management in the strategy process, the tasks supporting the process, and the tasks of implementing and developing the process were defined. The tasks of group management in the strategy process do not form a sequential and hierarchical system but are a set of activities carried out as needed. The group management strategy process was defined and described in the study as a management working process for producing and implementing executable strategies that increase the value of the company (group) from the owner's point of view while also taking into account the requirements, goals and constraints of other key stakeholders. The group management strategy process is seen here as a continuous group-level examination of ends and means, in which group management registers the signals coming from the group's external and internal environment and maintains a view of the group's strategic position. Once the accumulated mass of information and insight reaches its critical threshold, it forces group management to assess earlier decisions in a new light. This validation rests on four continuously posed questions: on the basis of the knowledge accumulated from monitoring the environment, the premises and the implementation, are there visible effects on immediate actions, on action plans or critical monitoring targets, on choices of direction, or on basic beliefs? The group management strategy process advances as a continuous process in the flow of decisions and time.
Abstract:
Membrane bioreactors (MBRs) are a combination of activated sludge bioreactors and membrane filtration, enabling high quality effluent with a small footprint. However, they can be beset by fouling, which causes an increase in transmembrane pressure (TMP). Modelling and simulation of changes in TMP could be useful to describe fouling through the identification of the most relevant operating conditions. Using experimental data from an MBR pilot plant operated for 462 days, two different models were developed: a deterministic model using activated sludge model No. 2d (ASM2d) for the biological component and a resistance-in-series model for the filtration component, and a data-driven model based on multivariable regressions. Once validated, these models were used to describe membrane fouling (as changes in TMP over time) under different operating conditions. The deterministic model performed better at higher temperatures (>20°C), under constant operating conditions (DO set-point, membrane air-flow, pH and ORP), and at high mixed liquor suspended solids (>6.9 g L-1) and flux changes. At low pH (<7) or during periods with larger pH changes, the data-driven model was more accurate. Changes in the DO set-point of the aerobic reactor that affected the TMP were also better described by the data-driven model. By combining the use of both models, a better description of fouling can be achieved under different operating conditions.
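The resistance-in-series part of the deterministic model has a standard closed form, TMP = mu * J * (R_membrane + R_fouling), which the short sketch below illustrates. All parameter values are invented, and the fouling term here is a simple cake-build-up proportional to the specific filtered volume rather than the calibrated pilot-plant model.

```python
import numpy as np

MU = 1.0e-3          # permeate viscosity near 20 degC [Pa s]
R_MEMBRANE = 1.0e12  # clean-membrane resistance [1/m] (hypothetical)
ALPHA = 2.0e10       # fouling resistance per unit specific volume (hypothetical)

def tmp_series(flux_lmh: np.ndarray, dt_h: float = 1.0) -> np.ndarray:
    """TMP [kPa] over time for an hourly flux series [L m-2 h-1], using a
    resistance-in-series model with a cake-build-up fouling term."""
    flux = flux_lmh / 1000.0 / 3600.0            # -> m3 m-2 s-1
    filtered = np.cumsum(flux) * dt_h * 3600.0   # specific filtered volume [m3/m2]
    r_fouling = ALPHA * filtered
    return MU * flux * (R_MEMBRANE + r_fouling) / 1000.0

flux = np.full(462 * 24, 20.0)                   # constant 20 LMH, hourly steps
tmp = tmp_series(flux)
print(f"TMP rises from {tmp[0]:.1f} kPa to {tmp[-1]:.1f} kPa over 462 days")
```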