966 results for Two-point boundary value problems
Abstract:
Wastewater-based epidemiology consists of acquiring relevant information about the lifestyle and health status of a population through the analysis of wastewater samples collected at the influent of a wastewater treatment plant. Whilst a very young discipline, it has experienced astonishing development since its first application in 2005. The possibility of gathering community-wide information about drug use has been among its major fields of application. The wide resonance of the first results sparked the interest of scientists from various disciplines, and research has since broadened in innumerable directions. Although praised as a revolutionary approach, its added value needed to be critically assessed with regard to the existing indicators used to monitor illicit drug use. The main, and explicit, objective of this research was to evaluate the added value of wastewater-based epidemiology with regard to two particular, although interconnected, dimensions of illicit drug use. The first relates to understanding the added value of the discipline from an epidemiological, or societal, perspective; in other terms, to evaluate if and how it completes our current vision of the extent of illicit drug use at the population level, and whether it can guide the planning of future prevention measures and drug policies. The second dimension is the criminal one, with a particular focus on the networks which develop around the large demand for illicit drugs. The goal here was to assess whether wastewater-based epidemiology, combined with indicators stemming from the epidemiological dimension, could provide additional clues about the structure of drug distribution networks and the size of their market. This research also had an implicit objective: initiating the path of wastewater-based epidemiology at the Ecole des Sciences Criminelles of the University of Lausanne.
This consisted of gathering the necessary knowledge about the collection, preparation, and analysis of wastewater samples and, most importantly, of understanding how to interpret the acquired data and produce useful information. In the first phase of this research, it was determined that ammonium loads, measured directly in the wastewater stream, could be used to monitor the dynamics of the population served by the wastewater treatment plant. Furthermore, it was shown that, over the long term, population dynamics did not have a substantial impact on consumption patterns measured through wastewater analysis. Focussing on methadone, for which precise prescription data were available, it was possible to show that reliable consumption estimates could be obtained via wastewater analysis. This validated the selected sampling strategy, which was then used to monitor the consumption of heroin through the measurement of morphine. The latter, in combination with prescription and sales data, provided estimates of heroin consumption in line with other indicators. These results, combined with epidemiological data, highlighted the good correspondence between measurements and expectations and, furthermore, suggested that the dark figure of heroin users evading harm-reduction programs, who would thus not be measured by conventional indicators, is likely limited. The third part consisted of a collaborative study aiming to extensively investigate geographical differences in drug use, in which wastewater analysis was shown to be a useful complement to existing indicators. In particular for stigmatised drugs, such as cocaine and heroin, it helped decipher the complex picture derived from surveys and crime statistics. Globally, it provided relevant information to better understand the drug market, from both an epidemiological and a repressive perspective.
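Consumption estimates of this kind rest on a standard back-calculation from metabolite loads measured at the plant influent. A minimal sketch of that calculation follows; all parameter values in the example are illustrative assumptions, not figures from this research:

```python
def back_calculate(conc_ng_per_l, flow_l_per_day, excretion_fraction,
                   mw_ratio, population):
    """Estimate parent-drug consumption from a wastewater metabolite load.

    conc_ng_per_l      -- metabolite concentration in the 24-h composite sample
    flow_l_per_day     -- wastewater flow at the treatment plant influent
    excretion_fraction -- fraction of the parent drug excreted as this metabolite
    mw_ratio           -- molecular-weight ratio parent/metabolite
    population         -- population served by the plant
    Returns mg of parent drug per day per 1000 inhabitants.
    """
    load_mg_per_day = conc_ng_per_l * flow_l_per_day / 1e6   # ng -> mg
    parent_mg_per_day = load_mg_per_day * mw_ratio / excretion_fraction
    return parent_mg_per_day / population * 1000

# illustrative numbers only: 100 ng/L in 10 ML/day for 100 000 inhabitants
print(back_calculate(100, 1e7, 0.5, 1.0, 100_000))  # → 20.0 mg/day/1000 inh.
```

In practice each input carries its own uncertainty (sampling, in-sewer stability, excretion data), which is why validation against prescription data, as done here for methadone, matters.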
The fourth part focused on cannabis and on the potential of combining wastewater and survey data to overcome some of their respective limitations. Using a hierarchical inference model, it was possible to refine current estimates of cannabis prevalence in the metropolitan area of Lausanne. Wastewater results suggested that the actual prevalence is substantially higher than existing figures, thus supporting the common belief that surveys tend to underestimate cannabis use. Whilst affected by several biases, the information collected through surveys made it possible to overcome some of the limitations linked to the analysis of cannabis markers in wastewater (i.e., stability and limited excretion data). These findings highlighted the importance and utility of combining wastewater-based epidemiology with existing indicators of drug use. Similarly, the fifth part of the research centred on assessing the potential uses of wastewater-based epidemiology from a law enforcement perspective. Through three concrete examples, it was shown that results from wastewater analysis can be used to produce highly relevant intelligence, allowing drug enforcement to assess the structure and operations of drug distribution networks and, ultimately, guide decisions at the tactical and/or operational level. Finally, the potential of wastewater-based epidemiology to monitor the use of harmful, prohibited and counterfeit pharmaceuticals was illustrated through the analysis of sibutramine, and its urinary metabolite, in wastewater samples. The results of this research highlight that wastewater-based epidemiology is a useful and powerful approach with numerous scopes. Faced with the complexity of measuring a hidden phenomenon like illicit drug use, it is a major addition to the panoply of existing indicators.
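The idea of letting survey data constrain a wastewater-derived estimate can be pictured with a toy Bayesian update. This is not the hierarchical model used in the thesis, just a grid-based sketch with invented numbers: a Beta prior on prevalence (from a survey) is updated with a Gaussian likelihood for the measured city-wide drug load:

```python
import math

def posterior_prevalence(prior_alpha, prior_beta, observed_load_mg,
                         population, dose_mg_per_user, load_sd_mg,
                         n_grid=1000):
    """Grid posterior over prevalence p: Beta(prior) times a Normal
    likelihood of the measured city-wide load given p*population users."""
    grid = [(i + 0.5) / n_grid for i in range(n_grid)]
    post = []
    for p in grid:
        prior = p ** (prior_alpha - 1) * (1 - p) ** (prior_beta - 1)
        expected = p * population * dose_mg_per_user
        like = math.exp(-0.5 * ((observed_load_mg - expected) / load_sd_mg) ** 2)
        post.append(prior * like)
    norm = sum(post)
    return sum(p * w for p, w in zip(grid, post)) / norm  # posterior mean

# invented numbers: survey suggests ~5% prevalence, but the wastewater
# load (8 kg/day at 1 mg/user/day, 100 000 inhabitants) implies ~8%
mean = posterior_prevalence(5, 95, observed_load_mg=8000, population=100_000,
                            dose_mg_per_user=1.0, load_sd_mg=1500)
```

The posterior mean lands between the survey prior and the wastewater-implied value, which mirrors the qualitative finding above that wastewater data pull prevalence estimates upward.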
Abstract:
This dissertation analyses the growing pool of copyrighted works which are offered to the public under Creative Commons licensing. The study consists of an analysis of the novel licensing system, the licensors, and the changes to the "all rights reserved" paradigm of copyright law. Copyright law reserves all rights to the creator until seventy years have passed since her demise. Many claim that this endangers communal interests. Quite often creators are willing to release some rights. This, however, is very difficult to do and needs the help of specialized lawyers. The study finds that the innovative Creative Commons licensing scheme is well suited for low-value, high-volume licensing. It helps to reduce transaction costs on several levels. However, CC licensing is not a "silver bullet": privacy, moral rights, the problems of license interpretation, and license compatibility with other open licenses and collecting societies remain unsolved. The study consists of seven chapters. The first chapter introduces the research topic and research questions. The second and third chapters inspect the technical, economic and legal aspects of the Creative Commons licensing scheme. The fourth and fifth chapters examine the incentives of licensors who use open licenses and describe certain open business models. The sixth chapter studies the role of collecting societies and whether the two institutions, Creative Commons and collecting societies, can coexist. The final chapter summarizes the findings. The dissertation contributes to the existing literature in several ways. There is a wide range of prior research on open source licensing; however, there is an urgent need for an extensive study of Creative Commons licensing and its actual and potential impact on the creative ecosystem.
Abstract:
The article describes some concrete problems that were encountered when writing a two-level model of Mari morphology. Mari is an agglutinative Finno-Ugric language spoken in Russia by about 600 000 people. The work was begun in the 1980s on the basis of K. Koskenniemi's Two-Level Morphology (1983), but in the latest stage R. Beesley's and L. Karttunen's Finite State Morphology (2003) was used. Many of the problems described in the article concern the inexplicitness of the rules in Mari grammars and the lack of information about the exact distribution of some suffixes, e.g. enclitics. The Mari grammars usually give complete paradigms for a few unproblematic verb stems, whereas the difficult or unclear forms of certain verbs are only superficially discussed. Another example of phenomena that are poorly described in grammars is the way suffixes with an initial sibilant combine with stems ending in a sibilant. The help of informants and searches in electronic corpora were used to overcome such difficulties in the development of the two-level model of Mari. The variation in the order of plural markers, case suffixes and possessive suffixes is a typical feature of Mari. The morphotactic rules constructed for Mari declensional forms tend to be recursive, and their productivity must be limited by some technical device, such as filters. In the present model, certain plural markers were treated like nouns. The positional and functional versatility of the possessive suffixes can be regarded as the most challenging phenomenon in attempts to formalize Mari morphology. The Cyrillic orthography used in the model also caused problems. For instance, a Cyrillic letter may represent a sequence of two sounds, the first being part of the word stem while the other belongs to a suffix. In some cases, letters for voiced consonants are also generalized to represent voiceless consonants.
Such orthographical conventions distance a morphological model based on orthography from the actual (morpho)phonological processes in the language.
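The recursion-limiting "filter" device mentioned above can be pictured with a toy morphotactic graph. The state labels Pl/Px/Cx stand for plural, possessive and case suffix slots; the transitions are schematic illustrations of variable affix order, not a description of actual Mari grammar:

```python
def enumerate_forms(transitions, state="Stem", path=(), max_affixes=3):
    """Enumerate affix-label sequences licensed by a morphotactic graph.
    '#' marks a legal word end; max_affixes is the filter that keeps the
    mutually recursive Pl/Px slots from generating unboundedly."""
    forms = []
    if "#" in transitions.get(state, []):
        forms.append(path)
    if len(path) >= max_affixes:
        return forms
    for nxt in transitions.get(state, []):
        if nxt != "#":
            forms += enumerate_forms(transitions, nxt,
                                     path + (nxt,), max_affixes)
    return forms

# schematic slot orders for a Mari-like noun: plural, possessive and
# case markers may occur in more than one order
TRANSITIONS = {
    "Stem": ["Pl", "Px", "Cx", "#"],
    "Pl":   ["Px", "Cx", "#"],
    "Px":   ["Pl", "Cx", "#"],  # possessive before or after plural
    "Cx":   ["Px", "#"],        # possessive may also follow the case suffix
}

forms = enumerate_forms(TRANSITIONS)
```

Without the `max_affixes` cut-off the Pl/Px cycle would never terminate; in a finite-state implementation the same effect is achieved with filter transducers composed onto the lexicon.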
Abstract:
The background and inspiration for the present study is earlier research on applications of boundary identification in the metal industry. Effective boundary identification allows smaller safety margins and longer service intervals for the equipment in industrial high-temperature processes, without an increased risk of equipment failure. Ideally, a boundary identification method would be based on following some indirect variable that can be measured routinely or at low cost. One such variable for smelting furnaces is the temperature at different positions in the wall, which can be used as input to a boundary identification method for monitoring the furnace wall thickness. We give a background and motivation for the choice of the geometrically one-dimensional dynamic model for boundary identification, discussed in the later part of the work, over a multidimensional geometric description. In the industrial applications in question, the dynamics and the advantages of a simple model structure are more important than an exact geometric description. Solution methods for the so-called sideways heat equation have much in common with boundary identification. We therefore study properties of the solutions of this equation, the influence of measurement errors and what is usually called contamination by measurement noise, regularization, and more general consequences of the ill-posedness of the sideways heat equation. We study a set of three different methods for boundary identification, of which the first two were developed from a strictly mathematical and the third from a more applied point of view. The methods have different properties, with specific advantages and disadvantages. The purely mathematically based methods are characterized by good accuracy and low numerical cost, though at the price of low flexibility in the formulation of the partial differential equation describing the model. The third, more applied, method is characterized by poorer accuracy, caused by a higher degree of ill-posedness of the more flexible model.
An error estimate was also attempted for this third method, and was later observed to agree with practical computations. The study can be considered a good starting point and mathematical basis for the development of industrial applications of boundary identification, especially towards the handling of nonlinear and discontinuous material properties and sudden changes caused by wall material falling off. With the methods treated, it appears possible to achieve a robust, fast and sufficiently accurate boundary identification method of limited complexity.
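As a much-simplified illustration of the idea (steady-state linear conduction, whereas the thesis treats the dynamic, ill-posed problem), the wall thickness can be estimated from two in-wall temperature measurements by extrapolating to the melting temperature of the wall material. All numbers below are hypothetical:

```python
def wall_thickness(x1, t1, x2, t2, t_melt):
    """Extrapolate a linear steady-state temperature profile through two
    thermocouples (depths x1 < x2 from the outer surface, temperatures
    t1 < t2) to the isotherm t_melt, taken as the inner wall boundary."""
    gradient = (t2 - t1) / (x2 - x1)      # K per metre, towards the melt
    return x2 + (t_melt - t2) / gradient  # depth of the melt isotherm

# thermocouples at 0 m (100 C) and 0.1 m (300 C); wall melts at 700 C
wall_thickness(0.0, 100.0, 0.1, 300.0, 700.0)  # → 0.3 m
```

The real difficulty, which this sketch hides, is that the transient problem of propagating measured temperatures towards the hot face is ill-posed and requires regularization.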
Abstract:
This research report illustrates and examines new operation models for decreasing fixed costs and transforming them into variable costs in the field of paper industry. The report illustrates two cases – a new operation model for material logistics in maintenance and an examination of forklift truck fleet outsourcing solutions. Conventional material logistics in maintenance operation is illustrated and some problems related to conventional operation are identified. A new operation model that solves some of these problems is presented including descriptions of procurement and service contracts and sources of added value. Forklift truck fleet outsourcing solutions are examined by illustrating the responsibilities of a host company and a service provider both before and after outsourcing. The customer buys outsourcing services in order to improve its investment productivity. The mechanism of how these services affect the customer company’s investment productivity is illustrated.
Abstract:
By coupling the Boundary Element Method (BEM) and the Finite Element Method (FEM) an algorithm that combines the advantages of both numerical processes is developed. The main aim of the work concerns the time domain analysis of general three-dimensional wave propagation problems in elastic media. In addition, mathematical and numerical aspects of the related BE-, FE- and BE/FE-formulations are discussed. The coupling algorithm allows investigations of elastodynamic problems with a BE- and a FE-subdomain. In order to observe the performance of the coupling algorithm two problems are solved and their results compared to other numerical solutions.
Abstract:
A theory for the description of turbulent boundary layer flows over surfaces with a sudden change in roughness is considered. The theory resorts to the concept of displacement in origin to specify a wall-function boundary condition for a kappa-epsilon model. An approximate algebraic expression for the displacement in origin is obtained from the experimental data by using the chart method of Perry and Joubert (J.F.M., vol. 17, pp. 193-122, 1963). This expression is subsequently included in the near-wall logarithmic velocity profile, which is then adopted as a boundary condition for a kappa-epsilon modelling of the external flow. The results are compared with the lower atmospheric observations made by Bradley (Q. J. Roy. Meteo. Soc., vol. 94, pp. 361-379, 1968) as well as with velocity profiles extracted from a set of wind tunnel experiments carried out by Avelino et al. (7th ENCIT, 1998). The measurements are found to be in good agreement with the theoretical computations. The skin-friction coefficient was calculated according to the chart method of Perry and Joubert and to a balance of the integral momentum equation. In particular, the growth of the internal boundary layer thickness obtained from the numerical simulation is compared with predictions of the experimental data calculated by two methods, the "knee" point method and the "merge" point method.
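The near-wall logarithmic profile with a displacement in origin typically takes the following form (a common textbook formulation, given here for orientation: u_* is the friction velocity, kappa the von Kármán constant, y_0 the roughness length and epsilon the displacement in origin):

```latex
\frac{u}{u_*} \;=\; \frac{1}{\kappa}\,\ln\!\left(\frac{y+\varepsilon}{y_0}\right)
```

Fitting measured profiles to this form is what yields the algebraic expression for the displacement in origin used as the wall boundary condition.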
Abstract:
In the present work we describe a method which allows the incorporation of surface tension into the GENSMAC2D code. This is achieved on two scales. First, on the scale of a cell, the surface tension effects are incorporated into the free surface boundary conditions through the computation of the capillary pressure. The required curvature is estimated by fitting a least-squares circle to the free surface, using the tracking particles in the cell and in its close neighbours. On a sub-cell scale, short-wavelength perturbations are filtered out using a local 4-point stencil which is mass conservative. An efficient implementation is obtained through a dual representation of the cell data, using both a matrix representation, for ease of identifying neighbouring cells, and a tree data structure, which permits the representation of specific groups of cells with additional information pertaining to that group. The resulting code is shown to be robust, and to produce accurate results when compared with exact solutions of selected fluid dynamics problems involving surface tension.
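The least-squares circle fit used for the curvature estimate can be sketched as follows. This is the classical algebraic (Kåsa-style) fit, offered as an illustration of the technique named in the abstract rather than as the GENSMAC2D implementation itself:

```python
import math

def fit_circle(points):
    """Least-squares circle fit: solve x^2 + y^2 = a*x + b*y + c for
    (a, b, c) via the normal equations; centre = (a/2, b/2)."""
    Sxx = Sxy = Syy = Sx = Sy = Szx = Szy = Sz = 0.0
    n = float(len(points))
    for x, y in points:
        z = x * x + y * y
        Sxx += x * x; Sxy += x * y; Syy += y * y
        Sx += x; Sy += y
        Szx += z * x; Szy += z * y; Sz += z
    a, b, c = _solve3([[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, n]],
                      [Szx, Szy, Sz])
    cx, cy = a / 2.0, b / 2.0
    r = math.sqrt(c + cx * cx + cy * cy)
    return cx, cy, r  # curvature of the fitted interface is 1/r

def _solve3(M, v):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    A = [M[i][:] + [v[i]] for i in range(3)]
    for i in range(3):
        p = max(range(i, 3), key=lambda row: abs(A[row][i]))
        A[i], A[p] = A[p], A[i]
        for row in range(i + 1, 3):
            f = A[row][i] / A[i][i]
            for col in range(i, 4):
                A[row][col] -= f * A[i][col]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (A[i][3] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return x

# tracking particles lying on a circle of radius 3 centred at (1, 2)
pts = [(1 + 3 * math.cos(t), 2 + 3 * math.sin(t))
       for t in [k * 0.7 for k in range(8)]]
cx, cy, r = fit_circle(pts)
```

With noisy particle positions the fit averages out the noise, which is exactly why a least-squares circle is preferred over finite-differencing the interface directly.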
Abstract:
The aim of this study was to investigate the diagnosis delay and its impact on the stage of disease. The study also evaluated nuclear DNA content, immunohistochemical expression of Ki-67 and bcl-2, and the correlation of these biological features with the clinicopathological features and patient outcome. 200 Libyan women diagnosed during 2008–2009 were interviewed about the period from the first symptoms to the final histological diagnosis of breast cancer. Retrospective preclinical and clinical data were also collected from medical records on a form (questionnaire) in association with the interview. Tumor material of the patients was collected and nuclear DNA content analysed using DNA image cytometry. The expression of Ki-67 and bcl-2 was assessed using immunohistochemistry (IHC). The studies described in this thesis show that the median diagnosis time for women with breast cancer was 7.5 months and that 56% of patients were diagnosed after a period longer than 6 months. Inappropriate reassurance that the lump was benign was an important reason for prolongation of the diagnosis time. Diagnosis delay was also associated with initial breast symptom(s) that did not include a lump, old age, illiteracy, and a history of benign fibrocystic disease. The patients with diagnosis delay had larger tumor size (p<0.0001), positive lymph nodes (p<0.0001), and a high incidence of late clinical stages (p<0.0001). Biologically, 82.7% of tumors were aneuploid and 17.3% were diploid. The median SPF of tumors was 11%, while the median positivity of Ki-67 was 27.5%. High Ki-67 expression was found in 76% of patients, and high SPF values in 56% of patients. Positive bcl-2 expression was found in 62.4% of tumors; 72.2% of the bcl-2 positive samples were ER-positive. Tumors with DNA aneuploidy, high proliferative activity and negative bcl-2 expression were associated with a high grade of malignancy and short survival.
The SPF value is a useful cell proliferation marker in assessing prognosis, and the decision cut point of 11% for SPF in the Libyan material was clearly significant (p<0.0001). Bcl-2 is a powerful prognosticator and an independent predictor of breast cancer outcome in the Libyan material (p<0.0001). Libyan breast cancer was investigated in these studies from two different aspects: health services and biology. The results show that diagnosis delay is a very serious problem in Libya and is associated with complex interactions between many factors, leading to advanced stages and potentially to high mortality. Cytometric DNA variables, proliferative markers (Ki-67 and SPF), and bcl-2 oncoprotein negativity reflect the aggressive behavior of Libyan breast cancer and could be used together with traditional factors to predict the outcome of individual patients and to select appropriate therapy.
Abstract:
Fireside deposits can be found in many types of utility and industrial furnaces. The deposits in furnaces are problematic because they can reduce heat transfer, block gas paths and cause corrosion. To tackle these problems, it is vital to estimate the influence of deposits on heat transfer, to minimize deposit formation and to optimize deposit removal. It is beneficial to have a good understanding of the mechanisms of fireside deposit formation. Numerical modeling is a powerful tool for investigating the heat transfer in furnaces, and it can provide valuable information for understanding the mechanisms of deposit formation. In addition, a sub-model of deposit formation is generally an essential part of a comprehensive furnace model. This work investigates two specific processes of fireside deposit formation in two industrial furnaces. The first process is the slagging wall found in furnaces with molten deposits running on the wall. A slagging wall model is developed to take into account the two-layer structure of the deposits. With the slagging wall model, the thickness and the surface temperature of the molten deposit layer can be calculated. The slagging wall model is used to predict the surface temperature and the heat transfer to a specific section of a super-heater tube panel with the boundary condition obtained from a Kraft recovery furnace model. The slagging wall model is also incorporated into the computational fluid dynamics (CFD)-based Kraft recovery furnace model and applied on the lower furnace walls. The implementation of the slagging wall model includes a grid simplification scheme. The wall surface temperature calculated with the slagging wall model is used as the heat transfer boundary condition. Simulation of a Kraft recovery furnace is performed, and it is compared with two other cases and measurements. 
In the two other cases, a uniform wall surface temperature and a wall surface temperature calculated with a char bed burning model are used as the heat transfer boundary conditions. In this particular furnace, the wall surface temperatures from the three cases are similar and are in the correct range of the measurements. Nevertheless, the wall surface temperature profiles with the slagging wall model and the char bed burning model are different because the deposits are represented differently in the two models. In addition, the slagging wall model is proven to be computationally efficient. The second process is deposit formation due to thermophoresis of fine particles to the heat transfer surface. This process is considered in the simulation of a heat recovery boiler of the flash smelting process. In order to determine if the small dust particles stay on the wall, a criterion based on the analysis of forces acting on the particle is applied. Time-dependent simulation of deposit formation in the heat recovery boiler is carried out and the influence of deposits on heat transfer is investigated. The locations prone to deposit formation are also identified in the heat recovery boiler. Modeling of the two processes in the two industrial furnaces enhances the overall understanding of the processes. The sub-models developed in this work can be applied in other similar deposit formation processes with carefully-defined boundary conditions.
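The stick-or-bounce criterion for a fine particle arriving at the wall can be pictured as a force balance. The version below, comparing van der Waals adhesion against Stokes drag, is a generic schematic with invented parameter values, not the criterion actually derived in this work:

```python
import math

def particle_sticks(d_particle, gas_viscosity, gas_velocity,
                    hamaker=1e-19, separation=4e-10):
    """Crude stick/bounce test for a fine particle at a wall: it sticks
    if van der Waals adhesion exceeds the Stokes drag trying to
    re-entrain it. All inputs in SI units; hamaker is the Hamaker
    constant and separation the particle-wall contact distance."""
    f_adhesion = hamaker * d_particle / (12.0 * separation ** 2)
    f_drag = 3.0 * math.pi * gas_viscosity * d_particle * gas_velocity
    return f_adhesion > f_drag

# a 1 micron particle in slow hot gas sticks; at very high near-wall
# velocity the drag wins and it does not
particle_sticks(1e-6, 4e-5, 0.1)     # True
particle_sticks(1e-6, 4e-5, 1000.0)  # False
```

A criterion of this shape explains why thermophoretic deposition is dominated by the finest particles: adhesion scales linearly with diameter while re-entraining forces grow faster for larger particles and higher velocities.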
Abstract:
Cross-sector collaboration and partnerships have become an emerging and desired strategy for addressing major social and environmental challenges. Despite its popularity, managing cross-sector collaboration has proven to be very challenging. Even though cross-sector collaboration and partnership management have been widely studied and discussed in recent years, their effectiveness, as well as their ability to create value with respect to the problems they address, has remained limited, and there is little or no evidence of their ability to create value. In view of these challenges, this study aims to explore how to manage cross-sector collaborations and partnerships in order to improve their effectiveness and to create more value for all partners involved in the collaboration as well as for customers. The thesis is divided into two parts. The first part comprises an overview of the relevant literature (including strategic management, value networks and value creation theories), followed by a presentation of the results of the whole thesis and the contribution made by the study. The second part consists of six research publications, including both quantitative and qualitative studies. The chosen research strategy is triangulation, and the study includes four types of triangulation: (1) theoretical triangulation, (2) methodological triangulation, (3) data triangulation and (4) researcher triangulation. Two publications represent conceptual development and are based on secondary data research. One publication is a quantitative study, carried out through a survey. The other three publications represent qualitative studies based on case studies, where data was collected through interviews and workshops with the participation of managers from all three sectors: public, private and the third (nonprofit).
The study consolidates the field of "strategic management of value networks," which is proposed to be applied in the context of cross-sector collaboration and partnerships with the aim of increasing their effectiveness and improving the process of value creation. Furthermore, the study proposes a first definition of the strategic management of value networks, and proposes and develops two strategy tools recommended for the strategic management of value networks in cross-sector collaboration and partnerships. Taking a step further, the study implements the strategy tools in practice, aiming to demonstrate how new value can be created by using the developed strategy tools for the strategic management of value networks. This study makes four main contributions. (1) First, it makes a theoretical contribution by providing new insights and consolidating the field of strategic management of value networks, also proposing a first definition of the strategic management of value networks. (2) Second, the study makes a methodological contribution by proposing and developing two strategy tools for value networks of cross-sector collaboration: (a) value network mapping, a method that allows the assessment of the current and the potential value network, and (b) the Value Network Scorecard, a method of performance measurement and performance prediction in cross-sector collaboration. (3) Third, the study has managerial implications, offering new solutions and empirical evidence on how to increase the effectiveness of cross-sector collaboration, and allows managers to understand how new value can be created in cross-sector partnerships and how to realise the full potential of collaboration.
(4) Fourth, the study has practical implications, showing managers how to use the strategy tools developed in this study in practice and discussing the limitations of the proposed tools as well as the general limitations of the study.
Abstract:
Toxic cyanobacteria in drinking water supplies can cause serious public health problems. In the present study we analyzed the time course of changes in lung histology in young and adult male Swiss mice injected intraperitoneally (ip) with a cyanobacterial extract containing hepatotoxic microcystins. Microcystins are cyclic heptapeptides, quantified here by ELISA. Ninety mice were divided into two groups: group C received an injection of saline (300 µl, ip) and group Ci received a sublethal dose of microcystins (48.2 µg/kg, ip). Mice of the Ci group were further divided into young (4 weeks old) and adult (12 weeks old) animals. At 2 and 8 h and at 1, 2, 3, and 4 days after injection of the toxic cyanobacterial extract, the mice were anesthetized and the trachea was occluded at end-expiration. The lungs were removed en bloc, fixed, sectioned, and stained with hematoxylin-eosin. The percentage of alveolar collapse area and the number of infiltrating polymorphonuclear (PMN) and mononuclear cells were determined by point counting. Alveolar collapse increased from group C to all Ci groups (by 123 to 262%) independently of time, reaching a maximum earlier in young than in adult animals. The number of PMN cells increased with lesion duration (by 52 to 161%). The inflammatory response also peaked earlier in young than in adult mice: after 2 days, PMN levels remained unchanged in adult mice, while in young mice the maximum was observed at day 1, with similar levels at days 2, 3, and 4. We conclude that the toxins and/or other cyanobacterial compounds probably exert these effects by reaching the lung through the bloodstream after ip injection.
Abstract:
cDNA coding for two digestive lysozymes (MdL1 and MdL2) of the housefly Musca domestica was cloned and sequenced. MdL2 is a novel minor lysozyme, whereas MdL1 is the major lysozyme thus far purified from the M. domestica midgut. MdL1 and MdL2 were expressed as recombinant proteins in Pichia pastoris, purified and characterized. The lytic activities of MdL1 and MdL2 upon Micrococcus lysodeikticus have an acidic pH optimum (4.8) at low ionic strength (μ = 0.02), which shifts towards an even more acidic value, pH 3.8, at high ionic strength (μ = 0.2). However, the pH optimum of their activities upon 4-methylumbelliferyl N-acetylchitotrioside (4.9) is not affected by ionic strength. These results suggest that the acidic pH optimum is an intrinsic property of MdL1 and MdL2, whereas the pH optimum shifts are an effect of ionic strength on the negatively charged bacterial wall. The affinity of MdL2 for the bacterial cell wall is lower than that of MdL1. Differences in isoelectric point (pI) indicate that MdL2 (pI = 6.7) is less positively charged than MdL1 (pI = 7.7) at their pH optima, which suggests that electrostatic interactions might be involved in substrate binding. Consistent with this, the affinities of MdL1 and MdL2 for the bacterial cell wall decrease as ionic strength increases.
Abstract:
The aim of this study is to investigate a value-added service concept for a case company in asset and real estate management. The initial purpose was to identify which key performance indicator (KPI) information, delivered through the value-added service, adds the most value for the company's customers, real estate investors. The multiple case study strategy included two focus group interviews, with five case interviews in total. Additionally, quality function deployment (QFD) was used to structure the service process. The study begins with an introduction and a methodology chapter explaining the motivation for the thesis. The subsequent chapter presents the theoretical background on real estate management KPIs from four main points of view, along with quality function deployment from the service development perspective; it also defines the research gap for the case study. According to the case study interviews, the KPIs most favored for delivery to clients are the income maturity of lease agreements and leasing activity. These KPIs and quality characteristics are translated into the QFD. In total, the service QFD covers the service planning, process control, and action plan phases.
Abstract:
The context of this study is corporate e-learning, with an explicit focus on how digital learning design can facilitate self-regulated learning (SRL). The field of e-learning is growing rapidly: an increasing number of corporations use digital technology and e-learning to train their workforce and customers. E-learning may offer economic benefits, as well as opportunities for interaction and communication that traditional teaching cannot provide. However, the evolving variety of digital learning contexts makes new demands on learners, requiring them to develop strategies to adapt to and cope with novel learning tools. This study stems from the need to learn more about learning experiences in digital contexts in order to design such contexts properly for learning. The research question targets how the design of an e-learning course influences participants’ self-regulated learning actions and intentions. SRL involves learners’ ability to exercise agency in their learning. Micro-level SRL processes were targeted by exploring behaviour, cognition, and affect/motivation in relation to the design of the digital context. Two iterations of an e-learning course were tested on two groups of participants (N=17). However, the exploration of SRL extends beyond the educational design research perspective of comparing the effects of the changes to the course designs. The study was conducted in a laboratory, with each participant individually. Multiple types of data were collected; however, the results presented in this thesis are based on screen observations (including eye tracking) and video-stimulated recall interviews. These data were integrated in order to achieve a broad perspective on SRL. The most essential change in the second course iteration was the addition of feedback during practice and the final test. 
Without feedback on their actions, there was an observable difference between participants who were instruction-directed and those who were self-directed in manipulating the context and thus persisted whenever faced with problems. In the second course iteration, which included the feedback, this difference was not found. Feedback provided the tipping point for participants to regulate their learning by identifying their knowledge gaps and exploring the learning context in a targeted manner. Furthermore, the course content was consistently seen from a pragmatic perspective, which influenced the participants’ choice of actions, showing that real-life relevance is an important need of corporate learners. This also relates to assessment and the consideration of its purpose in relation to participants’ work situation. The rigidity of the multiple-choice questions, which focused on the memorisation of details, led the participants to adopt a surface-learning approach. It also caused frustration in cases where the participants’ epistemic beliefs were incompatible with this style of assessment. Triggers of positive and negative emotions could be categorized into four levels: personal factors, instructional design of content, interface design of context, and technical solution. In summary, the key design choices for creating a positive learning experience involve feedback, flexibility, functionality, fun, and freedom. The design of the context affects the regulation of behaviour and cognition, as well as affect and motivation. The learners’ awareness of these areas of regulation in relation to learning in a specific context constitutes their capacity for design-based epistemic metareflection. I describe this metareflection as knowing how to manipulate the context behaviourally for maximum learning, being metacognitively aware of one’s learning process, and being aware of how emotions can be regulated to maintain volitional control of the learning situation. 
Attention needs to be paid to how the design of a digital learning context supports learners’ metareflective development as digital learners. Every digital context has its own affordances and constraints, which influence the possibilities for micro-level SRL processes. Empowering learners to develop their ability for design-based epistemic metareflection is therefore essential for building their digital literacy in relation to these affordances and constraints. It was evident that the implementation of e-learning in the workplace is not unproblematic and requires new ways of thinking about learning and about how we create learning spaces. Digital contexts bring a new culture of learning that demands a change of attitude in how we value knowledge, measure it, define who owns it, and decide who creates it. Based on the results, I argue that digital solutions for corporate learning ought to be built as an integrated system that facilitates socio-cultural connectivism within the corporation. The focus needs to shift from designing static e-learning material to managing networks of social meaning negotiation as part of a holistic corporate learning ecology.