76 results for Interdependency
Abstract:
The enhanced immune responses for DNA and subunit vaccines potentiated by surfactant-vesicle-based delivery systems, outlined in the present study, provide proof of principle for the beneficial aspects of vesicle-mediated vaccination. The dehydration-rehydration technique was used to entrap plasmid DNA or subunit antigens into lipid-based (liposome) or non-ionic surfactant-based (niosome) dehydration-rehydration vesicles (DRV). Using this procedure, it was shown that both types of antigens can be effectively entrapped in DRV liposomes and DRV niosomes. DRV niosomes were shown to be approximately twice the diameter (~2 µm) of their liposome counterparts. Incorporation of cryoprotectants such as sucrose in the DRV procedure reduced vesicle sizes while retaining high DNA incorporation efficiency (~95%). Transfection studies in COS-7 cells demonstrated that the choice of cationic lipid, the helper lipid, and the method of preparation all influenced transfection efficiency, indicating a strong interdependency among these factors. This phenomenon was further reinforced when 1,2-dioleoyl-sn-glycero-3-phosphoethanolamine (DOPE):3β-[N-(N′,N′-dimethylaminoethane)-carbamoyl]cholesterol (DC-Chol)/DNA complexes were supplemented with non-ionic surfactants. Morphological analysis of these complexes using transmission electron microscopy and environmental scanning electron microscopy (ESEM) revealed the presence of heterogeneous structures which, in addition to the fusogenic properties of DOPE, may be essential for efficient transfection. In vivo evaluation of these DNA-incorporated vesicle systems in BALB/c mice showed weak antibody and cell-mediated immune (CMI) responses. A subsequent mock challenge with hepatitis B antigen demonstrated that 1-monopalmitoyl glycerol (MP)-based DRV is a more promising DNA vaccine adjuvant. Studying these DRV systems as adjuvants for the hepatitis B subunit antigen (HBsAg) revealed a balanced antibody/CMI response profile, with HBsAg-specific antibody and cytokine responses higher than those of the unadjuvanted antigen. The effect of adding MP, cholesterol and trehalose 6,6′-dibehenate (TDB) on the stability and immuno-efficacy of dimethyldioctadecylammonium bromide (DDA) vesicles was investigated. Differential scanning calorimetry showed a reduction in the transition temperature of DDA vesicles by ~12°C when surfactants were incorporated. ESEM of the MP-based DRV system indicated increased vesicle stability upon incorporation of antigen. Adjuvant activity of these systems, tested in C57BL/6J mice against three subunit antigens, i.e., the mycobacterial fusion protein Ag85B-ESAT-6 and two malarial antigens, merozoite surface protein-1 (MSP1) and glutamate-rich protein (GLURP), revealed that while MP- and DDA-based systems induced comparable antibody responses, DDA-based systems induced powerful CMI responses.
Abstract:
The increased data complexity and task interdependency associated with servitization represent significant barriers to its adoption. The outline of a business game is presented which demonstrates the increasing complexity of the management problem when moving through the Base, Intermediate and Advanced levels of servitization. Linked Data is proposed as an agile set of technologies, based on well-established standards, for data exchange both in the game and more generally in supply chains.
Abstract:
Purpose: The servitization of manufacturing is a diverse and complex field of research interest. The purpose of this paper is to provide an integrative and organising lens for viewing the various contributions to knowledge production from those research communities addressing servitization. To achieve this, the paper sets out to address two principal questions, namely: where are the knowledge stocks and flows amongst the research communities? And what are the generic research concerns being addressed by these communities? Design/methodology/approach: Using an evidence-based approach, the authors have performed a systematic review of the research literature associated with the servitization of manufacturing. This investigation incorporates a descriptive and thematic analysis of 148 academic and scholarly papers from 103 different lead authors in 68 international peer-reviewed journals. Findings: The work proposes support for the existence of distinct researcher communities, namely services marketing, service management, operations management, product-service systems and service science management and engineering, which are contributing to knowledge production in the servitization of manufacturing. Knowledge stocks within all communities associated with research in the servitization of manufacturing have increased dramatically since the mid-1990s. The trends clearly reveal that the operations community is in receipt of the majority of citations relating to the servitization of manufacturing. In terms of knowledge flows, it is apparent that the more mature communities are drawing on more locally produced knowledge stocks, whereas the emergent communities are drawing on a knowledge base more evenly distributed across all the communities. The results are indicative of varying degrees of interdependency amongst the communities. The generic research concerns being addressed within the communities are associated with the concepts of product-service differentiation, competitive strategy, customer value, customer relationships and product-service configuration. Originality/value: This research has further developed and articulated the identities of distinct researcher communities actively contributing to knowledge production in the servitization of manufacturing, and to what extent they are pursuing common research agendas. This study provides an improved descriptive and thematic awareness of the resulting body of knowledge, allowing the field of servitization to progress in a more informed and multidisciplinary fashion. © Emerald Group Publishing Limited.
Abstract:
The re-entrant flow shop scheduling problem (RFSP) is regarded as an NP-hard problem and has attracted the attention of both researchers and industry. Current approaches attempt to minimize the makespan of the RFSP without considering the interdependency between the resource constraints and the re-entrant probability. This paper proposes a multi-level genetic algorithm (GA) that includes the correlated re-entrant probability and production mode in a multi-level chromosome encoding. A repair operator is incorporated into the multi-level GA to revise infeasible solutions by resolving resource conflicts. With the objective of minimizing the makespan, ANOVA is used to fine-tune the parameter settings of the GA. Experiments show that the proposed approach is more effective at finding near-optimal schedules than a simulated annealing algorithm for both small-size and large-size problems. © 2013 Published by Elsevier Ltd.
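For readers unfamiliar with the approach, the following minimal Python sketch illustrates the general idea of a GA with a repair operator for a re-entrant flow shop. The encoding, the simplified permutation-based makespan model, and all parameter values are illustrative assumptions, not the paper's actual algorithm.

import random
from collections import Counter

# Simplified re-entrant flow shop: every pass of a job visits all machines in order.
# passes[j] = number of times job j enters the line; proc[j][m] = processing time on machine m.

def makespan(seq, proc, n_machines):
    machine_free = [0.0] * n_machines
    job_ready = Counter()                          # finish time of each job's previous pass
    for j in seq:
        prev = job_ready[j]
        for m in range(n_machines):
            start = max(prev, machine_free[m])     # wait for the machine and for the previous operation
            prev = start + proc[j][m]
            machine_free[m] = prev
        job_ready[j] = prev
    return max(machine_free)

def repair(chrom, passes):
    # Revise an infeasible chromosome: over-represented job passes are replaced by missing ones.
    counts = Counter(chrom)
    missing = [j for j in passes for _ in range(max(0, passes[j] - counts[j]))]
    seen, out = Counter(), []
    for j in chrom:
        seen[j] += 1
        out.append(j if seen[j] <= passes[j] else missing.pop())
    return out

def evolve(passes, proc, n_machines, pop_size=40, gens=200, p_mut=0.2):
    base = [j for j in passes for _ in range(passes[j])]
    fit = lambda c: makespan(c, proc, n_machines)
    pop = [random.sample(base, len(base)) for _ in range(pop_size)]
    for _ in range(gens):
        new_pop = []
        while len(new_pop) < pop_size:
            a, b = (min(random.sample(pop, 3), key=fit) for _ in range(2))   # tournament selection
            cut = random.randrange(1, len(base))
            child = repair(a[:cut] + b[cut:], passes)                        # one-point crossover + repair
            if random.random() < p_mut:                                      # swap mutation
                i, k = random.sample(range(len(base)), 2)
                child[i], child[k] = child[k], child[i]
            new_pop.append(child)
        pop = new_pop
    best = min(pop, key=fit)
    return best, fit(best)

# Toy instance: three jobs on two machines, job 0 re-enters the line twice.
passes = {0: 2, 1: 1, 2: 1}
proc = {0: [3, 2], 1: [2, 4], 2: [5, 1]}
print(evolve(passes, proc, n_machines=2))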
Abstract:
Over the past few decades, we have been enjoying tremendous benefits thanks to the revolutionary advancement of computing systems, driven mainly by remarkable semiconductor technology scaling and increasingly complicated processor architectures. However, the exponentially increased transistor density has directly led to exponentially increased power consumption and dramatically elevated system temperature, which not only adversely impacts the system's cost, performance and reliability, but also increases leakage and thus the overall power consumption. Today, power and thermal issues pose enormous challenges and threaten to slow down the continuous evolution of computer technology. Effective power/thermal-aware design techniques are urgently demanded at all design abstraction levels, from the circuit level and the logic level to the architectural level and the system level. In this dissertation, we present our research efforts to employ real-time scheduling techniques to solve resource-constrained power/thermal-aware design-optimization problems. In our research, we developed a set of simple yet accurate system-level models to capture the processor's thermal dynamics as well as the interdependency of leakage power consumption, temperature, and supply voltage. Based on these models, we investigated the fundamental principles of power/thermal-aware scheduling, and developed real-time scheduling techniques targeting a variety of design objectives, including peak temperature minimization, overall energy reduction, and performance maximization. The novelty of this work is that we integrate cutting-edge research on power and thermal behavior at the circuit and architectural level into a set of accurate yet simplified system-level models, and are able to conduct system-level analysis and design based on these models. The theoretical study in this work serves as a solid foundation to guide the development of power/thermal-aware scheduling algorithms in practical computing systems.
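A minimal sketch of the kind of system-level coupling referred to above, assuming a single lumped thermal RC node and a linearised temperature-dependent leakage term; all names and parameter values are illustrative assumptions, not the dissertation's actual model.

# Lumped thermal RC node coupled with linearised leakage: P_leak = (c0 + c1*T) * V.
# All parameter values below are illustrative, not measured data.

def steady_state_temperature(p_dyn, v, t_amb=45.0, r_th=0.8, c_th=50.0,
                             c0=0.4, c1=0.012, dt=0.01, steps=60000):
    t = t_amb
    for _ in range(steps):
        p_leak = (c0 + c1 * t) * v                        # leakage grows with temperature
        p_total = p_dyn + p_leak
        t += dt * (p_total - (t - t_amb) / r_th) / c_th   # forward-Euler step of dT/dt
    return t, p_leak

# Raising supply voltage increases dynamic and leakage power, which raises temperature,
# which in turn increases leakage again: the interdependency loop such models capture.
print(steady_state_temperature(p_dyn=35.0, v=1.1))
print(steady_state_temperature(p_dyn=25.0, v=0.9))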
Abstract:
This paper reflects a research project on the influence of online news media (from print, radio, and televised outlets) on disaster response. Coverage of the October 2010 Indonesian tsunami and earthquake was gathered from 17 sources from October 26 through November 30. These data were analyzed quantitatively with respect to coverage intensity over time and among outlets. Qualitative analyses were also conducted using keywords and a value scale that assessed the degree of positivity or negativity associated with each keyword in the context of accountability. Results yielded insights into the influence of online media on actors' assumption of accountability and the quality of response. They also provided information on the optimal time window in which advocates and disaster management specialists can best present recommendations to improve policy and raise awareness. Coverage of outlets was analyzed individually, in groups, and as a whole, in order to discern behavior patterns for a better understanding of media interdependency. This project produced analytical insights but is primarily intended as a prototype for more refined and extensive research.
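As a rough illustration of the kind of quantitative and qualitative analysis described, the sketch below computes coverage intensity per outlet and day together with a mean accountability score derived from a keyword value scale. The keywords, their scores, and the column layout are hypothetical, not the project's actual coding scheme.

import pandas as pd

# Hypothetical value scale: keywords scored from -2 (strongly negative) to +2 (strongly
# positive) in the context of accountability. Keywords and scores are illustrative only.
VALUE_SCALE = {"transparent": 2, "responsive": 1, "delay": -1, "negligence": -2, "cover-up": -2}

def accountability_score(text):
    # Mean value of scale keywords found in an article; 0 if none are present.
    hits = [v for kw, v in VALUE_SCALE.items() if kw in text.lower()]
    return sum(hits) / len(hits) if hits else 0.0

def analyse_coverage(articles):
    # articles: DataFrame with columns 'date', 'outlet', 'text' (assumed layout).
    df = articles.copy()
    df["score"] = df["text"].map(accountability_score)
    intensity = df.groupby(["date", "outlet"]).size().rename("n_articles")    # coverage intensity
    tone = df.groupby(["date", "outlet"])["score"].mean().rename("mean_score")
    return pd.concat([intensity, tone], axis=1)

articles = pd.DataFrame({
    "date": ["2010-10-27", "2010-10-27", "2010-10-28"],
    "outlet": ["outlet_a", "outlet_b", "outlet_a"],
    "text": ["Officials praised for transparent response.",
             "Aid delivery delay criticised.",
             "Reports allege negligence in warning system."],
})
print(analyse_coverage(articles))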
Abstract:
A substantial amount of work in the field of strategic management has attempted to explain the antecedents and outcomes of organizational learning. Though multinational corporations simultaneously engage in various types of tasks, activities, and strategies on a regular basis, the transfer of organizational learning in a multi-task context has largely remained under-explored in the literature. To inform our understanding in this area, this dissertation aimed at synthesizing findings from two parallel research streams of corporate development activities: strategic alliances and acquisitions. Structured in the form of two empirical studies, this dissertation examines: 1) the strategic outcomes of alliance experience of previously allying partners in terms of subsequent acquisition attempts, and 2) the performance implications of prior alliance experience for acquisitions. The first study draws on the relational view of inter-organizational governance to explain how various deal-specific and dyadic characteristics of a partnership relate to partnering firms' post-alliance acquisition attempts. This model theorizes on a variety of relational mechanisms to build a cohesive theory of inter-organizational exchanges in a multi-task setting where strategic alliances ultimately lead to a firm's decision to commit further resources. The second study applies organizational learning theory, and specifically examines whether frequency, recency, and relatedness of different dimensions of prior alliances, beyond the dyad-level experience, relate to an acquirer's superior post-acquisition performance. The hypotheses of the studies are tested using logistic and ordinary least square regressions, respectively. Results analyzed from a sample of cross-border alliance and acquisition deals attempted (for study I) and/or completed (for study II) during the period of 1991 to 2011 generally support the theory that relational exchange determines acquiring firms' post alliance acquisition behavior and that organizational routines and learning from prior alliances influence a future acquirer's financial performance. Overall, the empirical findings support our overarching theory of interdependency, and confirm the transfer effect of learning across these alternate, yet related corporate strategies of alliance and acquisition.
Abstract:
An object-based image analysis (OBIA) approach was used to create a habitat map of the Lizard Reef. Briefly, georeferenced dive and snorkel photo-transect surveys were conducted at different locations surrounding Lizard Island, Australia. For the surveys, a snorkeler or diver swam over the bottom at a depth of 1-2 m in the lagoon, One Tree Beach and Research Station areas, and at 7 m depth in Watson's Bay, while taking photos of the benthos at a set height using a standard digital camera and towing a surface-float GPS which was logging its track every five seconds. The camera lens provided a 1.0 m x 1.0 m footprint, at 0.5 m height above the benthos. Horizontal distance between photos was estimated by fin kicks, and corresponded to a surface distance of approximately 2.0 - 4.0 m. Coordinates of each benthic photo were approximated from the photo timestamp and the GPS track timestamps, using GPS Photo Link Software (www.geospatialexperts.com). The coordinates of each photo were interpolated by finding the GPS coordinates logged at a set time before and after the photo was captured. Dominant benthic or substrate cover type was assigned to each photo by placing 24 random points over each image using the Coral Point Count with Excel extensions program (CPCe; Kohler and Gill, 2006). Each point was then assigned a dominant cover type using a benthic cover type classification scheme containing nine first-level categories - seagrass high (>=70%), seagrass moderate (40-70%), seagrass low (<= 30%), coral, reef matrix, algae, rubble, rock and sand. Benthic cover composition summaries of each photo were generated automatically in CPCe. The resulting benthic cover data for each photo were linked to the GPS coordinates, saved as an ArcMap point shapefile, and projected to Universal Transverse Mercator WGS84 Zone 56 South. The OBIA class assignment followed a hierarchical scheme based on membership rules with levels for "reef", "geomorphic zone" and "benthic community" (above).
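The timestamp-based interpolation of photo positions can be sketched as follows; the actual processing used GPS Photo Link Software, so this Python function, its field layout, and the example coordinates and timestamps are only an illustrative approximation.

from bisect import bisect_left
from datetime import datetime

def interpolate_position(photo_time, track):
    # track: list of (datetime, lat, lon) GPS fixes sorted by time (logged every ~5 s).
    # Linearly interpolate between the fixes logged immediately before and after the photo.
    times = [t for t, _, _ in track]
    i = bisect_left(times, photo_time)
    if i == 0:
        return track[0][1:]
    if i >= len(track):
        return track[-1][1:]
    (t0, lat0, lon0), (t1, lat1, lon1) = track[i - 1], track[i]
    w = (photo_time - t0).total_seconds() / (t1 - t0).total_seconds()
    return lat0 + w * (lat1 - lat0), lon0 + w * (lon1 - lon0)

# Example: a photo taken 2 s into a 5 s gap between two logged fixes.
track = [(datetime(2011, 8, 1, 9, 0, 0), -14.6870, 145.4480),
         (datetime(2011, 8, 1, 9, 0, 5), -14.6871, 145.4482)]
print(interpolate_position(datetime(2011, 8, 1, 9, 0, 2), track))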
Abstract:
The semantic model developed in this research was in response to the difficulty a group of mathematics learners had with conventional mathematical language and their interpretation of mathematical constructs. In order to develop the model, ideas from linguistics, psycholinguistics, cognitive psychology, formal languages and natural language processing were investigated. This investigation led to the identification of four main processes: the parsing process, syntactic processing, semantic processing and conceptual processing. The model showed the complex interdependency between these four processes and provided a theoretical framework in which the behaviour of the mathematics learner could be analysed. The model was then extended to include the use of technological artefacts in the learning process. To facilitate this aspect of the research, the theory of instrumentation was incorporated into the semantic model. The conclusion of this research was that although the cognitive processes were interdependent, they could develop at different rates until mastery of a topic was achieved. It also found that the introduction of a technological artefact into the learning environment introduced another layer of complexity, both in terms of the learning process and the underlying relationship between the four cognitive processes.
Abstract:
Due to the increasing integration density and operating frequency of today's high-performance processors, the temperature of a typical chip can easily exceed 100 degrees Celsius. However, the runtime thermal state of a chip is very hard to predict and manage due to the random nature of computing workloads, as well as process, voltage and ambient temperature variability (together called PVT variability). The uneven nature (both in time and space) of the heat dissipation of the chip can lead to severe reliability issues and error-prone chip behavior (e.g. timing errors). Many dynamic power/thermal management techniques have been proposed to address this issue, such as dynamic voltage and frequency scaling (DVFS), clock gating, etc. However, most such techniques require accurate knowledge of the runtime thermal state of the chip to make efficient and effective control decisions. In this work we address the problem of tracking and managing the temperature of microprocessors, which includes the following sub-problems: (1) how to design an efficient sensor-based thermal tracking system on a given design that can provide accurate real-time temperature feedback; (2) what statistical techniques can be used to estimate the full-chip thermal profile based on very limited (and possibly noise-corrupted) sensor observations; (3) how to adapt to changes in the underlying system's behavior, since such changes can impact the accuracy of the thermal estimation. The thermal tracking methodology proposed in this work is enabled by on-chip sensors, which are already implemented in many modern processors. We first investigate the underlying relationship between heat distribution and power consumption, then introduce an accurate thermal model for the chip system. Based on this model, we characterize the temperature correlation that exists among different chip modules and explore statistical approaches (such as those based on the Kalman filter) that can utilize such correlation to estimate accurate chip-level thermal profiles in real time. Such estimation is performed with limited sensor information because sensors are usually resource-constrained and noise-corrupted. We also take a further step to extend the standard Kalman filter approach to account for (1) nonlinear effects such as the leakage-temperature interdependency and (2) varying statistical characteristics in the underlying system model. The proposed thermal tracking infrastructure and estimation algorithms can consistently generate accurate thermal estimates even when the system is switching among workloads with very distinct characteristics. Through experiments, our approaches have demonstrated promising results with much higher accuracy than existing approaches. Such results can be used to ensure thermal reliability and improve the effectiveness of dynamic thermal management techniques.
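To make the estimation idea concrete, the sketch below runs a standard Kalman filter on a hypothetical two-module chip in which only one module carries a sensor. The dynamics, noise covariances, and power inputs are illustrative values, not the models developed in this work.

import numpy as np

# Hypothetical two-module chip; only module 0 carries a thermal sensor.
A = np.array([[0.95, 0.03],
              [0.02, 0.96]])        # discrete-time thermal dynamics (illustrative values)
B = np.array([[0.80, 0.00],
              [0.00, 0.70]])        # power-to-temperature gain per module
H = np.array([[1.0, 0.0]])          # observation matrix: only module 0 is instrumented
Q = np.eye(2) * 0.05                # process noise (workload/power uncertainty)
R = np.array([[0.5]])               # sensor noise variance

def kalman_step(x, P, u, z):
    # One predict/update cycle: u = per-module power input, z = noisy sensor reading.
    x = A @ x + B @ u                # predict thermal state from the model
    P = A @ P @ A.T + Q
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)          # correct with the single sensor observation
    P = (np.eye(2) - K @ H) @ P
    return x, P

x_est, P = np.zeros(2), np.eye(2) * 5.0
x_true = np.zeros(2)
for _ in range(200):
    u = np.array([2.0, 1.0])                       # per-module power (illustrative workload)
    x_true = A @ x_true + B @ u                    # simulated ground-truth temperature rise
    z = H @ x_true + np.random.randn(1) * 0.7      # noisy reading from the single sensor
    x_est, P = kalman_step(x_est, P, u, z)
print("true rise:", x_true, "estimated rise:", x_est)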
Abstract:
Critical infrastructure protection has become an essential issue in the international system and within states. More recently, Portugal has begun to follow this trend. In this debate, the identification of the infrastructures that should be considered critical is of crucial importance. The main purpose of this identification is to reduce the vulnerabilities of these infrastructures and to use resources efficiently in protecting them. But which criteria and indicators, in each sector/subsector, enable an adequate methodology for identifying and characterizing critical infrastructures in Portugal? To answer this question, this research analyses the methodology adopted by Portugal, as well as the components of the methodologies for identifying and characterizing critical infrastructure used by reference countries and organizations. Its general objective is to identify areas for improvement in the methodology adopted by the Autoridade Nacional de Proteção Civil (National Civil Protection Authority) and, based on the analysis of the methodologies used by reference organizations and countries, to contribute to the identification and characterization of critical infrastructures in Portugal through adequate criteria and indicators. It concludes that the identification and characterization of national critical infrastructures should be applied in the first phase of the process of elaborating a national programme for the protection of critical infrastructures, while simultaneously presenting a definition of critical infrastructure, through possible groupings into sectors, and the criteria and indicators to adopt.
Abstract:
PhD in Business Sciences (Doutoramento em Ciências Empresariais).
Abstract:
Landscape, people and identity. Landscape is about the interaction of a place or an area with people, which is reflected both in the material interaction of people creating or shaping the landscape and in their mental perception, valuation and symbolic meaning of that landscape (Cosgrove 1998). This mutual and dynamic interaction forms the fundamental principle of the concept of landscape identity. Landscape identity has been described in the scientific literature as a concept to bridge the physical, social and cultural aspects of landscapes. Policy documents related to landscape and heritage (for example the UNESCO World Heritage Convention, the European Landscape Convention and the Faro Convention) also mention identity and landscape as key concepts. In those examples, landscape identity can refer either to the landscape itself - the features that make the landscape unique (thus the landscape character) - or to the social and personal construction. However, there is an interdependency between those two perspectives that needs to be conceptualised. Landscape identity is therefore defined as the multiple ways and dynamic relations between landscape and people (Loupa Ramos et al. 2016).