781 results for Process-dissociation Framework


Relevance: 30.00%

Publisher:

Abstract:

Purpose – Business models to date have remained the creation of management; however, it is the belief of the authors that designers should be critically approaching, challenging and creating new business models as part of their practice. This belief portrays a new era in which business model constructs become the new design brief of the future and fuel design and innovation to work together at the strategic level of an organisation. Design/methodology/approach – The purpose of this paper is to explore and investigate business model design. The research followed a deductive, structured qualitative content analysis approach utilizing a predetermined categorization matrix. The analysis of forty business cases uncovered commonalities among the key strategic drivers behind these innovative business models. Findings – Five business model typologies were derived from this content analysis, from which quick prototypes of new business models can be created. Research limitations/implications – Implications from this research suggest there is no “one right” model; rather, through experimentation, the generation of many unique and diverse concepts can result in greater possibilities for future innovation and sustained competitive advantage. Originality/value – This paper builds upon the emerging research and exploration into the importance and relevance of dynamic, design-driven approaches to the creation of innovative business models. These models aim to synthesize knowledge gained from real-world examples into a tangible, accessible and provoking framework that provides new prototyping templates to aid the process of business model experimentation.

Relevance: 30.00%

Publisher:

Abstract:

In order to improve and continuously develop the quality of pharmaceutical products, the process analytical technology (PAT) framework has been adopted by the US Food and Drug Administration. One of the aims of PAT is to identify critical process parameters and their effect on the quality of the final product. Real-time analysis of the process data enables better control of the processes to obtain a high-quality product. The main purpose of this work was to monitor crucial pharmaceutical unit operations (from blending to coating) and to examine the effect of processing on solid-state transformations and physical properties. The tools used were near-infrared (NIR) and Raman spectroscopy combined with multivariate data analysis, as well as X-ray powder diffraction (XRPD) and terahertz pulsed imaging (TPI). To detect process-induced transformations in active pharmaceutical ingredients (APIs), samples were taken after blending, granulation, extrusion, spheronisation, and drying. These samples were monitored by XRPD, Raman, and NIR spectroscopy, showing hydrate formation in the case of theophylline and nitrofurantoin. For erythromycin dihydrate, formation of the isomorphic dehydrate was critical; thus, the main focus was on the drying process. NIR spectroscopy was applied in-line during a fluid-bed drying process. Multivariate data analysis (principal component analysis) enabled detection of the dehydrate formation at temperatures above 45°C. Furthermore, a small-scale rotating plate device was tested to provide an insight into film coating. The process was monitored using NIR spectroscopy. A calibration model, using partial least squares regression, was set up and applied to data obtained by in-line NIR measurements of a coating drum process. The predicted coating thickness agreed with the measured coating thickness. For investigating the quality of film coatings, TPI was used to create a 3-D image of a coated tablet. With this technique it was possible to determine coating layer thickness, distribution, reproducibility, and uniformity. In addition, it was possible to localise defects in either the coating or the tablet. It can be concluded from this work that the applied techniques increased the understanding of the physico-chemical properties of drugs and drug products during and after processing. They additionally provided useful information to improve and verify the quality of pharmaceutical dosage forms.
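
The calibration step described above (relating in-line NIR spectra to coating thickness by partial least squares regression) can be sketched roughly as follows. This is a minimal illustration only: the synthetic spectra, the number of latent variables, and all names are assumptions, not the data or settings used in the thesis.

# Hedged sketch: PLS calibration of coating thickness from NIR spectra.
# The spectra here are synthetic; in the study, in-line NIR measurements
# from a coating drum process would take their place.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 120, 200

# Simulated absorbance spectra whose band intensity grows with coating thickness.
thickness_um = rng.uniform(10, 80, n_samples)            # reference values (microns)
baseline = rng.normal(0, 0.02, (n_samples, n_wavelengths))
band = np.exp(-0.5 * ((np.arange(n_wavelengths) - 120) / 15) ** 2)
spectra = baseline + np.outer(thickness_um / 80.0, band)

X_train, X_test, y_train, y_test = train_test_split(
    spectra, thickness_um, test_size=0.25, random_state=0)

# Calibration model: a handful of latent variables is typical for NIR data.
pls = PLSRegression(n_components=4)
pls.fit(X_train, y_train)

predicted = pls.predict(X_test).ravel()
rmsep = np.sqrt(np.mean((predicted - y_test) ** 2))
print(f"RMSEP: {rmsep:.2f} um")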

Relevance: 30.00%

Publisher:

Abstract:

The thesis discusses in what ways concepts and methodology developed in evolutionary biology can be applied to the explanation and research of language change. The parallel nature of the mechanisms of biological evolution and language change is explored, along with the history of the exchange of ideas between these two disciplines. Against this background, computational methods developed in evolutionary biology are considered in terms of their applicability to the study of historical relationships between languages. Different phylogenetic methods are explained in common terminology, avoiding the technical language of statistics. The thesis is on the one hand a synthesis of earlier scientific discussion, and on the other an attempt to map out the problems of earlier approaches and, on that basis, to find new guidelines for the study of language change. The source material consists primarily of literature on the connections between evolutionary biology and language change, along with research articles describing applications of phylogenetic methods to language change. The thesis starts out by describing the initial development of the disciplines of evolutionary biology and historical linguistics, a process which right from the beginning can be seen to have involved an exchange of ideas concerning the mechanisms of language change and biological evolution. The historical discussion lays the foundation for the treatment of the generalised account of selection developed during recent decades. This account aims to create a theoretical framework capable of explaining both biological evolution and cultural change as selection processes acting on self-replicating entities. This thesis focuses on the capacity of the generalised account of selection to describe language change as a process of this kind. In biology, the mechanisms of evolution are seen to form populations of genetically related organisms through time. One of the central questions explored in this thesis is whether selection theory makes it possible to picture languages as forming populations of a similar kind, and what such a perspective can offer to the understanding of language in general. In historical linguistics, the comparative method and other, complementing methods have traditionally been used to study the development of languages from a common ancestral language. Computational, quantitative methods have not become widely used as part of the central methodology of historical linguistics. After the fading of the limited popularity enjoyed by the lexicostatistical method since the 1950s, only in recent years have the computational methods of phylogenetic inference used in evolutionary biology also been applied to the study of early language history. In this thesis the possibilities offered by the traditional methodology of historical linguistics and by the new phylogenetic methods are compared. The methods are approached through the ways in which they have been applied to the Indo-European languages, the language family most thoroughly investigated with both the traditional and the phylogenetic methods. The problems of these applications, along with the optimal form of the linguistic data used in these methods, are explored in the thesis.
The mechanisms of biological evolution are seen in the thesis as parallel, in a limited sense, to the mechanisms of language change, yet sufficiently so that the development of a generalised account of selection is deemed potentially fruitful for understanding language change. These similarities are also seen to support the validity of using phylogenetic methods in the study of language history, although the use of linguistic data and the models of language change employed by these methods are seen to await further development.
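
To give a concrete flavour of the kind of computation involved, the sketch below infers a tree from a toy binary cognate matrix using simple distance-based (UPGMA-style) clustering. The languages, cognate codings, and the choice of method are illustrative assumptions only; they do not reproduce any analysis from the thesis or from the Indo-European studies it discusses.

# Hedged sketch: distance-based tree inference from a toy cognate matrix.
# The languages and cognate codings are invented; real studies use curated
# lexical databases and more sophisticated (e.g. Bayesian) models.
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

languages = ["LangA", "LangB", "LangC", "LangD"]
# Rows: languages; columns: presence (1) or absence (0) of a cognate class.
cognates = np.array([
    [1, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 1, 1],
    [0, 1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1, 0],
])

# Hamming distance = share of cognate classes on which two languages differ.
distances = pdist(cognates, metric="hamming")

# Average-linkage clustering gives a rooted tree comparable to UPGMA.
tree = linkage(distances, method="average")

# Print the merge order instead of plotting a dendrogram.
for step, (i, j, dist, size) in enumerate(tree):
    print(f"step {step}: merge clusters {int(i)} and {int(j)} "
          f"at distance {dist:.3f} (size {int(size)})")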

Relevance: 30.00%

Publisher:

Abstract:

Space in musical semiosis is a study of musical meaning, spatiality and composition. Earlier studies on musical composition have not adequately treated the problems of musical signification. Here, composition is considered an epitomic process of musical signification; hence the core problems of composition theory are core problems of musical semiotics. The study employs a framework of naturalist pragmatism, based on C. S. Peirce’s philosophy. It operates on concepts such as subject, experience, mind and inquiry, and incorporates relevant ideas of Aristotle, Peirce and John Dewey into a synthetic view of esthetic, practic, and semiotic, for the benefit of grasping the musical signification process as a case of semiosis in general. Based on expert accounts, music is depicted as real, communicative, representational, useful, embodied and non-arbitrary. These qualities describe how music and the musical composition process are mental processes. Peirce’s theories are combined with current morphological theories of cognition into a view of mind in which space is central. This requires an analysis of space, and the acceptance of a relativist understanding of spatiality. This approach to signification suggests that mental processes are spatially embodied, by virtue of hard facts of the world, literal representations of objects, as well as primary and complex metaphors, each sharing identities of spatial structures. Consequently, music and the musical composition process are spatially embodied. Composing music appears as a process of constructing metaphors: a praxis of shaping and reshaping features of sound, representable from simple quality dimensions to complex domains. In principle, any conceptual space, metaphorical or literal, may set off and steer elaboration, depending on the practical bearings on the habits of feeling, thinking and action induced in musical communication. In this sense, it is evident that music helps us to reorganize our habits of feeling, thinking, and action. These habits, in turn, constitute our existence. The combination of Peircean and morphological approaches to cognition serves well for understanding musical and general signification. It appears both possible and worthwhile to address a variety of issues central to musicological inquiry in the framework of naturalist pragmatism. The study may also contribute to the development of Peircean semiotics.

Relevance: 30.00%

Publisher:

Abstract:

The purpose of this study is to find a framework for a holistic approach to, and to form a conceptual toolbox for, investigating changes in signs and in their interpretation. Charles S. Peirce's theory of signs, viewed in a communicative perspective, is taken as the basis for the framework. The concern directing the study is the lack of a framework for analysing the signs of visual artefacts from a holistic perspective, as well as the lack of conceptual tools for doing so. To discover the possibility of such a holistic approach to semiosic processes and to form a conceptual toolbox, the following issues are discussed: i) how the many Objects with two aspects involved in Peirce's definition of sign-action promote multiple semiosis arising from the same sign by the same Interpretant, depending on the domination of the Objects; ii) in which way the relation of the individual and society or group can be made more apparent in the construction of the self, since this construction is intertwined with the process of meaning-creation and interpretation; iii) how to account for the fundamental role of emotions in semiosis, and the relation of emotions to the often neglected topic of embodiment; iv) how to take into account the dynamic, mediating and processual nature of sign-action in analysing and understanding changes in signs and in the interpretation of signs. An interdisciplinary approach is chosen for this dissertation. Concepts developed within social psychology, developmental psychology, the neurosciences and semiotics are discussed. The common aspect of these approaches is that they in one way or another concentrate on the mediation provided by signs in explaining human activity and cognition. The holistic approach and conceptual toolbox arrived at are employed in a case study, which consists of an analysis of beer brands, including a comparison of brands from two different cultures. It becomes clear that the different theories and approaches have mutual affinities and complement each other. In addition, the affinities across disciplines lend some credence to the various views. From the combined approach it becomes apparent that the semiosic process allows the emerging semiotic self, intertwined with the Umwelt and including emotions, to be described. Seeing interpretation and meaning-making through semiosis allows for the analysis of groups while taking into account the embodied and emotional component. It is concluded that emotions have a crucial role in all human activity, including so-called reflective thinking, and that emotions and embodiment should be consciously taken into account in analysing signs and their interpretation, and in analysing changes of signs and interpretations at both the social and the individual level. The analysis of the beer labels expresses well the intertwined nature of the relationship between signs, individual consumers and society. Many direct influences of society on label design are found, as well as some indirect attitude changes that become apparent from magazines, company reports, etc. In addition, the analysis brings up the unifying tendency of the visual artefacts of different cultures, but also demonstrates that the visual artefacts are able to hold local signs and meanings, and are sometimes able to represent the local meanings although the signs have changed in the unifying process.

Relevance: 30.00%

Publisher:

Abstract:

The thesis studies the translation process for the laws of Finland as they are translated from Finnish into Swedish. The focus is on revision practices, norms and workplace procedures. The translation process studied covers three institutions and four revisions. In three separate studies, the translation process is analyzed from the perspectives of the translations, the institutions and the actors. The general theoretical framework is Descriptive Translation Studies. For the analysis of revisions made in versions of the Swedish translation of Finnish laws, a model is developed covering five grammatical categories (textual revisions, syntactic revisions, lexical revisions, morphological revisions and content revisions) and four norms (legal adequacy, correct translation, correct language and readability). A separate questionnaire-based study was carried out with translators and revisers at the three institutions. The results show that the number of revisions does not decrease during the translation process, and no division of labour can be seen at the different stages. This is somewhat surprising if the revision process is regarded as one of quality control. Instead, all revisers make revisions at every level of the text. Further, the revisions do not necessarily imply errors in the translations but are often the result of revisers following different norms for legal translation. The informal structure of the institutions and its impact on communication, visibility and workplace practices was studied from the perspective of organization theory. The results show weaknesses in the communicative situation, which affect co-operation both between institutions and between individuals. Individual attitudes towards norms and their relative authority also vary, in the sense that revisers largely prioritize legal adequacy whereas translators give linguistic norms a higher value. Further, the multi-professional teamwork in the institutions studied is based on individuals and institutions carrying out specific tasks with only little contact with others. This shows that established definitions of teamwork, with people co-working in close contact with each other, cannot be applied directly to the workplace procedures in the translation process studied. Three new concepts are introduced: flerstegsrevidering (multi-stage revision), revideringskedja (revision chain) and normsyn (norm attitude). The study seeks to contribute to our knowledge of legal translation, translation processes, institutional translation, revision practices and translation norms for legal translation. Keywords: legal translation, translation of laws, institutional translation, revision, revision practices, norms, teamwork, organizational informal structure, translation process, translation sociology, multilingual.

Relevance: 30.00%

Publisher:

Abstract:

This research contributes a formal framework for evaluating whether existing CMFs can model and reason about various types of normative requirements. The framework can be used to determine the level of coverage of concepts provided by CMFs, to establish mappings between CMF languages and the semantics of the normative concepts, and to evaluate the suitability of a CMF for issuing a certification of compliance. The developed framework is independent of any specific formalism; it has been formally defined and validated through examples of such mappings of CMFs.
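
A minimal sketch of the coverage idea: given a set of normative concepts and a mapping of what each CMF language can express, compute each CMF's coverage and what it is missing. The concept names, the CMF names, and the mappings below are invented placeholders and do not reflect the formal semantics defined in the research.

# Hedged sketch: computing how well a CMF covers a set of normative concepts.
# Concept names and CMF mappings below are invented for illustration only.

NORMATIVE_CONCEPTS = {
    "obligation", "prohibition", "permission",
    "violation", "compensation", "deadline",
}

# Which concepts each (hypothetical) CMF language can model.
CMF_MAPPINGS = {
    "CMF-X": {"obligation", "prohibition", "permission"},
    "CMF-Y": {"obligation", "prohibition", "permission", "violation", "deadline"},
}

def coverage(cmf_concepts: set[str]) -> float:
    """Fraction of the normative concepts a CMF can represent."""
    return len(cmf_concepts & NORMATIVE_CONCEPTS) / len(NORMATIVE_CONCEPTS)

for name, concepts in CMF_MAPPINGS.items():
    missing = NORMATIVE_CONCEPTS - concepts
    print(f"{name}: coverage {coverage(concepts):.0%}, missing {sorted(missing)}")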

Relevance: 30.00%

Publisher:

Abstract:

Packet forwarding is a memory-intensive application requiring multiple accesses through a trie structure. With the requirement to process packets at line rates, high-performance routers need to forward millions of packets every second, with each packet needing up to seven memory accesses. Earlier work shows that a single cache for the nodes of a trie can reduce the number of external memory accesses. It is observed that the locality characteristics of the level-one nodes of a trie are significantly different from those of lower level nodes. Hence, we propose a heterogeneously segmented cache architecture (HSCA) which uses separate caches for level-one and lower level nodes, each with carefully chosen sizes. Besides reducing misses, segmenting the cache allows us to focus on optimizing the more frequently accessed level-one node segment. We find that, due to the nonuniform distribution of nodes among cache sets, the level-one nodes cache is susceptible to high conflict misses. We reduce conflict misses by introducing a novel two-level mapping-based cache placement framework. We also propose an elegant way to fit the modified placement function into the cache organization with minimal increase in access time. Further, we propose an attribute-preserving trace generation methodology which emulates real traces and can generate traces with varying locality. Performance results reveal that our HSCA scheme results in a 32 percent speedup in average memory access time over a unified nodes cache. Also, HSCA outperforms IHARC, a cache for lookup results, with as high as a 10-fold speedup in average memory access time. Two-level mapping further enhances the performance of the base HSCA by up to 13 percent, leading to an overall improvement of up to 40 percent over the unified scheme.
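
The segmentation idea can be sketched as two independent caches, one reserved for level-one trie nodes and one for all lower levels, consulted per node access. The cache sizes, the direct-mapped organization, the address hash, and the toy trace below are illustrative assumptions, not the configuration or placement function evaluated in the paper.

# Hedged sketch: a heterogeneously segmented cache for trie node lookups.
# Level-one nodes and lower-level nodes go to separate direct-mapped caches;
# set counts and the index function are illustrative, not the paper's design.

class DirectMappedCache:
    def __init__(self, num_sets: int):
        self.num_sets = num_sets
        self.tags = [None] * num_sets
        self.hits = 0
        self.misses = 0

    def access(self, node_addr: int) -> bool:
        index = node_addr % self.num_sets
        if self.tags[index] == node_addr:
            self.hits += 1
            return True
        self.tags[index] = node_addr   # fill on miss
        self.misses += 1
        return False

class SegmentedTrieCache:
    """Separate caches for level-one and lower-level trie nodes."""
    def __init__(self, level1_sets: int = 256, lower_sets: int = 1024):
        self.level1 = DirectMappedCache(level1_sets)
        self.lower = DirectMappedCache(lower_sets)

    def access(self, node_addr: int, level: int) -> bool:
        cache = self.level1 if level == 1 else self.lower
        return cache.access(node_addr)

# Toy lookup trace: (node address, trie level) pairs for a few packets.
cache = SegmentedTrieCache()
trace = [(0x10, 1), (0x88, 2), (0x10, 1), (0x90, 3), (0x10, 1), (0x88, 2)]
for addr, level in trace:
    cache.access(addr, level)

print("L1 segment hits/misses:", cache.level1.hits, cache.level1.misses)
print("Lower segment hits/misses:", cache.lower.hits, cache.lower.misses)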

Relevance: 30.00%

Publisher:

Abstract:

Objectives: Falls are the most frequent adverse event reported in hospitals. Patient and staff education delivered by trained educators significantly reduced falls and injurious falls in an older rehabilitation population. The purpose of the study was to explore the educators’ perspectives of delivering the education and to conceptualise how the programme worked to prevent falls among older patients who received the education. Design: A qualitative exploratory study. Methods: Data were gathered from three sources: conducting a focus group and an interview (n=10 educators), written educator notes and reflective researcher field notes based on interactions with the educators during the primary study. The educators delivered the programme on eight rehabilitation wards for periods of between 10 and 40 weeks. They provided older patients with individualised education to engage in falls prevention and provided staff with education to support patient actions. Data were thematically analysed and presented using a conceptual framework. Results: Falls prevention education led to mutual understanding between staff and patients which assisted patients to engage in falls prevention behaviours. Mutual understanding was derived from the following observations: the educators perceived that they could facilitate an effective three-way interaction between staff actions, patient actions and the ward environment which led to behaviour change on the wards. This included engaging with staff and patients, and assisting them to reconcile differing perspectives about falls prevention behaviours. Conclusions: Individualised falls prevention education effectively provides patients who receive it with the capability and motivation to develop and undertake behavioural strategies that reduce their falls, if supported by staff and the ward environment.

Relevance: 30.00%

Publisher:

Abstract:

Reuse of existing, carefully designed and tested software improves the quality of new software systems and reduces their development costs. Object-oriented frameworks provide an established means for software reuse on the levels of both architectural design and concrete implementation. Unfortunately, due to frameworks' complexity, which typically results from their flexibility and overall abstract nature, there are severe problems in using frameworks. Patterns are generally accepted as a convenient way of documenting frameworks and their reuse interfaces. In this thesis it is argued, however, that mere static documentation is not enough to solve the problems related to framework usage. Instead, proper interactive assistance tools are needed in order to enable systematic framework-based software production. This thesis shows how patterns that document a framework's reuse interface can be represented as dependency graphs, and how dynamic lists of programming tasks can be generated from those graphs to assist the process of using a framework to build an application. This approach to framework specialization combines the ideas of framework cookbooks and task-oriented user interfaces. Tasks provide assistance in (1) creating new code that complies with the framework reuse interface specification, (2) assuring the consistency between existing code and the specification, and (3) adjusting existing code to meet the terms of the specification. Besides illustrating how task-orientation can be applied in the context of using frameworks, this thesis describes a systematic methodology for modeling any framework reuse interface in terms of software patterns based on dependency graphs. The methodology shows how framework-specific reuse interface specifications can be derived from a library of existing reusable pattern hierarchies. Since the methodology focuses on reusing patterns, it also alleviates the recognized problem of framework reuse interface specifications becoming complicated and unmanageable for frameworks of realistic size. The ideas and methods proposed in this thesis have been tested by implementing a framework specialization tool called JavaFrames. JavaFrames uses role-based patterns that specify the reuse interface of a framework to guide framework specialization in a task-oriented manner. This thesis reports the results of case studies in which JavaFrames and the hierarchical framework reuse interface modeling methodology were applied to the Struts web application framework and the JHotDraw drawing editor framework.
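
The task-generation idea (deriving an ordered to-do list from a pattern's dependency graph) can be sketched as a topological walk over role dependencies. The roles and dependencies below are invented for illustration and only loosely echo a drawing-editor framework; they are not the role-based patterns or task model used by JavaFrames itself.

# Hedged sketch: generating a programming task list from a dependency graph
# of pattern roles, in the spirit of task-oriented framework specialization.
# The roles and edges are invented for illustration.
from graphlib import TopologicalSorter

# role -> roles it depends on (which must be bound/implemented first)
dependencies = {
    "ApplicationFrame": set(),
    "DocumentClass": {"ApplicationFrame"},
    "ViewClass": {"DocumentClass"},
    "CommandHandler": {"DocumentClass", "ViewClass"},
}

def generate_tasks(deps: dict[str, set[str]]) -> list[str]:
    """Return framework-specialization tasks in a dependency-respecting order."""
    order = TopologicalSorter(deps).static_order()
    return [f"Provide an implementation for role '{role}'" for role in order]

for task in generate_tasks(dependencies):
    print(task)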

Relevance: 30.00%

Publisher:

Abstract:

In visual object detection and recognition, classifiers have two interesting characteristics: accuracy and speed. Accuracy depends on the complexity of the image features and classifier decision surfaces. Speed depends on the hardware and the computational effort required to use the features and decision surfaces. When attempts to increase accuracy lead to increases in complexity and effort, it is necessary to ask how much we are willing to pay for increased accuracy. For example, if increased computational effort implies quickly diminishing returns in accuracy, then those designing inexpensive surveillance applications cannot aim for maximum accuracy at any cost. It becomes necessary to find trade-offs between accuracy and effort. We study efficient classification of images depicting real-world objects and scenes. Classification is efficient when a classifier can be controlled so that the desired trade-off between accuracy and effort (speed) is achieved and unnecessary computations are avoided on a per-input basis. A framework is proposed for understanding and modeling efficient classification of images, in which classification is modeled as a tree-like process. In designing the framework, it is important to recognize what is essential and to avoid structures that are narrow in applicability; earlier frameworks are lacking in this regard. The overall contribution is two-fold. First, the framework is presented, subjected to experiments, and shown to be satisfactory. Second, certain unconventional approaches are experimented with, which allows the separation of the essential from the conventional. To determine whether the framework is satisfactory, three categories of questions are identified: trade-off optimization, classifier tree organization, and rules for delegation and confidence modeling. Questions and problems related to each category are addressed and empirical results are presented. For example, related to trade-off optimization, we address the problem of computational bottlenecks that limit the range of trade-offs, and we ask whether accuracy-versus-effort trade-offs can be controlled after training. Regarding classifier tree organization, we first consider the task of organizing a tree in a problem-specific manner and then ask whether problem-specific organization is necessary.
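
One simple way to realize the accuracy-versus-effort trade-off described above is a cascade in which a cheap classifier handles confidently classified inputs and delegates uncertain ones to a more expensive stage. The sketch below is an illustrative assumption, not the tree organization or delegation rules studied in the thesis: the data are synthetic, the two stages and the confidence threshold are arbitrary choices.

# Hedged sketch: a two-stage classifier cascade with confidence-based
# delegation. The cheap stage handles confident inputs; uncertain inputs
# are delegated to a more expensive stage. Data, models, and threshold
# are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

cheap = LogisticRegression(max_iter=1000).fit(X_train, y_train)
expensive = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

THRESHOLD = 0.8  # delegation rule: below this confidence, pass the input on

proba = cheap.predict_proba(X_test)
confident = proba.max(axis=1) >= THRESHOLD

predictions = np.empty_like(y_test)
predictions[confident] = proba[confident].argmax(axis=1)   # cheap stage decides
predictions[~confident] = expensive.predict(X_test[~confident])  # delegated

accuracy = (predictions == y_test).mean()
print(f"delegated {np.count_nonzero(~confident)} of {len(y_test)} inputs, "
      f"accuracy {accuracy:.3f}")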

Relevance: 30.00%

Publisher:

Abstract:

A Batch Processing Machine (BPM) is one which processes a number of jobs simultaneously as a batch with common beginning and ending times. Also, a BPM, once started, cannot be interrupted (pre-emption is not allowed). This research is motivated by a BPM in the steel casting industry. There are three main stages in any steel casting industry, viz. the pre-casting stage, the casting stage and the post-casting stage. A quick overview of the entire process is shown in Figure 1. There are two BPMs: (1) the melting furnace in the pre-casting stage and (2) the Heat Treatment Furnace (HTF) in the post-casting stage of the steel casting manufacturing process. This study focuses on scheduling the latter, namely the HTF. The heat-treatment operation is one of the most important stages of steel casting industries. It determines the final properties that enable components to perform under demanding service conditions such as large mechanical load, high temperature and anti-corrosive processing. In general, different types of castings have to undergo more than one type of heat-treatment operation, in which case the total heat-treatment processing times differ. For better control, castings are primarily classified into a number of job families based on the alloy type, such as low-alloy castings and high-alloy castings. For technical reasons such as the type of alloy, the temperature level and the expected combination of heat-treatment operations, castings from different families cannot be processed together in the same batch.
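
A first-cut batching rule consistent with the constraints described (jobs from different alloy families cannot share a batch, and a started batch cannot be interrupted) is to group jobs greedily within each family up to the furnace capacity. The job data, capacity, and the rule that a batch's processing time equals the longest job in it are invented for illustration and are not the scheduling model developed in this research.

# Hedged sketch: forming heat-treatment batches when jobs from different
# alloy families cannot be mixed and the furnace has a size limit.
# Job sizes, capacity, and the batch processing-time rule are invented.
from collections import defaultdict

FURNACE_CAPACITY = 100  # hypothetical capacity units

# (job id, family, size, processing time)
jobs = [
    ("J1", "low-alloy", 40, 6), ("J2", "low-alloy", 35, 8),
    ("J3", "low-alloy", 50, 5), ("J4", "high-alloy", 60, 10),
    ("J5", "high-alloy", 30, 9), ("J6", "high-alloy", 45, 7),
]

def form_batches(job_list):
    """Greedy family-wise batching respecting furnace capacity.

    A batch's processing time is the maximum over its jobs, since all
    jobs in a batch start and finish together on the BPM.
    """
    by_family = defaultdict(list)
    for job in job_list:
        by_family[job[1]].append(job)

    batches = []
    for family, members in by_family.items():
        current, load = [], 0
        for job in sorted(members, key=lambda j: j[3], reverse=True):
            if load + job[2] > FURNACE_CAPACITY and current:
                batches.append((family, current, max(j[3] for j in current)))
                current, load = [], 0
            current.append(job)
            load += job[2]
        if current:
            batches.append((family, current, max(j[3] for j in current)))
    return batches

for family, members, ptime in form_batches(jobs):
    ids = [j[0] for j in members]
    print(f"{family}: batch {ids}, processing time {ptime}")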

Relevance: 30.00%

Publisher:

Abstract:

Smart everyday objects could support the wellbeing, independent living and social connectedness of ageing people, but their successful adoption depends upon the objects fitting with those people's skills, values and goals. Many technologies fail in this respect. Our work is aimed at designs that engage older people by building on their individual affective attachment to habituated objects and by leveraging, from a participatory design perspective, the creative process through which people continuously adapt their homes and tools to their own lifestyle. We contribute a novel analytic framework based on an analysis of related research on appropriation and habituated objects. It identifies steps in appropriation from inspection to performance and habituation. We test this framework through the preliminary testing of an augmented habituated object, a messaging kettle. While it has only been used in one home so far, its daily use has provoked many thoughts, scenarios and projections about use by friends, practical, utopian and dystopian alike.

Relevance: 30.00%

Publisher:

Abstract:

Hydrothermal reactions between uranium salts and arsenic pentoxide in the presence of two different amines yielded six new uranium arsenate phases exhibiting open-framework structures. With ethylenediamine (en): [C2N2H9][(UO2)(AsO4)], I; [C2N2H10][(UO2)F(HAsO4)]2·4H2O, II; [C2N2H9][U2F5(HAsO4)2], III; and [C2N2H9][UF2(AsO4)], IV; with diethylenetriamine (DETA): [C4N3H16][U2F3(AsO4)2(HAsO4)], V; and [C4N3H16][U2F6(AsO4)(HAsO4)], VI. The structures were determined using single-crystal studies, which revealed two-dimensional (I, II, V) and three-dimensional (III, IV, VI) structures for the uranium arsenates. The uranium atom in these compounds exhibits considerable variation in coordination (6 to 9) that appears to have some correlation with the synthetic conditions. The water molecules in [C2N2H10][(UO2)F(HAsO4)]2·4H2O, II, could be reversibly removed, and the dehydrated phase, [C2N2H10][(UO2)F(HAsO4)]2, IIa, was also characterized using single-crystal studies. The observation of many mineralogical structures in the present compounds suggests that the hydrothermal method could successfully replicate geothermal conditions. As part of this study, we have observed the autunite, Ca[(UO2)(PO4)]2(H2O)11, metavauxite, [Fe(H2O)6][Al(OH)(H2O)(PO4)]2, linarite, PbCu(SO4)(OH)2, and tancoite, LiNa2H[Al(PO4)2(OH)], structures. The repeated observation of the secondary building unit SBU-4 in many of the uranium arsenate structures suggests that these are viable building units. Optical studies on the uranium arsenate compound [C4N3H16][U2F6(AsO4)(HAsO4)], VI, containing uranium in the +4 oxidation state indicate a blue emission through an upconversion process. The compound also exhibits antiferromagnetic behavior.

Relevance: 30.00%

Publisher:

Abstract:

Governance has been one of the most popular buzzwords in recent political science. As with any term shared by numerous fields of research, as well as everyday language, governance is encumbered by a jungle of definitions and applications. This work elaborates on the concept of network governance. Network governance refers to complex policy-making situations in which a variety of public and private actors collaborate in order to produce and define policy. Governance consists of processes of autonomous, self-organizing networks of organizations exchanging information and deliberating. Network governance is a theoretical concept that corresponds to an empirical phenomenon, and this phenomenon is often framed as a historical development: governance is used to describe changes in the political processes of Western societies since the 1980s. In this work, empirical governance networks are used as an organizing framework, and the concepts of autonomy, self-organization and network structure are developed as tools for the empirical analysis of any complex decision-making process. The work develops this framework and explores the governance networks in the case of environmental policy-making in the City of Helsinki, Finland. The crafting of a local ecological sustainability programme required support and knowledge from all sectors of administration, a number of entrepreneurs and companies, and the inhabitants of Helsinki. The policy process relied explicitly on networking, with public and private actors collaborating to design policy instruments. Communication between individual organizations led to the development of network structures and patterns. This research analyses these patterns and their effects on policy choice by applying the methods of social network analysis. A variety of social network analysis methods are used to uncover different features of the networked process. Links between individual network positions, network subgroup structures and macro-level network patterns are compared to the types of organizations involved and the final policy instruments chosen. By using governance concepts to depict a policy process, the work aims to assess whether they contribute to models of policy-making. The conclusion is that the governance literature sheds light on events that would otherwise go unnoticed, or whose conceptualization would remain atheoretical. The framework of network governance should be in the toolkit of the policy analyst.
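
To give a concrete sense of the network measures mentioned (individual positions and subgroup structures), the sketch below computes betweenness centrality and detects communities in a toy collaboration network. The organizations and ties are invented, and networkx's greedy modularity routine merely stands in for the specific social network analysis methods applied in the study.

# Hedged sketch: basic social network analysis of a toy governance network.
# Organizations and ties are invented; the study's actual data and methods
# are not reproduced here.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

ties = [
    ("EnvDept", "CityPlanning"), ("EnvDept", "EnergyCo"),
    ("EnvDept", "ResidentsAssoc"), ("CityPlanning", "EnergyCo"),
    ("ResidentsAssoc", "LocalNGO"), ("LocalNGO", "EnvDept"),
    ("EnergyCo", "Chamber"), ("Chamber", "CityPlanning"),
]

G = nx.Graph(ties)

# Individual network positions: who sits on the most shortest paths and is
# therefore central to the exchange of information?
centrality = nx.betweenness_centrality(G)
for org, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{org:15s} betweenness {score:.2f}")

# Subgroup structure: clusters of organizations that interact more with
# each other than with the rest of the network.
for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"subgroup {i}: {sorted(community)}")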