79 results for Explicit recasts
Abstract:
Published in: Cosmographia : Claudii Ptolomei viri Alexandrini cosmographie octavus et ultimus liber explicit
Abstract:
Speed, uncertainty and complexity are increasing in the business world all the time. When knowledge and skills quickly become irrelevant, new challenges are set for information technology (IT) education. Meta-learning skills (learning how to learn rapidly) and innovation skills have become more essential than single technologies or other specific issues. The drastic changes in the information and communications technology (ICT) sector have caused a need to reconsider how IT Bachelor education in Universities of Applied Sciences should be organized to cope with the change. The objective of the study was to evaluate how a new approach to IT Bachelor education, the ICT entrepreneurship study path (ICT-ESP), fits IT Bachelor education in a Finnish University of Applied Sciences. This kind of educational arrangement has not been employed elsewhere in the context of IT Bachelor education. The study presents the results of a four-year period during which IT Bachelor education was renewed in a Finnish University of Applied Sciences. The learning environment was organized into an ICT-ESP based on Nonaka's knowledge theory and Kolb's experiential learning. The IT students who studied in the ICT-ESP established a cooperative and learned ICT by running their cooperative at the University of Applied Sciences. The students (called team entrepreneurs) studied by reading theory in books and other sources of explicit information, doing projects for their customers, and reflecting in training sessions on what was learnt by doing and by studying the literature. Action research was used as the research strategy in this study. Empirical data was collected via theme-based interviews, direct observation, and participative observation. The grounded theory method was utilized in the data analysis, and theoretical sampling was used to guide the data collection. The context of the University of Applied Sciences provided a good basis for fostering team entrepreneurship. However, the results showed that the employment of the ICT-ESP did not fit into the IT Bachelor education well enough. The ICT-ESP was cognitively too demanding for the team entrepreneurs because they had two different sets of rules to follow in their studies. The conventional courses consumed a lot of energy that should have been spent on professional development in the ICT-ESP. The range of competencies needed in the ICT-ESP for professional development was greater than that needed for any other way of studying. The team entrepreneurs needed to develop skills in ICT, leadership and self-leadership, team development, and entrepreneurship. The entrepreneurship skills included marketing and sales, brand development, productization, and business administration. Considering the three years the team entrepreneurs spent in the ICT-ESP, the challenges were remarkable. Changes to the organization of IT Bachelor education are also suggested in the study. First, it should be acknowledged that the ICT-ESP produces IT Bachelors with a different set of competencies compared to the conventional way of educating IT Bachelors. Second, the number of courses on general topics in mathematics, physics, and languages for team entrepreneurs studying in the ICT-ESP should be reconsidered, and the conventional course-based teaching of these topics should be reorganized to support the team coaching process of the team entrepreneurs with their practice-oriented projects.
Third, the upcoming team entrepreneurs should be equipped with relevant information about the ICT-ESP and what studying as a team entrepreneur would require in practice. Finally, the upcoming team entrepreneurs should be carefully selected before they start in the ICT-ESP, to make it possible to screen out solo players and those with too romantic a view of being a team entrepreneur. The results of the study provided answers to the original research questions, and the objectives of the study were met. Even though the IT degree programme was terminated during the research process, the amount of qualitative data gathered made it possible to justify the interpretations made.
Abstract:
Preparative liquid chromatography is one of the most selective separation techniques in the fine chemical, pharmaceutical, and food industries. Several process concepts have been developed and applied to improve the performance of classical batch chromatography. The most powerful approaches include various single-column recycling schemes, counter-current and cross-current multi-column setups, and hybrid processes where chromatography is coupled with other unit operations such as crystallization, a chemical reactor, and/or a solvent removal unit. To fully utilize the potential of stand-alone and integrated chromatographic processes, efficient methods are needed for selecting the best process alternative as well as the optimal operating conditions. In this thesis, a unified method is developed for the analysis and design of the following single-column fixed bed processes and the corresponding cross-current schemes: (1) batch chromatography, (2) batch chromatography with an integrated solvent removal unit, (3) mixed-recycle steady state recycling chromatography (SSR), and (4) mixed-recycle steady state recycling chromatography with solvent removal from the fresh feed, the recycle fraction, or the column feed (SSR–SR). The method is based on the equilibrium theory of chromatography, with the assumption of negligible mass transfer resistance and axial dispersion. The design criteria are given in a general, dimensionless form that is formally analogous to the form applied widely in the so-called triangle theory of counter-current multi-column chromatography. Analytical design equations are derived for binary systems that follow the competitive Langmuir adsorption isotherm model. For this purpose, the existing analytic solution of the ideal model of chromatography for binary Langmuir mixtures is completed by deriving the missing explicit equations for the height and location of the pure first-component shock in the case of a small feed pulse. It is thus shown that the entire chromatographic cycle at the column outlet can be expressed in closed form. The developed design method allows predicting the feasible range of operating parameters that lead to the desired product purities. It can be applied to calculate first estimates of optimal operating conditions, to analyse process robustness, and to evaluate different process alternatives at an early stage. The design method is used to analyse the possibility of enhancing the performance of conventional SSR chromatography by integrating it with a solvent removal unit. It is shown that the amount of fresh feed processed during a chromatographic cycle, and thus the productivity of the SSR process, can be improved by removing solvent. The maximum solvent removal capacity depends on the location of the solvent removal unit and on the physical solvent removal constraints, such as solubility, viscosity, and/or osmotic pressure limits. Usually, the most flexible option is to remove solvent from the column feed. The applicability of the equilibrium design to real, non-ideal separation problems is evaluated by means of numerical simulations. Due to the assumption of infinite column efficiency, the developed design method is most applicable to high-performance systems where thermodynamic effects are predominant, while significant deviations are observed under highly non-ideal conditions. The findings based on the equilibrium theory are applied to develop a shortcut approach for the design of chromatographic separation processes under strongly non-ideal conditions with significant dispersive effects.
The method is based on a simple procedure applied to a single conventional chromatogram. The applicability of the approach to the design of batch and counter-current simulated moving bed processes is evaluated with case studies. It is shown that the shortcut approach performs better the higher the column efficiency and the lower the purity constraints are.
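As an aside, the competitive Langmuir isotherm underlying the analytical design equations is simple to evaluate numerically. The following minimal Python sketch (parameter values are hypothetical, not taken from the thesis) shows the model for a binary mixture:

```python
import numpy as np

def competitive_langmuir(c, q_sat, b):
    """Competitive Langmuir isotherm for a multicomponent mixture.

    q_i = q_sat_i * b_i * c_i / (1 + sum_j b_j * c_j)

    c     : fluid-phase concentrations
    q_sat : saturation capacities
    b     : equilibrium constants
    """
    c, q_sat, b = map(np.asarray, (c, q_sat, b))
    denom = 1.0 + np.dot(b, c)
    return q_sat * b * c / denom

# Hypothetical binary example: component 2 adsorbs more strongly,
# so the two components compete for the same adsorption sites.
q = competitive_langmuir(c=[1.0, 1.0], q_sat=[10.0, 10.0], b=[0.05, 0.10])
print(q)  # equilibrium loading of each component
```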
Abstract:
The shift towards a knowledge-based economy has inevitably prompted the evolution of patent exploitation. Nowadays, a patent is more than just a preventive tool for a company to block its competitors from developing rival technologies; it lies at the very heart of the company's strategy for value creation and is therefore strategically exploited for economic profit and competitive advantage. Along with the evolution of patent exploitation, the demand for reliable and systematic patent valuation has also reached an unprecedented level. However, most of the quantitative approaches in use for assessing patents, though they arguably fall into four categories, are all based on conventional discounted cash flow analysis, whose usability and reliability in the context of patent valuation are greatly limited by five practical issues: market illiquidity, poor data availability, discriminatory cash-flow estimations, and its incapability to account for changing risk and for managerial flexibility. This dissertation attempts to overcome these impeding barriers by rationalizing the use of two techniques, namely fuzzy set theory (aimed at the first three issues) and real option analysis (aimed at the last two). It commences with an investigation into the nature of the uncertainties inherent in patent cash flow estimation and claims that two levels of uncertainty must be properly accounted for. Further investigation reveals that both levels of uncertainty fall under the categorization of subjective uncertainty, which differs from objective uncertainty originating from inherent randomness: uncertainties labelled as subjective are highly related to the behavioural aspects of decision making and are usually witnessed whenever human judgement, evaluation or reasoning is crucial to the system under consideration and there is a lack of complete knowledge of its variables. Once their nature has been clarified, the application of fuzzy set theory to modelling patent-related uncertain quantities is readily justified. The application of real option analysis to patent valuation is prompted by the fact that both the patent application process and the subsequent patent exploitation (or commercialization) are subject to a wide range of decisions at multiple successive stages. In other words, both patent applicants and patentees are faced with a large variety of courses of action as to how their patent applications and granted patents can be managed. Since they have the right to run their projects actively, this flexibility has value and thus must be properly accounted for. Accordingly, this dissertation provides an explicit identification of the types of managerial flexibility inherent in patent-related decision making and in patent valuation, and a discussion of how they can be interpreted in terms of real options. Additionally, the use of the proposed techniques in practical applications is demonstrated with three models based on fuzzy real option analysis. In particular, the pay-off method and the extended fuzzy Black–Scholes model are employed to investigate the profitability of a patent application project for a new process for the preparation of a gypsum-fibre composite and to justify the subsequent patent commercialization decision, respectively; a fuzzy binomial model is designed to reveal the economic potential of a patent licensing opportunity.
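To give a flavour of the fuzzy real option analysis employed, the sketch below numerically evaluates a triangular fuzzy NPV in the spirit of the pay-off method. The scenario values are hypothetical, and for simplicity a centre-of-gravity mean stands in for the possibilistic mean used in the published method:

```python
import numpy as np

def triangular(x, low, peak, high):
    """Membership function of a triangular fuzzy NPV (low, peak, high)."""
    return np.where(
        (x >= low) & (x <= peak), (x - low) / (peak - low),
        np.where((x > peak) & (x <= high), (high - x) / (high - peak), 0.0),
    )

def fuzzy_payoff_value(low, peak, high, n=100_001):
    """Real option value in the spirit of the pay-off method:
    (positive area / total area) * mean of the positive part.
    NOTE: uses a centre-of-gravity mean for simplicity; the published
    method uses the possibilistic mean of the positive side."""
    x = np.linspace(low, high, n)
    mu = triangular(x, low, peak, high)
    total = np.trapz(mu, x)
    pos = x > 0
    pos_area = np.trapz(mu[pos], x[pos]) if pos.any() else 0.0
    if pos_area == 0.0:
        return 0.0
    mean_pos = np.trapz(x[pos] * mu[pos], x[pos]) / pos_area
    return (pos_area / total) * mean_pos

# Hypothetical NPV scenarios (MEUR): pessimistic -2, best guess 1, optimistic 5.
print(fuzzy_payoff_value(-2.0, 1.0, 5.0))
```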
Abstract:
Increasing demand for, and shortage of, energy resources and clean water, due to rapid industrial development, population growth, and long-term droughts, have become a worldwide issue. As a result, global warming, long-term droughts, and pollution-related diseases are becoming more and more serious. Traditional technologies, such as precipitation, neutralization, sedimentation, filtration, and waste immobilization, cannot prevent pollution; they only confine the waste chemicals after the pollution has been emitted. Meanwhile, most of these treatments cannot thoroughly degrade the contaminants and may release toxic secondary pollutants into the ecosystem. Heterogeneous photocatalysis, an innovative wastewater treatment technology, attracts much attention because it can generate highly reactive transitory species for the total degradation of organic compounds, water pathogens, and disinfection by-products. Semiconductor photocatalysts have demonstrated their efficiency in degrading a wide range of organics into readily biodegradable compounds, eventually mineralizing them to innocuous carbon dioxide and water. However, the efficiency of photocatalysis is limited, and it is therefore a crucial issue to modify photocatalysts to enhance their photocatalytic activity. In this thesis, two literature reviews are first conducted. A survey of materials for photocatalysis summarizes the properties and applications of the photocatalysts that have been developed in this field, and the strategies for improving photocatalytic activity are explicitly discussed. Furthermore, all the raw materials and chemicals used in this work are listed, and the specific experimental processes and characterization methods are described. The synthesis methods of the different photocatalysts are described step by step. Across these cases, different modification strategies were used to enhance the efficiency of the photocatalysts in degrading organic compounds (Methylene Blue or Phenol). For each case, photocatalytic experiments were carried out to demonstrate the photocatalytic activity. The photocatalytic experiments were designed, and their procedures are explained and illustrated in detail. Moreover, the experimental results are presented and discussed; all the findings are described in detail and discussed case by case. Eventually, the mechanisms behind the improvements in photocatalytic activity are clarified through characterization of the samples and analysis of the results. In conclusion, the photocatalytic activities of the selected semiconductors were successfully enhanced by choosing appropriate strategies for the modification of the photocatalysts.
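The abstract does not specify how photocatalytic activity was quantified; a common convention in degradation experiments of this kind is an apparent pseudo-first-order (Langmuir-Hinshelwood type) fit, sketched below with made-up Methylene Blue data:

```python
import numpy as np

# Hypothetical Methylene Blue concentrations C(t) during irradiation.
t = np.array([0, 10, 20, 30, 40, 60])          # minutes
c = np.array([10.0, 7.9, 6.2, 4.9, 3.8, 2.4])  # mg/L

# Apparent pseudo-first-order model: ln(C0/C) = k_app * t,
# so k_app is the slope of ln(C0/C) versus t.
y = np.log(c[0] / c)
k_app = np.polyfit(t, y, 1)[0]
print(f"apparent rate constant k_app = {k_app:.4f} 1/min")
```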
Abstract:
The context of this study is corporate e-learning, with an explicit focus on how digital learning design can facilitate self-regulated learning (SRL). The field of e-learning is growing rapidly. An increasing number of corporations use digital technology and e-learning for training their work force and customers. E-learning may offer economic benefits, as well as opportunities for interaction and communication that traditional teaching cannot provide. However, the evolving variety of digital learning contexts makes new demands on learners, requiring them to develop strategies for adapting to and coping with novel learning tools. This study derives from the need to learn more about learning experiences in digital contexts in order to design these contexts properly for learning. The research question targets how the design of an e-learning course influences participants' self-regulated learning actions and intentions. SRL involves learners' ability to exercise agency in their learning. Micro-level SRL processes were targeted by exploring behaviour, cognition, and affect/motivation in relation to the design of the digital context. Two iterations of an e-learning course were tested on two groups of participants (N=17). However, the exploration of SRL extends beyond the educational design research perspective of comparing the effects of the changes to the course designs. The study was conducted in a laboratory with each participant individually. Multiple types of data were collected; however, the results presented in this thesis are based on screen observations (including eye tracking) and video-stimulated recall interviews. These data were integrated in order to achieve a broad perspective on SRL. The most essential change evident in the second course iteration was the addition of feedback during practice and the final test. Without feedback on actions, there was an observable difference between those who were instruction-directed and those who were self-directed in manipulating the context and thus persisted whenever faced with problems. In the second course iteration, which included the feedback, this kind of difference was not found. Feedback provided the tipping point for participants to regulate their learning by identifying their knowledge gaps and exploring the learning context in a targeted manner. Furthermore, the course content was consistently seen from a pragmatic perspective, which influenced the participants' choice of actions, showing that real-life relevance is an important need of corporate learners. This also relates to assessment and the consideration of its purpose in relation to participants' work situation. The rigidity of the multiple-choice questions, focusing on the memorisation of details, influenced the participants to adopt a surface approach to learning. It also caused frustration in cases where the participants' epistemic beliefs were incompatible with this kind of assessment style. Triggers of positive and negative emotions could be categorized into four levels: personal factors, instructional design of content, interface design of context, and technical solution. In summary, the key design choices for creating a positive learning experience involve feedback, flexibility, functionality, fun, and freedom. The design of the context impacts regulation of behaviour and cognition, as well as affect and motivation. The learners' awareness of these areas of regulation in relation to learning in a specific context constitutes their ability for design-based epistemic metareflection.
I describe this metareflection as knowing how to manipulate the context behaviourally for maximum learning, being metacognitively aware of one's learning process, and being aware of how emotions can be regulated to maintain volitional control of the learning situation. Attention needs to be paid to how the design of a digital learning context supports learners' metareflective development as digital learners. Every digital context has its own affordances and constraints, which influence the possibilities for micro-level SRL processes. Empowering learners to develop their ability for design-based epistemic metareflection is, therefore, essential for building their digital literacy in relation to these affordances and constraints. It was evident that the implementation of e-learning in the workplace is not unproblematic and requires new ways of thinking about learning and about how we create learning spaces. Digital contexts bring a new culture of learning that demands an attitude change in how we value knowledge, measure it, define who owns it, and decide who creates it. Based on the results, I argue that digital solutions for corporate learning ought to be built as an integrated system that facilitates socio-cultural connectivism within the corporation. The focus needs to shift from designing static e-learning material to managing networks of social meaning negotiation as part of a holistic corporate learning ecology.
Abstract:
Object detection is a fundamental task of computer vision that is used as a core part of a number of industrial and scientific applications, for example in robotics, where objects need to be correctly detected and localized prior to being grasped and manipulated. Existing object detectors vary in (i) the amount of supervision they need for training, (ii) the type of learning method adopted (generative or discriminative), and (iii) the amount of spatial information used in the object model (model-free, using no spatial information in the object model, or model-based, with an explicit spatial model of the object). Although some existing methods report good performance in the detection of certain objects, the results tend to be application-specific, and no universal method has been found that clearly outperforms all others in all areas. This work proposes a novel generative part-based object detector. The generative learning procedure of the developed method allows learning from positive examples only. The detector is based on finding semantically meaningful parts of the object (i.e. a part detector) that can provide information beyond object location, for example pose. The object class model, i.e. the appearance of the object parts and their spatial variation (constellation), is explicitly modelled in a fully probabilistic manner. The appearance is based on bio-inspired complex-valued Gabor features that are transformed into part probabilities by an unsupervised Gaussian Mixture Model (GMM). The proposed novel randomized GMM enables learning from only a few training examples. The probabilistic spatial model of the part configurations is constructed with a mixture of 2D Gaussians. The appearance of the parts of the object is learned in an object canonical space that removes geometric variations from the part appearance model. Robustness to pose variations is achieved by object pose quantization, which is more efficient than the previously used scale and orientation shifts in the Gabor feature space. The performance of the resulting generative object detector is characterized by high recall with low precision, i.e. the generative detector produces a large number of false positive detections. Thus, a discriminative classifier is used to prune the false positive candidate detections produced by the generative detector, improving its precision while keeping recall high. Using only a small number of positive examples, the developed object detector performs comparably to state-of-the-art discriminative methods.
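As an illustration of the kind of spatial part model described, the following sketch fits a standard EM-trained Gaussian mixture to hypothetical 2D part locations; the thesis itself uses a novel randomized GMM, which this stand-in does not reproduce:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical (x, y) locations of one object part across aligned
# training images, in the object's canonical coordinate frame.
rng = np.random.default_rng(0)
locations = rng.normal(loc=[32.0, 20.0], scale=[2.0, 3.0], size=(50, 2))

# Spatial model of the part configuration as a mixture of 2D Gaussians,
# fitted with ordinary EM (a stand-in for the thesis's randomized GMM).
spatial_model = GaussianMixture(n_components=2, covariance_type="full")
spatial_model.fit(locations)

# Score a candidate detection: log-likelihood of a part at (x, y).
print(spatial_model.score_samples(np.array([[31.0, 21.0]])))
```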
Abstract:
Quantum computation and quantum communication are two of the most promising future applications of quantum mechanics. Since the information carriers used in both of them are essentially open quantum systems, it is necessary to understand both quantum information theory and the theory of open quantum systems in order to investigate realistic implementations of such quantum technologies. In this thesis we consider the theory of open quantum systems from a quantum information theory perspective. The thesis is divided into two parts: a review of the literature and original research. In the literature review we present some important definitions and known results of open quantum systems and quantum information theory. We present the definitions of the trace distance, two channel capacities, and the superdense coding capacity, and explain why they can be used to represent the transmission efficiency of a communication channel. We also derive some properties that link completely positive and trace-preserving maps to the trace distance and the channel capacities. With the help of these properties we construct three measures of non-Markovianity and explain why they detect non-Markovianity. In the original research part of the thesis we study non-Markovian dynamics in an experimentally realized quantum optical set-up. For general one-qubit dephasing channels we calculate the explicit forms of the two channel capacities and the superdense coding capacity. For the general two-qubit dephasing channel with uncorrelated local noises we calculate the explicit forms of the quantum capacity and the mutual information of a four-letter encoding. By using the dynamics of the experimental implementation as a set of specific dephasing channels, we also calculate and compare the measures in one- and two-qubit dephasing channels, and study the options of manipulating the environment to achieve revivals and higher transmission rates in the superdense coding protocol under dephasing noise.
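For intuition on the quantities involved, here is a minimal sketch of a one-qubit dephasing channel and the trace distance; the decoherence factors are hypothetical and do not correspond to the experimental set-up of the thesis:

```python
import numpy as np

def dephase(rho, kappa):
    """One-qubit dephasing channel: off-diagonal coherences are
    multiplied by a decoherence factor kappa in [0, 1]."""
    out = rho.copy()
    out[0, 1] *= kappa
    out[1, 0] *= kappa
    return out

def trace_distance(rho, sigma):
    """D(rho, sigma) = 0.5 * ||rho - sigma||_1 (sum of singular values)."""
    return 0.5 * np.linalg.svd(rho - sigma, compute_uv=False).sum()

# |+> and |-> states, which dephasing makes progressively harder to distinguish.
plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
minus = np.array([[0.5, -0.5], [-0.5, 0.5]], dtype=complex)

# A monotonically decaying kappa(t) gives Markovian dynamics; a revival
# in the trace distance would signal non-Markovianity (the BLP measure).
for kappa in (1.0, 0.5, 0.1):
    print(kappa, trace_distance(dephase(plus, kappa), dephase(minus, kappa)))
```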
Abstract:
The advancement of science and technology makes it clear that no single perspective is any longer sufficient to describe the true nature of any phenomenon. That is why interdisciplinary research is gaining more attention over time. An excellent example of this type of research is natural computing, which stands on the borderline between biology and computer science. The contribution of research done in natural computing is twofold: on one hand, it sheds light on how nature works and how it processes information; on the other hand, it provides guidelines on how to design bio-inspired technologies. The first direction in this thesis focuses on a nature-inspired process called gene assembly in ciliates. The second studies reaction systems, a modelling framework whose rationale is built upon the biochemical interactions happening within a cell. The process of gene assembly in ciliates has attracted a lot of attention as a research topic in the past 15 years. Two main modelling frameworks were initially proposed at the end of the 1990s to capture the gene assembly process in ciliates, namely the intermolecular model and the intramolecular model. They were followed by other model proposals such as template-based assembly and DNA rearrangement pathway recombination models. In this thesis we are interested in a variation of the intramolecular model called the simple gene assembly model, which focuses on the simplest possible folds in the assembly process. We propose a new framework called directed overlap-inclusion (DOI) graphs to overcome the limitations that previously introduced models faced in capturing all the combinatorial details of the simple gene assembly process. We investigate a number of combinatorial properties of these graphs, including a necessary property in terms of forbidden induced subgraphs. We also introduce DOI graph-based rewriting rules that capture all the operations of the simple gene assembly model and prove that they are equivalent to the string-based formalization of the model. Reaction systems (RS) are another nature-inspired modelling framework studied in this thesis. The rationale of reaction systems is based on two main regulation mechanisms, facilitation and inhibition, which control the interactions between biochemical reactions. Reaction systems are a complementary modelling framework to traditional quantitative frameworks, focusing on explicit cause-effect relationships between reactions. The explicit formulation of the facilitation and inhibition mechanisms behind reactions, as well as the focus on interactions between reactions (rather than the dynamics of concentrations), makes their applicability potentially wide and useful beyond biological case studies. In this thesis, we construct a reaction system model corresponding to the heat shock response mechanism, based on a novel concept of dominance graph that captures the competition for resources in the ODE model. We also introduce for RS various concepts inspired by biology, e.g., mass conservation, steady state, and periodicity, to enable model checking of reaction-system-based models. We prove that the complexity of the decision problems related to these properties ranges from P to NP-complete and coNP-complete to PSPACE-complete. We further focus on the mass conservation relation in an RS, introduce the conservation dependency graph to capture the relation between the species, and propose an algorithm for listing the conserved sets of a given reaction system.
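The single-step semantics of a reaction system is compact enough to sketch directly. In the toy example below the reactions and species are made up, only loosely echoing the heat shock response case study:

```python
# Minimal sketch of a reaction system step: a reaction (R, I, P) is
# enabled on a state T when all reactants in R are present and no
# inhibitor in I is; the next state is the union of the products of
# all enabled reactions (there is no permanency: unused species vanish).
def step(reactions, state):
    result = set()
    for reactants, inhibitors, products in reactions:
        if reactants <= state and not (inhibitors & state):
            result |= products
    return result

# Hypothetical reactions loosely inspired by the heat shock response.
reactions = [
    ({"hsf"}, {"hsp"}, {"hsf3"}),       # hsf trimerizes unless hsp inhibits it
    ({"hsf3"}, set(), {"hse_bound"}),   # the trimer binds the heat shock element
    ({"hse_bound"}, set(), {"hsp"}),    # bound HSE drives hsp production
]
state = {"hsf"}
for _ in range(4):
    state = step(reactions, state)
    print(sorted(state))
```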
Abstract:
An Arabic cookery book, dated tentatively to the 12th or 13th century by the editors (Öhrnberg & Mroueh) on the basis of the script. Aro suggests (on grounds not made explicit) that the manuscript came to the library from the collections of G. A. Wallin, but Öhrnberg & Mroueh simply describe it as being of unknown provenance.
Abstract:
Strenx® 960 MC is a direct-quenched type of Ultra High Strength Steel (UHSS) with low carbon content. Although this material combines high strength and good ductility, it is highly sensitive to fabrication processes. The presence of a stress concentration due to a structural discontinuity or a notch highlights the role of these fabrication effects on the deformation capacity of the material. For this reason, a series of tensile tests was carried out both on the pure base material (BM) and on material subjected to Heat Input (HI) and Cold Forming (CF). The surface of the material was dressed by a laser beam at a certain speed to study the effect of HI, while CF was applied by bending the specimen to a certain angle prior to the tensile test. The results illustrate the impact of these processes on the deformation capacity of the material, especially when the material has experienced heat input from welding or similar processes. To compare the results with those of numerical simulation, the LS-DYNA commercial explicit finite element package was used. The results show acceptable agreement between the experimental and numerical outcomes.
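Feeding tensile-test data into an explicit FE code such as LS-DYNA typically requires converting engineering stress-strain to true stress-strain, a standard relation valid up to the onset of necking. The abstract does not detail this preprocessing, so the following sketch with illustrative values is only a plausible example:

```python
import numpy as np

# Hypothetical engineering stress-strain points from a tensile test
# of a ~960 MPa-class steel (illustrative values only).
eng_strain = np.array([0.000, 0.005, 0.02, 0.05, 0.08])
eng_stress = np.array([0.0, 980.0, 1000.0, 1020.0, 1030.0])  # MPa

# Standard conversion, valid up to the onset of necking:
#   sigma_true = sigma_eng * (1 + eps_eng)
#   eps_true   = ln(1 + eps_eng)
true_stress = eng_stress * (1.0 + eng_strain)
true_strain = np.log(1.0 + eng_strain)

for et, st in zip(true_strain, true_stress):
    print(f"eps_true = {et:.4f}, sigma_true = {st:.1f} MPa")
```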