29 results for Desargues Configuration
Abstract:
Thin film applications have become increasingly important in the search for multifunctional and economically viable technological solutions of the future. Thin film coatings can be used for a multitude of purposes, ranging from a basic enhancement of aesthetic attributes to the addition of complex surface functionality. Anything from electronic or optical properties to increased catalytic or biological activity can be added or enhanced by the deposition of a thin film, with a thickness of only a few atomic layers at best, on an already existing surface. Thin films offer both a means of saving materials and the possibility of improving properties without a critical enlargement of devices. Nanocluster deposition is a promising new method for the growth of structured thin films. Nanoclusters are small aggregates of atoms or molecules, ranging in size from only a few nanometers up to several hundred nanometers in diameter. Due to their large surface-to-volume ratio, and the confinement of atoms and electrons in all three dimensions, nanoclusters exhibit a wide variety of exotic properties that differ notably from those of both single atoms and bulk materials. Nanoclusters are a completely new type of building block for thin film deposition. As preformed entities, clusters provide a new means of tailoring the properties of thin films before their growth, simply by changing the size or composition of the clusters to be deposited. In contrast to contemporary methods of thin film growth, which mainly rely on the deposition of single atoms, cluster deposition also allows for a more precise assembly of thin films, as the configuration of single atoms with respect to each other is already predetermined in clusters.
Nanocluster deposition offers the possibility of coating virtually any material with a nanostructured thin film, and thereby enhancing already existing physical or chemical properties or adding some exciting new feature. A clearer understanding of cluster-surface interactions, and of the growth of thin films by cluster deposition, must, however, be achieved if clusters are to be used successfully in thin film technologies. Using a combination of experimental techniques and molecular dynamics simulations, both the deposition of nanoclusters and the growth and modification of cluster-assembled thin films are studied in this thesis. Emphasis is placed on understanding the interaction between metal clusters and surfaces, and the behaviour of these clusters during deposition and thin film growth. The behaviour of single metal clusters as they impact on clean metal surfaces is analysed in detail, and it is shown that there exists a limit, dependent on cluster size and deposition energy, below which epitaxial alignment occurs. If larger clusters are deposited at low energies, or cluster-surface interactions are weaker, non-epitaxial deposition takes place, resulting in the formation of nanocrystalline structures. The effect of cluster size and deposition energy on the morphology of cluster-assembled thin films is also determined, and it is shown that nanocrystalline cluster-assembled films will be porous. Modification of these thin films, with the purpose of enhancing their mechanical properties and durability without destroying their nanostructure, is presented. Irradiation with heavy ions is introduced as a feasible method for increasing the density, and thereby the mechanical stability, of cluster-assembled thin films without critically degrading their nanocrystalline properties. The results of this thesis demonstrate that nanocluster deposition is a suitable technique for the growth of nanostructured thin films. The interactions between nanoclusters and their supporting surfaces must, however, be carefully considered if a controlled growth of cluster-assembled thin films with precisely tailored properties is to be achieved.
Abstract:
The ever-increasing demand for faster computers in areas ranging from entertainment electronics to computational science is pushing the semiconductor industry towards its limits in decreasing the sizes of electronic devices based on conventional materials. According to the famous law by Gordon E. Moore, a co-founder of the world's largest semiconductor company, Intel, transistor sizes should decrease to the atomic level during the next few decades to maintain the present rate of increase in computational power. As leakage currents become a problem for traditional silicon-based devices already at sizes in the nanometer scale, an approach other than further miniaturization is needed to meet the needs of future electronics. A relatively recently proposed possibility for further progress in electronics is to replace silicon with carbon, another element from the same group in the periodic table. Carbon is an especially interesting material for nanometer-sized devices because it naturally forms different nanostructures, some of which have unique properties. The most widely suggested allotrope of carbon for electronics is a tubular molecule with an atomic structure resembling that of graphite. These carbon nanotubes are popular both among scientists and in industry because of a long list of exciting properties. For example, carbon nanotubes are electronically unique and have an uncommonly high strength-to-mass ratio, which has resulted in a multitude of proposed applications in several fields. In fact, due to remaining difficulties in the large-scale production of nanotube-based electronic devices, fields other than electronics have been faster to develop profitable nanotube applications. In this thesis, the possibility of using low-energy ion irradiation to ease the route towards nanotube applications is studied through atomistic simulations at different levels of theory.
Specifically, molecular dynamics simulations with analytical interaction models are used to follow the irradiation process of nanotubes as different impurity atoms are introduced into these structures, in order to gain control over their electronic character. Ion irradiation is shown to be a very efficient method for replacing carbon atoms with boron or nitrogen impurities in single-walled nanotubes. Furthermore, potassium irradiation of multi-walled and fullerene-filled nanotubes is demonstrated to result in small potassium clusters in the hollow parts of these structures. Molecular dynamics simulations are further used to give an example of using irradiation to improve contacts between a nanotube and a silicon substrate. Methods based on density-functional theory are used to gain insight into the defect structures inevitably created during the irradiation. Finally, a new simulation code utilizing the kinetic Monte Carlo method is introduced to follow the time evolution of irradiation-induced defects on carbon nanotubes over macroscopic time scales. Overall, the molecular dynamics simulations presented in this thesis show that ion irradiation is a promising method for tailoring nanotube properties in a controlled manner. The calculations made with density-functional-theory-based methods indicate that it is energetically favorable for even relatively large defects to transform so as to keep the atomic configuration as close to that of the pristine nanotube as possible. The kinetic Monte Carlo studies reveal that elevated temperatures during processing enhance the self-healing of nanotubes significantly, ensuring low defect concentrations after treatment with energetic ions. Thereby, nanotubes can retain their desired properties even after irradiation.
Throughout the thesis, atomistic simulations combining different levels of theory are demonstrated to be an important tool for determining the optimal conditions for irradiation experiments, because the atomic-scale processes at short time scales are extremely difficult to study by any other means.
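As a rough illustration of the kinetic Monte Carlo idea used for defect evolution, the sketch below implements the standard residence-time algorithm with a single Arrhenius-activated healing event. The attempt frequency, migration barrier, and one-event model are illustrative assumptions, not parameters from the thesis.

```python
import math
import random

KB = 8.617e-5  # Boltzmann constant, eV/K

def kmc_anneal(n_defects, temperature, t_max, nu=1e13, ea=1.0, seed=0):
    """Residence-time kinetic Monte Carlo with one event type: each
    surviving defect heals with Arrhenius rate r = nu * exp(-Ea / kB T).
    nu (attempt frequency, 1/s) and ea (barrier, eV) are illustrative."""
    rng = random.Random(seed)
    rate = nu * math.exp(-ea / (KB * temperature))
    t = 0.0
    while n_defects > 0:
        total_rate = n_defects * rate              # sum of all event rates
        t += -math.log(rng.random()) / total_rate  # stochastic time advance
        if t > t_max:
            break
        n_defects -= 1                             # execute one healing event
    return n_defects

# Elevated temperature leaves far fewer defects in the same time window:
cold = kmc_anneal(100, temperature=300.0, t_max=1.0)
hot = kmc_anneal(100, temperature=900.0, t_max=1.0)
```

With these toy parameters the 300 K run retains essentially all defects while the 900 K run anneals out completely, mirroring the qualitative conclusion about processing at elevated temperatures.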
Abstract:
This study analyses personal relationships, linking the research to sociological theory on the questions of the social bond and the self as social. From the viewpoint of disruptive life events and experiences, such as loss, divorce and illness, it aims at understanding how selves are bound to their significant others, those specific people ‘close or otherwise important’ to them. Who forms the configurations of significant others? How do different bonds respond to disruptions, and how do relational processes unfold? How is the embeddedness of selves manifested in the processes of bonding, on the one hand, and in the relational formation of the self, on the other? The bonds are analysed from an anti-categorical viewpoint based on personal citations of significance, as opposed to given relationship categories such as ‘family’ or ‘friendship’ – the two kinds of relationships that in fact are most frequently significant. The study draws on an analysis of the personal narratives of 37 Finnish women and men (80 interviews in all) and their entire configurations of those specific people whom they cite as ‘close or otherwise important’. The analysis stresses subjective experiences, while also investigating the actualized relational processes and configurations of all personal relationships, with their relationship histories, embedded in micro-level structures. The research is based on four empirical sub-studies of personal relationships and a summary discussing the questions of the self and the social bond. The discussion draws on G. H. Mead, C. Cooley, N. Elias, T. Scheff, G. Simmel and the contributors to ‘relational sociology’. The sub-studies analyse bonds to others from the viewpoints of biographical disruption and re-configuration of significant others, estranged family bonds, peer support, and the formation of the most intimate relationships into exclusive and inclusive configurations.
All analyses examine the dialectics of the social and the personal, asking how different structuring mechanisms, personal experiences and negotiations together contribute to the unfolding of the bonds. The summary elaborates on personal relationships as social bonds embedded in wider webs of interdependent people and social settings laden with cultural expectations. Regarding the question of the relational self, the study proposes that both bonding and individuality are significant. They are seen as interdependent phases of the relationality of the self. Bonding anchors the self to its significant relationships, in which individuality is manifested, for example, in contrasting and differentiating dynamics, but also in active attempts to connect with others. Individuality is not a fixed quality of the self, but a fluid and interdependent phase of the relational self. More specifically, it appears in three forms in the flux of relational processes: as a sense of unique self (via cultivation of subjective experiences), as agency, and as (a search for) relative autonomy. The study includes an epilogue addressing the ambivalence between the social expectation of individuality in society and the bonded reality of selves.
Abstract:
The research focuses on the client plan in health care and social work with families with children. The purpose of the plan is to set objectives for helping the client and to assist in coordinating the ever-increasing multi-professional work. In general, the plan is understood in terms of assignments and as a contract specifying what to do in client cases. From this perspective, the plan is externalized into a written document. Instead of understanding the plan as a tool that stabilizes the objectives of action, documents them and facilitates evaluation, the client plan is conceptualized in this study as a practice. Such a practice mediates client work while itself also being a process of action focused on an object whose gradual emergence and definition is the central question in multi-professional collaboration with a client. The plan is examined empirically in a non-stabilized state, which means that the research methodology is based on the dynamics between stabilization and emerging, non-stabilized entities: the co-creation and formulation of practice and context. The theoretical approach of the research is the micro-analytic approach of activity theory (Engeström R. 1999b). On this basis, the research develops a method of qualitative analysis that follows an emerging object with multiple voices. The research data are composed of videotaped client meetings with three families, interviews with the clients and the workers, and client documents used to follow up on client processes for at least one year. The research questions are as follows: 1) How is the client plan constructed between the client and different professional agents? 2) How are meanings constructed in a client-centred plan? 3) What are the elements of client-employee relationships that support the co-configuration necessitated by changes in the client's everyday life?
The study shows that the setting of objectives was limited by the palette of institutional services, which meant that the clients' interpretations and their ways of giving meaning to the kinds of help required were left out of the plan. Conceptually, the distinctions between client-centred and client-specific ways of working, as well as an action-based working method, are addressed. Central to this action-based approach are construing the everyday life of the client, recognizing different meanings and analysing them together with the client, and focusing attention on developing the prerequisites for the clients' social agency. The research portrays the elements for creating an action-based client plan. Key words: client plan, user perspective, multi-voiced meaning, multi-professional social work with children and families, agency
Abstract:
A key trait of Free and Open Source Software (FOSS) development is its distributed nature. Nevertheless, two project-level operations, the fork and the merge of program code, are among the least well understood events in the lifespan of a FOSS project. Some projects have explicitly adopted these operations as the primary means of concurrent development. In this study, we examine the effect of highly distributed software development, as found in the Linux kernel project, on the collection and modelling of software development data. We find that distributed development calls for sophisticated temporal modelling techniques where several versions of the source code tree can exist at once. Attention must be turned towards the methods of quality assurance and peer review that projects employ to manage these parallel source trees. Our analysis indicates that two new metrics, fork rate and merge rate, could be useful for determining the role of distributed version control systems in FOSS projects. The study presents a preliminary data set consisting of version control and mailing list data.
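The abstract names fork rate and merge rate without defining them; the sketch below shows one plausible operationalization as events per day extracted from a version-control log. The event labels and the per-day normalization are illustrative assumptions, not the study's exact definitions.

```python
from collections import Counter

def branch_metrics(events, period_days):
    """Fork rate and merge rate as events per day over an observation
    window.  The event labels ('fork', 'merge') and the normalization
    are assumptions for illustration only."""
    counts = Counter(kind for kind, _ in events)
    return {
        "fork_rate": counts["fork"] / period_days,
        "merge_rate": counts["merge"] / period_days,
    }

# Toy version-control log: (event kind, commit id)
log = [("fork", "a1"), ("merge", "b2"), ("fork", "c3"),
       ("merge", "d4"), ("merge", "e5")]
rates = branch_metrics(log, period_days=10)
```

A log spanning ten days with two forks and three merges thus yields a fork rate of 0.2 and a merge rate of 0.3 events per day.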
Abstract:
Nucleation is the first step of a phase transition, in which small nuclei of the new phase start appearing in the metastable old phase, such as the appearance of small liquid clusters in a supersaturated vapor. Nucleation is important in various industrial and natural processes, including atmospheric new particle formation: between 20% and 80% of the atmospheric particle concentration is due to nucleation. These atmospheric aerosol particles have a significant effect on both climate and human health. Simulation methods are often applied when studying phenomena that are difficult or even impossible to measure, or when trying to distinguish between the merits of various theoretical approaches. Such simulation methods include, among others, molecular dynamics and Monte Carlo simulations. In this work, molecular dynamics simulations of the homogeneous nucleation of Lennard-Jones argon have been performed; homogeneous means that the nucleation does not occur on a pre-existing surface. The simulations include runs where the starting configuration is a supersaturated vapor and the nucleation event is observed during the simulation (direct simulations), as well as simulations of a cluster in equilibrium with a surrounding vapor (indirect simulations). The latter type is a necessity when the conditions prevent the occurrence of a nucleation event within a reasonable timeframe in the direct simulations. The effect of various temperature control schemes on the nucleation rate (the rate of appearance of clusters that are equally likely to grow to macroscopic sizes and to evaporate) was studied and found to be relatively small. The method used to extract the nucleation rate was also found to be of minor importance. The cluster sizes from the direct and indirect simulations were used in conjunction with the nucleation theorem to calculate formation free energies for the clusters in the indirect simulations.
The results agreed with density functional theory but were higher than values from Monte Carlo simulations. The formation energies were also used to calculate the surface tension of the clusters. Comparison of the cluster sizes in the direct and indirect simulations showed that the direct-simulation clusters have more atoms between the liquid-like core of the cluster and the surrounding vapor. Finally, the performance of various nucleation theories in predicting the simulated nucleation rates was investigated; among other things, the results once again highlighted the inadequacy of the classical nucleation theory commonly employed in nucleation studies.
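For reference, the nucleation-theory quantities mentioned above can be written in their standard textbook forms (generic notation, not taken from the thesis itself):

```latex
% Classical nucleation theory (CNT): formation free energy of an n-molecule cluster
\Delta G(n) = -n\,k_{\mathrm B} T \ln S + \sigma A(n)
% S: supersaturation, \sigma: surface tension, A(n): cluster surface area.
% The critical cluster size n^* maximizes \Delta G, and the nucleation rate is
J = K \exp\!\left(-\frac{\Delta G(n^*)}{k_{\mathrm B} T}\right)
% The (first) nucleation theorem connects simulated or measured rates to n^*:
\left(\frac{\partial \ln J}{\partial \ln S}\right)_{T} \approx n^* + 1
```

It is this last relation that allows cluster sizes from the simulations to be converted into formation free energies, as described in the abstract.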
Abstract:
The productivity of a process is related to how effectively input resources are transformed into value for customers. For the needs of manufacturers of physical products there are widely used productivity concepts and measurement instruments. In service processes, however, the underlying assumptions of these concepts and models do not hold. For example, manufacturing-based productivity models assume that an altered configuration of input resources in the production process does not lead to quality changes in the outputs (the constant-quality assumption). In a service context, however, changes in the production resources and production systems do affect the perceived quality of services. Therefore, using manufacturing-oriented productivity models in service contexts is likely to give managers the wrong directions for action. Research into the productivity of services is still scarce because of the lack of viable models. The purpose of the present article is to analyse the requirements for the development of a productivity concept for service operations. Based on the analysis, a service productivity model is developed. According to this model, service productivity is a function of 1) how effectively input resources to the service (production) process are transformed into outputs in the form of services (internal or cost efficiency), 2) how well the quality of the service process and its outcome is perceived (external or revenue efficiency), and 3) how effectively the capacity of the service process is utilised (capacity efficiency). In addition, directions for developing measurement models for service productivity are discussed.
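The article states only that service productivity is a function of the three efficiencies, without fixing the functional form. The sketch below assumes a simple multiplicative combination purely for illustration of how the three dimensions interact; it is not the article's model.

```python
def service_productivity(cost_eff, revenue_eff, capacity_eff):
    """Hypothetical composition of the three efficiency dimensions:
    internal (cost), external (revenue), and capacity efficiency.
    The multiplicative form is an illustrative assumption."""
    if min(cost_eff, revenue_eff, capacity_eff) < 0:
        raise ValueError("efficiencies must be non-negative")
    return cost_eff * revenue_eff * capacity_eff

# Improving perceived quality (external/revenue efficiency) lifts
# productivity even when cost and capacity efficiency stay constant:
baseline = service_productivity(0.8, 1.0, 0.9)
improved = service_productivity(0.8, 1.2, 0.9)
```

The point of the toy composition is the one the article argues conceptually: a change in perceived quality moves service productivity even when the input-to-output transformation is unchanged.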
Abstract:
Views on industrial service have conceptually progressed from the output of the provider's production process to the result of an interaction process in which the customer is also involved. Although there are attempts to be customer-oriented, especially when the focus is on solutions, an industrial company's offering combining goods and services is inherently seller-oriented. There is, however, a need to go beyond the current literature and company practices. We propose that what is needed is a genuinely customer-based parallel concept to the offering that takes the customer's view, and we put forward a new concept labelled the customer needing. A needing is based on the customer's mental model of their business and strategies, which affects priorities, decisions, and actions. A needing can be modelled as a configuration of three dimensions containing six functions that create realised value for the customer. These dimensions and functions can be used to describe needings, which represent starting points for sellers' creation of successful offerings. When offerings match needings over time, the seller should have the potential to form and sustain successful buyer relationships.
Abstract:
There is an urgent interest in marketing to move away from neo-classical value definitions suggesting that value creation is a process of exchanging goods for money. In the present paper, value creation is conceptualized as an integration of two distinct yet closely coupled processes. First, actors co-create what this paper calls an underlying basis of value. This is done by interactively re-configuring resources. By relating and combining resources, activity sets, and risks across actor boundaries in novel ways, actors create joint productivity gains – a concept very similar to density (Normann, 2001). Second, actors engage in a process of signification and evaluation. Signification implies co-constructing the meaning and worth of the joint productivity gains co-created through interactive resource re-configuration, as well as sharing those gains, through a pricing mechanism, as value to the involved actors. The conceptual framework highlights an all-important dynamic associated with 'value creation' and 'value' – a dynamic that the paper claims has eluded past marketing research. The paper argues that the framework presented here is appropriate for the interactive service perspective, where value and value creation are not objectively given but depend on the power of the involved actors' socially constructed frames to mobilize resources across actor boundaries in ways that 'enhance system well-being' (Vargo et al., 2008). The paper contributes to research on Service Logic, Service-Dominant Logic, and Service Science.
Abstract:
Transposed to media like film, drama, opera, music, and the visual arts, “narrative” is no longer characterized by either temporality or an act of telling, both required by earlier narratological theories. Transposed to other disciplines, “narrative” is often a substitute for “assumption”, “hypothesis”, a disguised ideological stance, a cognitive scheme, and even life itself. The potential for broadening the concept lay dormant in narratology, both in the double use of “narrative” for the medium-free fabula and for the medium-bound sjuzet, and in changing interpretations of “event”. Some advantages of the broad use of “narrative” are an evocation of commonalities among media and disciplines, an invitation to re-think the term within the originating discipline, a constructivist challenge to positivistic and foundational views, an emphasis on a plurality of competing “truths”, and an empowerment of minority voices. Conversely, disadvantages of the broad use are an illusion of sameness whenever the term is used and the obliteration of specificity. In a Wittgensteinian spirit, the essay agrees that concepts of narrative are mutually related by “family resemblance”, but wishes to probe the resemblances further. It thus postulates two necessary features: double temporality and a transmitting (or mediating) agency, and an additional cluster of variable optional characteristics. When the necessary features are not dominant, the configuration may have “narrative elements” but is not “a narrative”.
Abstract:
This thesis is primarily concerned with the enzyme-catalysed synthesis of sulfoxides using reductase and dioxygenase enzymes. Chapter 1 provides an introduction to the topic of redox chemistry, with particular emphasis on the application of reductase and dioxygenase enzymes in organosulfur chemistry. Earlier literature methods for the production of enantiopure sulfoxides are reviewed. A brief discussion of the methods used for the determination of enantiomeric excess and absolute configuration is provided. Chapter 2 contains results obtained using a range of whole-cell bacteria, each using a dimethyl sulfoxide reductase enzyme. The synthesis of a series of racemic sulfoxides and the development of appropriate CSP-HPLC analytical methods are discussed. Kinetic resolutions of a series of sulfoxides have been achieved. Chapter 3 presents results using dioxygenase enzymes as biocatalysts for the asymmetric sulfoxidation of dialkyl sulfoxides, including thioacetal sulfoxides. A new range of monosulfoxides, cis-dihydrodiols and cis-dihydrodiol sulfoxides has been isolated in enantiopure form. Chapter 4 focusses on the application of chiral sulfoxides in synthesis. A new chemoenzymatic route to diol sulfoxide enantiomers and the derived enantiopure phenols and catechols is discussed. The application of chemically synthesised sulfoxide enantiomers in the production of hydroxy sulfoxides is reported. Chapter 5 provides a full experimental section in which the synthesis of sulfides and racemic sulfoxides is included. The methods used in the isolation and characterisation of bioproducts from the biotransformations are discussed, and full experimental details are given.
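The enantiomeric excess and kinetic resolution mentioned above follow standard definitions, reproduced here for reference (generic notation, not taken from the thesis):

```latex
% Enantiomeric excess from the concentrations of the two enantiomers:
ee = \frac{[R] - [S]}{[R] + [S]} \times 100\%
% For a kinetic resolution, the enantioselectivity E follows from the
% conversion c and the remaining-substrate excess ee_s (Chen relation):
E = \frac{\ln\!\left[(1 - c)(1 - ee_{\mathrm s})\right]}
         {\ln\!\left[(1 - c)(1 + ee_{\mathrm s})\right]}
```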
Abstract:
This thesis presents ab initio studies of two kinds of physical systems, quantum dots and bosons, using two program packages, of which the bosonic one has mainly been developed by the author. The implemented models, i.e., configuration interaction (CI) and coupled cluster (CC), take the correlated motion of the particles into account and provide a hierarchy of computational schemes, at the top of which the exact solution, within the limit of the single-particle basis set, is obtained. The theory underlying the models is presented in some detail, in order to provide insight into the approximations made and the circumstances under which they hold. Some of the computational methods are also highlighted. In the final sections the results are summarized. The CI and CC calculations on multiexciton complexes in self-assembled semiconductor quantum dots are presented and compared, along with radiative and non-radiative transition rates. Full CI calculations on quantum rings and double quantum rings are also presented. In the latter case, experimental and theoretical results from the literature are re-examined and an alternative explanation for the reported photoluminescence spectra is found. The boson program is first applied to a fictitious model system consisting of bosonic electrons in a central Coulomb field, for which CI at the singles and doubles level is found to account for almost all of the correlation energy. Finally, the boson program is employed to study Bose-Einstein condensates confined in different anisotropic trap potentials. The effects of the anisotropy on the relative correlation energy are examined, as well as the effect of varying the interaction potential.
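The CI and CC hierarchies referred to above take the standard textbook forms, sketched here for reference (generic notation, not the author's implementation):

```latex
% CI: linear expansion in excitations out of a reference determinant |\Phi_0\rangle
|\Psi_{\mathrm{CI}}\rangle = \left(1 + \hat C_1 + \hat C_2 + \cdots\right)|\Phi_0\rangle
% CC: exponential ansatz with cluster operators
|\Psi_{\mathrm{CC}}\rangle = e^{\hat T}\,|\Phi_0\rangle,
\qquad \hat T = \hat T_1 + \hat T_2 + \cdots
% Truncating the operators gives the hierarchy (CISD, CCSD, ...); keeping all
% excitations recovers the exact (full CI) solution within the basis set.
```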
Abstract:
This dissertation is a descriptive grammar of Ternate Chabacano, a Spanish-lexifier Creole spoken by 3,000 people in the town of Ternate, Philippines. The dissertation offers an analysis of the phonological, morphological, and syntactic system of the language. It includes an overview of the historical background, the current situation of the speech community, and a collection of annotated texts. Ternate Chabacano shares many characteristics with its main adstrate language, Tagalog, as well as with dialectal varieties of Spanish. At present, English also exerts an influence, although it mainly affects the lexicon. The description offered is based on fieldwork conducted in Ternate. Spoken language collected through thematic interviews forms the main type of material analysed. Information regarding the informants and text types is included in the examples. Ternate Chabacano has a five-vowel system and 17 consonant phonemes. The morphology of the language is largely isolating. Clitics are used extensively to express adverbial relations. The verbal system is based on preverbal markers that express the categories of tense, modality and aspect, of which aspect is the main dimension. Complex predicates and verbal chains are used to further distinguish aspect and modality, as well as changes of voice and valency. Intransitive verbs express motion, states, and reflexive actions, even though the majority of verbs can occur in both intransitive and transitive clauses. Ternate Chabacano is a nominative-accusative type language, but the typological configuration of the Philippine languages influences the marking of its constituents; a case in point is the nominal determination system. The basic constituent order in a clause is VSO. Equative and attributive clauses are formed by juxtaposition, while locative clauses feature a copula. Indefinite terms are expressed through existential constructions.
The negation of existential clauses differs from standard negation, but both are intensified in the same way. In spoken discourse, tag questions are common. Pragmatic elements and social formulas largely reflect the corresponding Tagalog expressions. Coordination and subordination typically occur without overt markers, but a variety of markers exists for expressing different relations, especially those made explicit by adverbial clauses. Verbal chains form a continuum from serial verbs to complementation and ultimately to coordination.