28 results for Distance convex simple graphs


Relevance:

20.00%

Publisher:

Abstract:

The nearly 200-year-old scientific discipline of organic synthetic chemistry has contributed strongly to the welfare of modern societies. One of the flagships of organic synthetic chemistry is the development and production of new pharmaceuticals, and especially of the active substances they contain. It is therefore important to develop new synthetic methods that can be applied to the preparation of pharmaceutically relevant target structures. In this context, however, the ultimate goal is not merely a successful synthesis of the target molecule; it is increasingly important to develop synthetic routes that meet the criteria of sustainable development. One of the most central tools available to an organic chemist in this regard is catalysis, or more specifically the possibility of applying various catalytic reactions in the preparation of complex target structures. The corresponding industrial processes are characterized by high efficiency and minimized waste production, which naturally benefits the chemical industry while considerably reducing negative environmental effects. In this doctoral thesis, new synthetic routes for the production of fine chemicals of pharmaceutical relevance have been developed by combining relatively simple transformations into new reaction sequences. All reaction sequences discussed in this thesis began with a metal-mediated allylation of selected aldehydes or aldimines. The products obtained, containing a carbon-carbon double bond with an adjacent hydroxyl or amino group, were then further modified by applying well-known catalytic reactions. All of the synthesized molecules presented in this thesis are characterized as fine chemicals with high potential for pharmaceutical applications. In addition, a wide variety of catalytic reactions were successfully applied in the synthesis of these molecules, which in turn reinforces the importance of catalytic tools in the organic chemist's toolbox.

Relevance:

20.00%

Publisher:

Abstract:

Lanthanides are the chemical elements from lanthanum to lutetium. They intrinsically exhibit some very exciting photophysical properties, which can be further enhanced by incorporating the lanthanide ion into organic or inorganic sensitizing structures. A very popular approach is to conjugate the lanthanide ion to an organic chromophore structure, forming lanthanide chelates. Another approach, which has quickly gained interest, is to incorporate the lanthanide ions into nanoparticle structures, thus attaining improved specific activity and binding capacity. Lanthanide-based reporters usually exhibit strong luminescence emission, multiple narrow emission lines covering a wide wavelength range, and exceptionally long excited-state lifetimes enabling time-resolved detection. Because of these properties, lanthanide-based reporters have found widespread use in many fields. This study focuses on bioanalytical applications. The aim of the study was to demonstrate the utility of different lanthanide-based reporters in homogeneous Förster resonance energy transfer (FRET)-based bioaffinity assays. Several different model assays were constructed. One was a competitive bioaffinity assay that utilized energy transfer from lanthanide chelate donors to fluorescent protein acceptors. In addition to the conventional FRET phenomenon, a recently discovered non-overlapping FRET (nFRET) phenomenon was demonstrated for the first time for fluorescent proteins. The lack of spectral overlap in the nFRET mechanism adds sensitivity and versatility to energy transfer-based assays. The distance and temperature dependence of these phenomena were further studied in a DNA-hybridization assay. The distance dependence of nFRET deviated from that of FRET, and unlike FRET, nFRET demonstrated clear temperature dependence. Based on these results, a possible excitation mechanism operating in nFRET was proposed.
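The distance dependence mentioned above can be made concrete with the textbook Förster relation; the sketch below uses a hypothetical Förster radius of R0 = 5 nm and is a generic illustration, not a result of this study.

```python
# Textbook FRET efficiency as a function of donor-acceptor distance:
#   E = 1 / (1 + (r / R0)^6)
# R0 (the Förster radius, distance of 50% transfer) is hypothetical here.

def fret_efficiency(r_nm: float, r0_nm: float = 5.0) -> float:
    """Return the FRET efficiency at donor-acceptor distance r_nm."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

# Efficiency falls off steeply around R0 due to the sixth power.
for r in (2.5, 5.0, 10.0):
    print(f"r = {r:4.1f} nm -> E = {fret_efficiency(r):.3f}")
```

At r = R0 the efficiency is exactly 0.5, which is why FRET is often described as a "spectroscopic ruler" in the few-nanometre range.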
In the study, two enzyme activity assays for caspase-3 were also constructed. One was a fluorescence quenching-based enzyme activity assay that utilized novel inorganic particulate reporters called upconverting phosphors (UCPs) as donors. The use of UCPs enabled the construction of a simple, rather inexpensive, and easily automated assay format with a high throughput rate. The other enzyme activity assay took advantage of another novel reporter class, the lanthanide-binding peptides (LBPs). In this assay, energy was transferred from an LBP to a green fluorescent protein (GFP). Using the LBPs, it was possible to avoid the rather laborious, often poorly repeatable, and randomly positioned chemical labeling. In most of the constructed assays, time-resolved detection was used to eliminate the interfering background signal caused by autofluorescence. The improved signal-to-background ratios resulted in increased assay sensitivity, often unobtainable in homogeneous assay formats using conventional organic fluorophores. The anti-Stokes luminescence of the UCPs, however, enabled the elimination of autofluorescence even without time-gating, thus simplifying the instrument setup. Together, the studied reporters and assay formats pave the way for increasingly sensitive, simple, and easily automated bioanalytical applications.

Relevance:

20.00%

Publisher:

Abstract:

This thesis investigates the influence of cultural distance on entrepreneurs' negotiation behaviour. For this purpose, Turku was chosen as the unit of analysis because of the exponential demographic change experienced during the last two decades, which has resulted in a more diversified local environment. The research aim set for this study was to identify to what extent entrepreneurs face cultural distance, how cultural distance influences an entrepreneur's negotiation behaviour, and how it can be addressed in order to turn dissimilarities into opportunities. This study presented the relation and apparent dichotomy of cultural distance and global culture, including the component of diversity. The impact of cultural distance on the entrepreneurial mindset and its consequent effect on negotiation behaviour was also presented. Addressing questions about the way individuals perceive, behave, and interact motivated the use of interviews for this qualitative research study. In the empirical part of this study, it was found that negotiation behaviour differed in terms of how congenial entrepreneurs felt when managing cultural distance, which affected their performance. It was also acknowledged that, with time and effort, some personal traits were enhanced while others were reduced, allowing for more flexibility and adaptation. Furthermore, depending on the level of trust and shared interests, entrepreneurs determined their attitudinal approach, being adaptive or reactive subject to situational aspects. Additionally, it was found that the acquisition of cultural savvy did not necessarily translate into more creativity. This experiential learning capability led to the proposition of new ways of behaving. Likewise, it was proposed that growing cultural intelligence bridges distances, reducing mistrust and misunderstanding. The capability of building more collaborative relationships allows entrepreneurs to see cultural distance as a cultural perspective instead of as a threat. Therefore, it was recommended to focus on proximity rather than distance, in order to better identify and exploit untapped opportunities and to perform better when negotiating under whatever cultural conditions apply.

Relevance:

20.00%

Publisher:

Abstract:

Many quantitative problems from widely differing fields can be described as optimization problems: a measure of the solution's quality is to be optimized while certain constraints on the solution are satisfied. The quality measure is usually called the objective function and may describe costs (for example in production or logistics), potential energy (molecular modelling, protein folding), risk (finance, insurance), or some other relevant quantity. My doctoral thesis specifically discusses nonlinear programming, NLP, in finite dimensions. Problems with simple structure, for example some form of convexity, can be solved efficiently. Unfortunately, not all quantitative relationships can be modelled in a convex way. Non-convex problems can be attacked with heuristic methods, algorithms that search for solutions using deterministic or stochastic rules of thumb. Sometimes this works well, but heuristics can rarely guarantee the quality of the solution, or even that a solution is found at all. For some applications this is unacceptable. Instead, one can apply so-called global optimization. By successively dividing the variable domain into smaller parts and computing ever stronger bounds on the optimal value, a solution within the error tolerance is found. This method is called branch-and-bound. To obtain lower bounds (in minimization), the problem is approximated by simpler problems, for example convex ones, that can be solved efficiently. The thesis studies approaches for approximating differentiable functions with convex underestimators, in particular the so-called alphaBB method. This method adds perturbations of a certain form and guarantees convexity by imposing conditions on the perturbed Hessian matrix. My research has brought forward a natural extension of the perturbations used in alphaBB. New methods for determining the underestimation parameters have been described and compared. The summary part of the thesis discusses global optimization from a broader perspective on optimization and computational algorithms.
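The classical alphaBB underestimator described above can be sketched in one dimension; the function, interval, and alpha below are illustrative choices, and the thesis's extended perturbations are not reproduced here.

```python
import math

# One-dimensional sketch of the classical alphaBB convex underestimator:
#   L(x) = f(x) + alpha * (xL - x) * (xU - x)
# Convexity of L holds when 2*alpha >= -min f''(x) over [xL, xU],
# i.e. the perturbation makes the perturbed second derivative nonnegative.

def f(x):
    return math.sin(x)  # a simple non-convex test function

XL, XU = 0.0, 2.0 * math.pi
ALPHA = 0.5  # f''(x) = -sin(x) >= -1 on the box, so alpha = 1/2 suffices

def underestimator(x):
    return f(x) + ALPHA * (XL - x) * (XU - x)

# Sanity check on a grid: L(x) <= f(x) throughout the box, since the
# perturbation term is nonpositive there, with equality at the end points.
xs = [XL + i * (XU - XL) / 200 for i in range(201)]
assert all(underestimator(x) <= f(x) + 1e-12 for x in xs)
print(min(underestimator(x) for x in xs))  # a lower bound on min f over the box
```

In branch-and-bound, minimizing this convex underestimator over each subdomain supplies the lower bounds, which tighten as the boxes shrink.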

Relevance:

20.00%

Publisher:

Abstract:

A subshift is a set of infinite one- or two-way sequences over a fixed finite set, defined by a set of forbidden patterns. In this thesis, we study subshifts in the topological setting, where the natural morphisms between them are ones defined by a (spatially uniform) local rule. Endomorphisms of subshifts are called cellular automata, and we call the set of cellular automata on a subshift its endomorphism monoid. It is known that the set of all sequences (the full shift) allows cellular automata with complex dynamical and computational properties. We are interested in subshifts that do not support such cellular automata. In particular, we study countable subshifts, minimal subshifts and subshifts with additional universal algebraic structure that cellular automata need to respect, and investigate certain criteria of 'simplicity' of the endomorphism monoid, for each of them. In the case of countable subshifts, we concentrate on countable sofic shifts, that is, countable subshifts defined by a finite state automaton. We develop some general tools for studying cellular automata on such subshifts, and show that nilpotency and periodicity of cellular automata are decidable properties, and positive expansivity is impossible. Nevertheless, we also prove various undecidability results, by simulating counter machines with cellular automata. We prove that minimal subshifts generated by primitive Pisot substitutions only support virtually cyclic automorphism groups, and give an example of a Toeplitz subshift whose automorphism group is not finitely generated. In the algebraic setting, we study the centralizers of CA, and group and lattice homomorphic CA. In particular, we obtain results about centralizers of symbol permutations and bipermutive CA, and their connections with group structures.
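The idea of a cellular automaton as a spatially uniform local rule can be illustrated with a minimal example; the XOR rule on a finite cyclic binary configuration below is a generic stand-in for a CA on the full shift, not an automaton studied in the thesis.

```python
# A cellular automaton applies the same local rule at every position.
# Here the rule is x'[i] = (x[i] + x[i+1]) mod 2, applied to a finite
# cyclic configuration over the binary alphabet {0, 1}.

def xor_ca(config):
    """Apply one step of the XOR cellular automaton (cyclic boundary)."""
    n = len(config)
    return [config[i] ^ config[(i + 1) % n] for i in range(n)]

# A single 1 spreads according to Pascal's triangle modulo 2.
x = [0, 0, 0, 1, 0, 0, 0, 0]
for _ in range(3):
    x = xor_ca(x)
print(x)  # [1, 1, 1, 1, 0, 0, 0, 0]
```

The rule commutes with the shift map and is determined by a finite neighbourhood, which is exactly the Curtis-Hedlund-Lyndon characterization of cellular automata on shift spaces.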

Relevance:

20.00%

Publisher:

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit, and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is very natural within this field; digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used.
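The firing discipline described above can be sketched as follows; the Node class and the adder graph are illustrative inventions for this sketch, not part of RVC-CAL or the thesis's tooling.

```python
from collections import deque

# Minimal dataflow sketch: nodes communicate only through FIFO queues
# (the graph edges) and may fire, independently of other nodes, as soon
# as enough input tokens are available.

class Node:
    def __init__(self, func, inputs, output):
        self.func = func        # the calculation this node performs
        self.inputs = inputs    # list of input queues (edges)
        self.output = output    # output queue (edge)

    def can_fire(self):
        # Firing rule: one token required on every input queue.
        return all(q for q in self.inputs)

    def fire(self):
        # Consume one token per input, produce one token on the output.
        args = [q.popleft() for q in self.inputs]
        self.output.append(self.func(*args))

# Graph: two source streams feed an adder node; edges are queues.
a, b, out = deque([1, 2, 3]), deque([10, 20, 30]), deque()
adder = Node(lambda x, y: x + y, [a, b], out)
while adder.can_fire():
    adder.fire()
print(list(out))  # [11, 22, 33]
```

Because the only coupling between nodes is the queue contents, a scheduler is free to run ready nodes in any order, or in parallel, without changing the result.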
The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model for model checking. The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications.
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.

Relevance:

20.00%

Publisher:

Abstract:

This thesis presents a framework for the segmentation of clustered, overlapping convex objects. The proposed approach is based on a three-step framework addressing the tasks of seed point extraction, contour evidence extraction, and contour estimation. The state-of-the-art techniques for each step were studied and evaluated using synthetic and real microscopic image data. Based on the obtained evaluation results, a method combining the best performer in each step was presented. In the proposed method, the Fast Radial Symmetry transform, an edge-to-marker association algorithm, and ellipse fitting are employed for seed point extraction, contour evidence extraction, and contour estimation, respectively. Using synthetic and real image data, the proposed method was evaluated and compared with two competing methods; the results showed a promising improvement over the competing methods, with high segmentation and size distribution estimation accuracy.
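The contour-estimation step fits a conic to the recovered contour evidence. As a self-contained, simplified stand-in for the ellipse fitting used in the method, the sketch below performs an algebraic least-squares circle fit (the Kåsa method), solving the 3x3 normal equations with Cramer's rule; the sample points are invented for illustration.

```python
# Algebraic least-squares circle fit (Kåsa method): find A, B, C
# minimizing the residual of  x^2 + y^2 = A*x + B*y + C,
# giving centre (A/2, B/2) and radius sqrt(C + A^2/4 + B^2/4).

def fit_circle(points):
    sx = sy = sxx = syy = sxy = sz = szx = szy = 0.0
    n = float(len(points))
    for x, y in points:
        z = x * x + y * y
        sx += x; sy += y; sz += z
        sxx += x * x; syy += y * y; sxy += x * y
        szx += z * x; szy += z * y
    # Normal equations: M @ [A, B, C]^T = rhs
    M = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [szx, szy, sz]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(M)
    sol = []
    for j in range(3):  # Cramer's rule: replace column j by the rhs
        Mj = [row[:] for row in M]
        for i in range(3):
            Mj[i][j] = rhs[i]
        sol.append(det3(Mj) / d)
    A, B, C = sol
    cx, cy = A / 2.0, B / 2.0
    return cx, cy, (C + cx * cx + cy * cy) ** 0.5

# Four exact points on a circle with centre (2, 1) and radius 3:
print(fit_circle([(5, 1), (2, 4), (-1, 1), (2, -2)]))  # (2.0, 1.0, 3.0)
```

A full ellipse fit (five conic parameters instead of three) follows the same least-squares pattern with a larger normal system; the circle case keeps the sketch short.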

Relevance:

20.00%

Publisher:

Abstract:

Torrefaction is the partial pyrolysis of wood, characterised by the thermal degradation of predominantly hemicellulose under an inert atmosphere. Torrefaction can be likened to coffee roasting, but with wood in place of beans. This relatively new process concept makes wood more like coal. Torrefaction has attracted interest because it potentially enables higher rates of co-firing in existing pulverised-coal power plants and hence greater net CO2 emission reductions. Academic and entrepreneurial interest in torrefaction has skyrocketed in the last decade. Research output has covered many aspects of torrefaction, from detailed chemical changes in the feedstock to globally optimised production and supply scenarios supporting EU emission-cutting directives. Despite its seemingly simple concept, however, torrefaction has retained a somewhat mysterious standing. Why hasn't torrefied pellet production become fully commercialised? The question is one of feasibility, and this thesis addresses it. Here, the feasibility of torrefaction in co-firing applications is approached from three directions. Firstly, the natural limitations imposed by the structure of wood are assessed. Secondly, the environmental impact of the production and use of torrefied fuel is evaluated, and thirdly, economic feasibility is assessed based on the state of the art of pellet making. The conclusions reached in these domains are as follows. Modification of wood's chemical structure is limited by its naturally existing constituents. Consequently, the key properties of wood with regard to its potential as a co-firing fuel have a finite range. The most ideal benefits gained from wood torrefaction cannot all be realised simultaneously in a single process or product.
Although torrefaction at elevated pressure may enhance some properties of torrefied wood, high torrefaction energy yields are achieved at the expense of other key properties such as heating value, grindability, equilibrium moisture content, and the ability to pelletise the torrefied wood. Moreover, pelletisation of even moderately torrefied fuels is challenging, and achieving a standard level of pellet durability, as required by international standards, is not trivial. Despite a reduced equilibrium moisture content, brief exposure of torrefied pellets to water from rainfall or immersion results in a high level of moisture retention. Based on the above findings, torrefied pellets are an optimised product. Assessment of the energy and CO2-equivalent emission balance indicates that there is no environmental barrier to the production and use of torrefied pellets in co-firing. A long product transport distance, however, is necessary for the emission benefits to exceed those of conventional pellets. Substantial CO2 emission reductions appear possible with this fuel if laboratory milling results carry over to industrial scales for direct co-firing. Based on demonstrated state-of-the-art pellet properties, however, the economic feasibility of torrefied pellet production falls short of that of conventional pellets, primarily because of the larger capital investment required for production. If the capital investment for torrefied pellet production can be reduced significantly, or if the pellet-making issues can be resolved, the two production processes could be economically comparable. Even in this scenario, however, transatlantic shipping distances and a dry fuel are likely necessary for production to be viable. Based on the pellet properties demonstrated to date, environmental aspects, and production economics, it is concluded that torrefied pellets do not warrant investment at this time. However, from the presented results, the course of future research in this field is clear.
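The trade-off between mass loss and heating value noted above is commonly summarized by the energy yield; the formula is standard in the torrefaction literature, and the numbers below are hypothetical, not taken from this thesis.

```python
# Screening calculation for torrefaction: the energy yield combines the
# mass yield with the gain in higher heating value (HHV),
#   energy_yield = mass_yield * (HHV_torrefied / HHV_raw)
# All quantities must be on the same (e.g. dry) basis.

def energy_yield(mass_yield: float, hhv_raw: float, hhv_torrefied: float) -> float:
    """Fraction of the feedstock's chemical energy retained in the product."""
    return mass_yield * hhv_torrefied / hhv_raw

# Illustrative values only: 80% mass yield, HHV raised from 19 to 21 MJ/kg.
ey = energy_yield(0.80, 19.0, 21.0)
print(f"energy yield = {ey:.1%}")
```

The calculation makes the text's point visible: severe torrefaction raises the heating value but cuts the mass yield, so the two benefits cannot be maximized simultaneously.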

Relevance:

20.00%

Publisher:

Abstract:

The light transmittance of the composites used in dentistry varies, and LED curing lights likewise differ in their light output and design. It is well known that the light intensity per unit area delivered by a curing light decreases as the distance to the target increases. It is not known, however, exactly how a material placed between the curing-light tip and the target to be cured affects the light intensity at different distances. The purpose of this study was to determine how a pre-polymerized material placed between the curing-light tip and the target affects the light intensity at different distances. The study was carried out with two different curing lights. To demonstrate the effect of distance on the curing power, the distance between the light tip and the sensor was varied over 0, 2, 4, 6, 8, and 10 mm. The light outputs were recorded with a MARC resin calibrator. The composite discs placed between the sensor and the curing-light tip were pre-cured, 1 mm thick, and made of four resins with different filler contents. The light output was recorded at each distance with a composite disc on top of the sensor. In parallel, the effect of distance on the light output was also measured without pre-cured material between the light tip and the sensor. For the comparison, an intensity ratio between the values measured with and without the composite was calculated at each distance. Increasing the distance between the curing-light tip and the sensor (i.e., the target to be cured) reduced the light output, as expected. Placing a composite disc between the sensor and the tip reduced the output even further, also as expected. When the intensity ratio (output with composite : output without composite) was examined, however, it was found that at 4-6 mm the ratio was higher than at 0, 2, 8, and 10 mm.
The conclusion was that the highest possible curing power is achieved by placing the curing tip as close to the target as possible. If a solid piece of composite lies between the target and the curing-light tip, the highest possible curing power at the target is still achieved by placing the tip directly against the composite. If the distance from the composite surface was increased, however, the curing power did not decrease as rapidly as expected. This may be related to the diameter of the effective light beam being large compared with the diameters of the composite disc and the sensor. It has also been suggested that the filler particles of the resin composite may focus the transmitted light onto the sensor. Whether this phenomenon holds true requires further research.
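The intensity-ratio comparison used in the study can be sketched as follows; all intensity values below are hypothetical and serve only to illustrate the calculation, not to reproduce the measured data.

```python
# Intensity ratio at each tip-to-sensor distance d:
#   ratio(d) = I_with_composite(d) / I_without_composite(d)

def intensity_ratio(with_disc, without_disc):
    """Map distance (mm) -> ratio of intensities with/without the disc."""
    return {d: with_disc[d] / without_disc[d] for d in without_disc}

# Hypothetical readings in mW/cm^2 at 0-10 mm (MARC-style measurement):
without = {0: 1000, 2: 900, 4: 760, 6: 620, 8: 500, 10: 400}
with_disc = {0: 550, 2: 500, 4: 460, 6: 380, 8: 270, 10: 210}

for d, r in intensity_ratio(with_disc, without).items():
    print(f"{d:2d} mm: ratio = {r:.2f}")
```

With values shaped like these, the ratio peaks at intermediate distances even though both absolute intensities fall monotonically, which is the kind of pattern the study reports at 4-6 mm.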