907 results for Building Blocks for Creative Practice


Relevance: 100.00%

Abstract:

Intramolecular C–H insertion reactions of α-diazocarbonyl compounds typically proceed with preferential five-membered ring formation. However, the presence of a heteroatom such as nitrogen can activate an adjacent C–H site toward insertion, leading to regiocontrol issues. In the case of α-diazoacetamide derivatives, both β- and γ-lactam products are possible owing to this activating effect. β- and γ-lactams are powerful synthetic building blocks in organic synthesis, as well as common scaffolds in a range of natural and pharmaceutical products; C–H insertion reactions that form such compounds are therefore attractive processes.

Relevance: 100.00%

Abstract:

This paper is the second in a series of studies working towards constructing a realistic, evolving, non-potential coronal model for the solar magnetic carpet. In the present study, the interaction of two magnetic elements is considered. Our objectives are to study magnetic energy build-up, storage and dissipation as a result of emergence, cancellation, and flyby of these magnetic elements. In the future these interactions will be the basic building blocks of more complicated simulations involving hundreds of elements. Each interaction is simulated in the presence of an overlying uniform magnetic field, which lies at various orientations with respect to the evolving magnetic elements. For these three small-scale interactions, the free energy stored in the field at the end of the simulation ranges from 0.2 – 2.1 × 10²⁶ ergs, whilst the total energy dissipated ranges from 1.3 – 6.3 × 10²⁶ ergs. For all cases, a stronger overlying field results in higher energy storage and dissipation. For the cancellation and emergence simulations, motion perpendicular to the overlying field results in the highest values. For the flyby simulations, motion parallel to the overlying field gives the highest values. In all cases, the free energy built up is sufficient to explain small-scale phenomena such as X-ray bright points or nanoflares. In addition, if scaled for the correct number of magnetic elements for the volume considered, the energy continually dissipated provides a significant fraction of the quiet Sun coronal heating budget.

Relevance: 100.00%

Abstract:

With the aim of producing materials with enhanced optical and photocatalytic properties, titanate nanotubes (TNTs) modified by cobalt doping (Co-TNT) and by Na⁺ → Co ion exchange (TNT/Co) were successfully prepared by a hydrothermal method. The influence of the doping level and of the cobalt position in the TNT crystalline structure was studied. Although no perceptible influence of the cobalt ion position on the morphology of the prepared titanate nanotubes was observed, the optical behaviour of the cobalt-modified samples clearly depends on whether the cobalt ions substitute the Ti⁴⁺ ions in the TiO₆ octahedra building blocks of the TNT structure (doped samples) or replace the Na⁺ ions between the TiO₆ interlayers (ion-exchange samples). The catalytic ability of these materials in pollutant photodegradation was investigated. First, hydroxyl radical formation was evaluated using terephthalic acid as a probe. Afterwards, phenol, naphthol yellow S and brilliant green were used as model pollutants. Anticipating real-world situations, photocatalytic experiments were performed using solutions combining these pollutants. The results show that the Co-modified TNT materials (Co-TNT and TNT/Co) are good catalysts, the photocatalytic performance being dependent on the Co/Ti ratio and on the structural metal location. The Co(1%)-TNT doped sample was the best photocatalyst for all the degradation processes studied.

Relevance: 100.00%

Abstract:

The stratigraphic architecture of deep-sea depositional systems is discussed in detail, with examples from the Ischia and Stromboli volcanic islands (southern Tyrrhenian Sea, Italy). The submarine slope and base-of-slope depositional systems represent a major component of marine and lacustrine basin fills, constituting primary targets for hydrocarbon exploration and development. The slope systems are characterized by seven seismic-facies building blocks: turbiditic channel fills; turbidite lobes; sheet turbidites; slide, slump and debris-flow sheets, lobes and tongues; fine-grained turbidite fills and sheets; contourite drifts; and hemipelagic drapes and fills. Sparker profiles offshore Ischia are presented. New seismo-stratigraphic evidence on buried volcanic structures and overlying Quaternary deposits of the eastern Ischia offshore is discussed to highlight the implications for marine geophysics and volcanology. Regional seismic sections across buried volcanic structures and debris-avalanche and debris-flow deposits in the Ischia offshore are presented and discussed. Deep-sea depositional systems around Ischia are well developed in correspondence with the southern Ischia canyon system. The canyon system incises a narrow continental shelf from Punta Imperatore to Punta San Pancrazio and is bounded to the southwest by the relict volcanic edifice of the Ischia Bank. While the eastern boundary of the canyon system is controlled by extensional tectonics, being bounded by a NE–SW-trending (counter-Apenninic) normal fault, its western boundary is controlled by volcanism, owing to the growth of the Ischia volcanic bank. Submarine gravitational instabilities also acted in relation to the canyon system, allowing the identification of large-scale creep at the sea bottom and of hummocky deposits previously interpreted as debris-avalanche deposits. High-resolution seismic data (Subbottom Chirp), coupled with high-resolution multibeam bathymetry collected in the frame of the Stromboli geophysical experiment (aimed at recording active seismic data and tomography of Stromboli Island), are also presented. A new detailed swath bathymetry of Stromboli Island is shown and discussed to reconstruct an up-to-date morpho-bathymetry and marine geology of the area, compared with the volcanological setting of the Aeolian volcanic complex. The Stromboli DEM gives information about the submerged structure of the volcano, particularly about the volcano-tectonic and gravitational processes involving the submarine flanks of the edifice. Several seismic units have been identified around the volcanic edifice and interpreted as the volcanic acoustic basement of the volcano and overlying chaotic slide bodies emplaced during its complex volcano-tectonic evolution. They are related to the eruptive activity of Stromboli, mainly polyphasic, and to regional geological processes involving the geology of the Aeolian Arc.

Relevance: 100.00%

Abstract:

Amphibian yolk platelets are composed of lipoprotein subunits arranged in an ordered crystalline structure. Freeze-etch electron microscopy of isolated Xenopus platelets provides a direct view of the structure of the crystal and aids the interpretation of fracture phenomena in lipoprotein systems. A study has been made both of fracture faces and of faces produced by fracturing and etching following partial dissolution of platelets in electrolyte solutions. In freeze-etch replicas, main body crystals appear to be composed of dimers. Rectangular and semihexagonal patterns are seen in fracture faces. Rectangular patterns are seen also in faces produced by partial dissolution and revealed by fracturing and etching. Dissolution faces with possible semihexagonal patterns are distinct but infrequent. Based on this evidence, a new closest-packing model of platelet structure is proposed using lipovitellin dimers as building blocks, with one molecule of the second major protein component, phosvitin, associated with each monomer of the lipovitellin dimer. © 1972 Academic Press, Inc.

Relevance: 100.00%

Abstract:

We review the reservoirs of methane clathrates that may exist in the different bodies of the Solar System. Methane was formed in the interstellar medium prior to having been embedded in the protosolar nebula gas phase. This molecule was subsequently trapped in clathrates that formed from crystalline water ice during the cooling of the disk and incorporated in this form into the building blocks of comets, icy bodies, and giant planets. Methane clathrates may play an important role in the evolution of planetary atmospheres. On Earth, the production of methane in clathrates is essentially biological, and these compounds are mostly found in permafrost regions or in the sediments of continental shelves. On Mars, methane would more likely derive from hydrothermal reactions with olivine-rich material. If they do exist, martian methane clathrates would be stable only at depth in the cryosphere and sporadically release some methane into the atmosphere via mechanisms that remain to be determined. In the case of Titan, most of its methane probably originates from the protosolar nebula, where it would have been trapped in the clathrates agglomerated by the satellite's building blocks. Methane clathrates are still believed to play an important role in the present state of Titan. Their presence is invoked in the satellite's subsurface as a means of replenishing its atmosphere with methane via outgassing episodes. The internal oceans of Enceladus and Europa also provide appropriate thermodynamic conditions that allow formation of methane clathrates. In turn, these clathrates might influence the composition of these liquid reservoirs. Finally, comets and Kuiper Belt Objects might have formed from the agglomeration of clathrates and pure ices in the nebula. The methane observed in comets would then result from the destabilization of clathrate layers in the nuclei concurrent with their approach to perihelion. Thermodynamic equilibrium calculations show that methane-rich clathrate layers may exist on Pluto as well. Key Words: Methane clathrate-Protosolar nebula-Terrestrial planets-Outer Solar System.

Relevance: 100.00%

Abstract:

A Bayesian optimization algorithm for the nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. Unlike our previous work that used GAs to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. eventually, we will be able to identify and mix building blocks directly. The Bayesian optimization algorithm is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
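
To make the model-building step concrete, the sketch below estimates, from a set of promising rule strings, how likely each scheduling rule is at each nurse position. It is only an illustration: the rule names, problem size and Laplace smoothing constant are assumptions, and the positions are treated as independent, whereas the paper builds a full Bayesian network over the joint distribution.

from collections import Counter

RULES = ["R1", "R2", "R3", "R4"]   # hypothetical scheduling rules
N_NURSES = 5                       # hypothetical number of assignments

def estimate_model(promising, n_positions, rules, smoothing=1.0):
    """For every nurse position, estimate the probability of each rule
    from a set of promising rule strings (Laplace-smoothed frequencies).
    Simplification: positions are kept independent here; the paper's
    Bayesian network also captures dependencies between them."""
    model = []
    for pos in range(n_positions):
        counts = Counter(s[pos] for s in promising)
        total = len(promising) + smoothing * len(rules)
        model.append({r: (counts[r] + smoothing) / total for r in rules})
    return model

# a few hypothetical promising rule strings (one rule per nurse)
promising = [["R1", "R2", "R2", "R4", "R3"],
             ["R1", "R2", "R3", "R4", "R3"],
             ["R2", "R2", "R3", "R4", "R1"]]
model = estimate_model(promising, N_NURSES, RULES)
print(model[0])   # probabilities of R1..R4 for the first nurse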

Relevance: 100.00%

Abstract:

Two ideas taken from Bayesian optimization and classifier systems are presented for personnel scheduling based on choosing a suitable scheduling rule from a set for each person's assignment. Unlike our previous work, which used genetic algorithms and where learning is implicit, the learning in both approaches is explicit, i.e. we are able to identify building blocks directly. To achieve this target, the Bayesian optimization algorithm builds a Bayesian network of the joint probability distribution of the rules used to construct solutions, while the adapted classifier system assigns each rule a strength value that is constantly updated according to its usefulness in the current situation. Computational results from 52 real data instances of nurse scheduling demonstrate the success of both approaches. It is also suggested that the learning mechanism in the proposed approaches might be suitable for other scheduling problems.
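
The abstract does not give the exact strength update, so the following is only a hedged sketch of the classifier-system idea: each rule keeps a strength that drifts toward the reward it earns, and stronger rules are picked more often. The learning rate and rule names are assumptions.

import random

def update_strength(strengths, rule, reward, beta=0.2):
    # Exponential moving average: pull the rule's strength toward the
    # reward it just earned (beta is an assumed learning rate).
    strengths[rule] += beta * (reward - strengths[rule])

def pick_rule(strengths):
    # Roulette-wheel selection: rules with higher strength are chosen
    # more often for the next person's assignment.
    rules, weights = zip(*strengths.items())
    return random.choices(rules, weights=weights, k=1)[0]

strengths = {"R1": 1.0, "R2": 1.0, "R3": 1.0}   # hypothetical rules, equal start
update_strength(strengths, "R2", reward=3.0)     # R2 just produced a good assignment
print(pick_rule(strengths))                      # R2 is now the most likely pick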

Relevance: 100.00%

Abstract:

Schedules can be built in a similar way to a human scheduler by using a set of rules that involve domain knowledge. This paper presents an Estimation of Distribution Algorithm (EDA) for the nurse scheduling problem, which involves choosing a suitable scheduling rule from a set for the assignment of each nurse. Unlike previous work that used Genetic Algorithms (GAs) to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The EDA is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
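
As a complement to the model-building sketch given earlier, the fragment below illustrates the generation step described here: each position of a new rule string is drawn from its (simplified, per-position) conditional probabilities, and old and new strings then compete on fitness. The model values and the stand-in fitness function are invented for the example.

import random

def sample_rule_string(model):
    # Draw one rule per nurse position from the estimated distribution.
    return [random.choices(list(p), weights=list(p.values()), k=1)[0]
            for p in model]

def replace_by_fitness(population, newcomers, fitness, keep):
    # Pool old and new rule strings and retain the fittest `keep` of them
    # (the fitness-based replacement step of the abstract).
    return sorted(population + newcomers, key=fitness, reverse=True)[:keep]

# hypothetical model over three nurse positions and two rules
model = [{"R1": 0.7, "R2": 0.3}, {"R1": 0.4, "R2": 0.6}, {"R1": 0.5, "R2": 0.5}]
fitness = lambda s: s.count("R1")            # stand-in objective for the sketch
population = [sample_rule_string(model) for _ in range(6)]
newcomers = [sample_rule_string(model) for _ in range(6)]
print(replace_by_fitness(population, newcomers, fitness, keep=6))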

Relevance: 100.00%

Abstract:

Advances in FPGA technology and requirements for higher processing capability have driven the emergence of All Programmable Systems-on-Chip, which incorporate a hardened processing system and programmable logic, enabling the development of specialized computer systems for a wide range of practical applications, including data and signal processing, high-performance computing, and embedded systems, among many others. To provide an infrastructure capable of exploiting the benefits of such a reconfigurable system, the main goal of this thesis is to implement an infrastructure composed of hardware, software and network resources that incorporates the services necessary for the operation, management and interfacing of peripherals, which compose the basic building blocks for the execution of applications. The project will be developed using a chip from the Zynq-7000 All Programmable System-on-Chip family.

Relevance: 100.00%

Abstract:

In recent years there has been a clear evolution in the world of telecommunications, ranging from new services that need higher speeds and higher bandwidth to a role for interactions between people and machines, known as the Internet of Things (IoT). Optical communications is the only technology able to follow this growth. Currently, the solution that meets day-to-day needs, such as collaborative work, audio and video communications and file sharing, is based on the Gigabit-capable Passive Optical Network (G-PON) and its recent successor, the Next Generation Passive Optical Network Phase 2 (NG-PON2). This technology is based on wavelength-division multiplexing and, due to its characteristics and performance, is the most advantageous option. A major focus of optical communications is Photonic Integrated Circuits (PICs). These can include various components in a single device, which simplifies the design of the optical system, reduces space and power consumption, and improves reliability. These characteristics make this type of device useful for several applications, which justifies the investment in developing the technology to a very high level of performance and reliability in terms of the building blocks. With the goal of developing the optical networks of future generations, this work presents the design and implementation of a PIC intended to be a universal transceiver for NG-PON2 applications. The same PIC can be used as an Optical Line Terminal (OLT) or an Optical Network Unit (ONU), and in both cases as transmitter and receiver. Initially, a study of Passive Optical Networks (PONs) and their standards is made. A theoretical overview then explores the materials used in the development and production of this PIC and the available foundries, focusing on SMART Photonics and the components used in the development of this chip. For the conceptualization of the project, different architectures are designed and part of the laser cavity is simulated using Aspic™. Through the analysis of the advantages and disadvantages of each one, the best is chosen for the implementation. Moreover, the architecture of the transceiver is simulated block by block using VPItransmissionMaker™ and its operating principle is demonstrated. Finally, the PIC implementation is presented.

Relevance: 100.00%

Abstract:

Processors with large numbers of cores are becoming commonplace. In order to utilise the available resources in such systems, the programming paradigm has to move towards increased parallelism. However, increased parallelism does not necessarily lead to better performance. Parallel programming models have to provide not only flexible ways of defining parallel tasks, but also efficient methods to manage the created tasks. Moreover, in a general-purpose system, applications residing in the system compete for the shared resources. Thread and task scheduling in such a multiprogrammed multithreaded environment is a significant challenge. In this thesis, we introduce a new task-based parallel reduction model, called the Glasgow Parallel Reduction Machine (GPRM). Our main objective is to provide high performance while maintaining ease of programming. GPRM supports native parallelism; it provides a modular way of expressing parallel tasks and the communication patterns between them. Compiling a GPRM program results in an Intermediate Representation (IR) containing useful information about tasks and their dependencies, as well as the initial mapping information. This compile-time information helps reduce the overhead of runtime task scheduling and is key to high performance. Generally speaking, the granularity and the number of tasks are major factors in achieving high performance. These factors are even more important in the case of GPRM, as it is highly dependent on tasks, rather than threads. We use three basic benchmarks to provide a detailed comparison of GPRM with Intel OpenMP, Cilk Plus, and Threading Building Blocks (TBB) on the Intel Xeon Phi, and with GNU OpenMP on the Tilera TILEPro64. GPRM shows superior performance in almost all cases, simply by controlling the number of tasks. GPRM also provides a low-overhead mechanism, called “Global Sharing”, which improves performance in multiprogramming situations. We use OpenMP, the most popular model for shared-memory parallel programming, as the main GPRM competitor for solving three well-known problems on both platforms: LU factorisation of Sparse Matrices, Image Convolution, and Linked List Processing. We focus on proposing solutions that best fit GPRM’s model of execution. GPRM outperforms OpenMP in all cases on the TILEPro64. On the Xeon Phi, our solution for the LU Factorisation results in notable performance improvement for sparse matrices with large numbers of small blocks. We investigate the overhead of GPRM’s task creation and distribution for very short computations using the Image Convolution benchmark. We show that this overhead can be mitigated by combining smaller tasks into larger ones. As a result, GPRM can outperform OpenMP for convolving large 2D matrices on the Xeon Phi. Finally, we demonstrate that our parallel worksharing construct provides an efficient solution for Linked List processing and performs better than OpenMP implementations on the Xeon Phi. The results are very promising, as they verify that our parallel programming framework for manycore processors is flexible and scalable, and can provide high performance without sacrificing productivity.
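
The granularity argument (many tiny tasks each pay a scheduling cost, whereas batching them amortises it) can be illustrated outside GPRM, TBB or OpenMP. The sketch below is only a generic Python analogue; the workload and chunk sizes are arbitrary choices, not figures from the thesis.

# Generic illustration of task granularity: the same work is submitted as
# many tiny tasks (chunksize=1) and as fewer, larger batches (chunksize=1000).
from concurrent.futures import ProcessPoolExecutor
import time

def tiny_task(x):
    # Deliberately small unit of work: on its own it cannot amortise the
    # cost of being scheduled and dispatched as a separate task.
    return x * x

def timed_run(chunksize, n=50_000):
    start = time.perf_counter()
    with ProcessPoolExecutor() as pool:
        # chunksize batches many tiny tasks into one scheduled unit, the same
        # idea as combining small tasks into larger ones to cut overhead.
        total = sum(pool.map(tiny_task, range(n), chunksize=chunksize))
    return time.perf_counter() - start

if __name__ == "__main__":
    for cs in (1, 1000):
        print(f"chunksize={cs:>4}: {timed_run(cs):.2f} s")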

Relevance: 100.00%

Abstract:

This paper proposes a new memetic evolutionary algorithm to achieve explicit learning in rule-based nurse rostering, which involves applying a set of heuristic rules for each nurse's assignment. The main framework of the algorithm is an estimation of distribution algorithm, in which an ant-miner methodology improves the individual solutions produced in each generation. Unlike our previous work (where learning is implicit), the learning in the memetic estimation of distribution algorithm is explicit, i.e. we are able to identify building blocks directly. The overall approach learns by building a probabilistic model, i.e. an estimation of the probability distribution of individual nurse-rule pairs that are used to construct schedules. The local search processor (i.e. the ant-miner) reinforces nurse-rule pairs that receive higher rewards. A challenging real-world nurse rostering problem is used as the test problem. Computational results show that the proposed approach outperforms most existing approaches. It is suggested that the learning methodologies described in this paper may be applied to other scheduling problems where schedules are built systematically according to specific rules.
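
A hedged sketch of the reinforcement idea described in this abstract is given below: the probabilistic model is a table of nurse-rule weights, and pairs that the local search rewards are boosted and the row renormalised. The reinforcement rate, rule names and reward value are assumptions; the actual ant-miner update is not specified in the abstract.

def reinforce(weights, nurse, rule, reward, rho=0.1):
    # Boost the (nurse, rule) pair in proportion to the reward assigned by
    # the local search, then renormalise that nurse's row so it remains a
    # probability distribution. rho is an assumed reinforcement rate.
    weights[nurse][rule] += rho * reward
    row_total = sum(weights[nurse].values())
    for r in weights[nurse]:
        weights[nurse][r] /= row_total

# hypothetical model: two nurses, three heuristic rules, uniform start
weights = {n: {"R1": 1/3, "R2": 1/3, "R3": 1/3} for n in ("nurse_A", "nurse_B")}
reinforce(weights, "nurse_A", "R3", reward=2.0)   # the local search rewarded this pair
print(weights["nurse_A"])                          # R3 now carries the most weight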

Relevance: 100.00%

Abstract:

Previous research has shown that artificial immune systems can be used to produce robust schedules in a manufacturing environment. The main goal is to develop building blocks (antibodies) of partial schedules that can be used to construct backup solutions (antigens) when disturbances occur during production. The building blocks are created based upon underpinning ideas from artificial immune systems and evolved using a genetic algorithm (Phase I). Each partial schedule (antibody) is assigned a fitness value and the best partial schedules are selected to be converted into complete schedules (antigens). We further investigate whether simulated annealing and the great deluge algorithm can improve the results when hybridised with our artificial immune system (Phase II). We use ten fixed solutions as our target and measure how well we cover these specific scenarios.
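
For reference, the great deluge acceptance rule mentioned in Phase II can be sketched generically: a candidate is accepted whenever its quality stays above a steadily rising "water level". The toy objective, neighbourhood move and rain speed below are assumptions for illustration, not details taken from the paper.

import random

def great_deluge(initial, neighbour, quality, rain_speed, steps):
    # Generic great deluge search (maximisation): accept any neighbour whose
    # quality is at least the current water level, which rises every step.
    current = initial
    level = quality(initial)
    for _ in range(steps):
        candidate = neighbour(current)
        if quality(candidate) >= level:
            current = candidate
        level += rain_speed          # the water keeps rising
    return current

# toy demonstration: maximise -(x - 3)^2 by jittering x
best = great_deluge(
    initial=0.0,
    neighbour=lambda x: x + random.uniform(-0.5, 0.5),
    quality=lambda x: -(x - 3.0) ** 2,
    rain_speed=0.0015,
    steps=5_000,
)
print(round(best, 2))   # drifts toward x = 3 as the level rises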