844 results for building blocks of effective teams
Abstract:
Intramolecular C–H insertion reactions of α-diazocarbonyl compounds typically proceed with a preference for five-membered ring formation. However, a heteroatom such as nitrogen can activate an adjacent C–H site toward insertion, leading to issues of regiocontrol. In the case of α-diazoacetamide derivatives, both β- and γ-lactam products are possible owing to this activating effect. β- and γ-lactams are powerful synthetic building blocks in organic synthesis, as well as common scaffolds in a range of natural products and pharmaceuticals, and C–H insertion reactions that form such compounds are therefore attractive processes.
Abstract:
The stratigraphic architecture of deep-sea depositional systems has been discussed in detail; examples from the Ischia and Stromboli volcanic islands (Southern Tyrrhenian Sea, Italy) are shown and discussed here. Submarine slope and base-of-slope depositional systems represent a major component of marine and lacustrine basin fills and constitute primary targets for hydrocarbon exploration and development. The slope systems are characterized by seven seismic-facies building blocks: turbiditic channel fills; turbidite lobes; sheet turbidites; slide, slump, and debris-flow sheets, lobes, and tongues; fine-grained turbidite fills and sheets; contourite drifts; and hemipelagic drapes and fills. Sparker profiles offshore Ischia are presented. New seismo-stratigraphic evidence on buried volcanic structures and the overlying Quaternary deposits of the eastern Ischia offshore is discussed to highlight its implications for marine geophysics and volcanology. Regional seismic sections across buried volcanic structures and debris-avalanche and debris-flow deposits in the Ischia offshore are presented and discussed. Deep-sea depositional systems around Ischia are well developed in correspondence with the Southern Ischia canyon system. The canyon system incises a narrow continental shelf from Punta Imperatore to Punta San Pancrazio and is bounded to the southwest by the relict volcanic edifice of the Ischia bank. While the eastern boundary of the canyon system is controlled by extensional tectonics, being limited by a NE-SW-trending (counter-Apenninic) normal fault, its western boundary is controlled by volcanism, namely the growth of the Ischia volcanic bank. Submarine gravitational instabilities also acted in relation to the canyon system, revealing large-scale creep at the sea bottom and hummocky deposits previously interpreted as debris-avalanche deposits.
High-resolution seismic data (Subbottom Chirp), coupled with high-resolution multibeam bathymetry collected within the framework of the Stromboli geophysical experiment, which aimed at recording active seismic data and tomography of Stromboli Island, are presented here. A new detailed swath bathymetry of Stromboli Island is shown and discussed in order to reconstruct an up-to-date morpho-bathymetry and marine geology of the area, compared with the volcanological setting of the Aeolian volcanic complex. The Stromboli DEM gives information about the submerged structure of the volcano, particularly about the volcano-tectonic and gravitational processes involving the submarine flanks of the edifice. Several seismic units have been identified around the volcanic edifice and interpreted as the volcanic acoustic basement of the volcano and overlying chaotic slide bodies emplaced during its complex volcano-tectonic evolution. They are related to the eruptive activity of Stromboli, which is mainly polyphasic, and to regional geological processes involving the Aeolian Arc.
Abstract:
Two ideas taken from Bayesian optimization and classifier systems are presented for personnel scheduling based on choosing a suitable scheduling rule from a set for each person's assignment. Unlike our previous work using genetic algorithms, whose learning is implicit, the learning in both approaches is explicit, i.e. we are able to identify building blocks directly. To achieve this, the Bayesian optimization algorithm builds a Bayesian network of the joint probability distribution of the rules used to construct solutions, while the adapted classifier system assigns each rule a strength value that is constantly updated according to its usefulness in the current situation. Computational results from 52 real data instances of nurse scheduling demonstrate the success of both approaches. It is also suggested that the learning mechanism in the proposed approaches might be suitable for other scheduling problems.
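The strength-update idea described above can be sketched in a few lines. The rule names, reward scheme, and learning rate below are illustrative assumptions, not the paper's actual formulation:

```python
# Hypothetical sketch of a classifier-system-style strength update for
# scheduling rules. Rule names, reward, and learning rate are invented
# for illustration only.

def update_strengths(strengths, rule_used, reward, beta=0.1):
    """Move the used rule's strength a fraction beta toward the reward."""
    strengths = dict(strengths)  # leave the caller's dict untouched
    strengths[rule_used] += beta * (reward - strengths[rule_used])
    return strengths

def choose_rule(strengths):
    """Greedy selection: the rule with the highest current strength."""
    return max(strengths, key=strengths.get)

strengths = {"cheapest_first": 0.5, "cover_highest_need": 0.5, "random_fill": 0.5}
strengths = update_strengths(strengths, "cover_highest_need", reward=1.0)
```

After a rule contributes to a good assignment it is rewarded, so subsequent greedy choices favour it; a penalty (negative reward) would push its strength down in the same way.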
Abstract:
Schedules can be built in a similar way to a human scheduler by using a set of rules that involve domain knowledge. This paper presents an Estimation of Distribution Algorithm (EDA) for the nurse scheduling problem, which involves choosing a suitable scheduling rule from a set for the assignment of each nurse. Unlike previous work that used Genetic Algorithms (GAs) to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The EDA is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
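The generate/select/re-estimate loop described above can be sketched as follows. For brevity this sketch uses independent (univariate) marginals per nurse rather than the full Bayesian network of the paper, and the fitness function is a stand-in that simply prefers one rule; both are assumptions for illustration only:

```python
import random

def sample_string(probs):
    """Draw one rule index per nurse from its current marginal distribution."""
    return [random.choices(range(len(p)), weights=p)[0] for p in probs]

def estimate_probs(promising, n_rules):
    """Re-estimate the per-nurse marginals from the promising rule strings."""
    n = len(promising)
    return [[sum(s[i] == r for s in promising) / n for r in range(n_rules)]
            for i in range(len(promising[0]))]

def eda(n_nurses=10, n_rules=3, pop=50, top=10, gens=30, seed=1):
    random.seed(seed)
    fitness = lambda s: sum(r == 0 for r in s)  # stand-in objective: prefer rule 0
    probs = [[1 / n_rules] * n_rules for _ in range(n_nurses)]  # uniform start
    for _ in range(gens):
        population = [sample_string(probs) for _ in range(pop)]
        promising = sorted(population, key=fitness, reverse=True)[:top]
        probs = estimate_probs(promising, n_rules)  # model-update step
    return sample_string(probs)

best = eda()  # a rule string concentrated on the stand-in objective
```

In the paper, the model-update step fits conditional probabilities in a Bayesian network rather than independent marginals, and new rule strings replace old ones by fitness selection rather than full generational turnover.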
Abstract:
Background and Purpose - Loss of motor function is common after stroke and leads to significant chronic disability. Stem cells are capable of self-renewal and of differentiating into multiple cell types, including neurones, glia, and vascular cells. We assessed the safety of granulocyte colony-stimulating factor (G-CSF) after stroke and its effect on circulating CD34+ stem cells. Methods - We performed a 2-center, dose-escalation, double-blind, randomized, placebo-controlled pilot trial (ISRCTN 16784092) of G-CSF (6 blocks of 1 to 10 μg/kg SC, 1 or 5 daily doses) in 36 patients with recent ischemic stroke. Circulating CD34+ stem cells were measured by flow cytometry; blood counts and measures of safety and functional outcome were also monitored. All measures were made blinded to treatment. Results - Thirty-six patients, whose mean (SD) age was 76 (8) years and of whom 50% were male, were recruited. G-CSF (5 days of 10 μg/kg) increased the CD34+ count in a dose-dependent manner, from 2.5 to 37.7 at day 5 (area under curve, P<0.005). A dose-dependent rise in white cell count (P<0.001) was also seen. There was no difference between treatment groups in the number of patients with serious adverse events (G-CSF, 7/24 [29%] versus placebo, 3/12 [25%]) or in their dependence (modified Rankin Scale, median 4; interquartile range, 3 to 5) at 90 days. Conclusions - G-CSF is effective at mobilizing bone marrow CD34+ stem cells in patients with recent ischemic stroke. Administration is feasible and appears to be safe and well tolerated. The fate of mobilized cells and their effect on functional outcome remain to be determined. (Stroke. 2006;37:2979-2983.)
An Estimation of Distribution Algorithm with Intelligent Local Search for Rule-based Nurse Rostering
Abstract:
This paper proposes a new memetic evolutionary algorithm to achieve explicit learning in rule-based nurse rostering, which involves applying a set of heuristic rules for each nurse's assignment. The main framework of the algorithm is an estimation of distribution algorithm, in which an ant-miner methodology improves the individual solutions produced in each generation. Unlike our previous work (where learning is implicit), the learning in the memetic estimation of distribution algorithm is explicit, i.e. we are able to identify building blocks directly. The overall approach learns by building a probabilistic model, i.e. an estimation of the probability distribution of the individual nurse-rule pairs that are used to construct schedules. The local search processor (i.e. the ant-miner) reinforces nurse-rule pairs that receive higher rewards. A challenging real-world nurse rostering problem is used as the test problem. Computational results show that the proposed approach outperforms most existing approaches. It is suggested that the learning methodologies presented in this paper may be applied to other scheduling problems where schedules are built systematically according to specific rules.
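The reinforcement of high-reward nurse-rule pairs by the local search processor can be sketched as a pheromone-style update on the probability model. The evaporation rate and reward values below are illustrative assumptions, not the paper's exact scheme:

```python
# Hypothetical sketch: P[nurse][rule] is the estimated probability model;
# an ant-miner-style local search reinforces the pairs that earned rewards.

def reinforce(P, rewarded_pairs, rho=0.1):
    """Evaporate all entries, deposit reward on used pairs, renormalise rows."""
    P = [[(1 - rho) * p for p in row] for row in P]
    for (nurse, rule), reward in rewarded_pairs.items():
        P[nurse][rule] += rho * reward
    # renormalise each nurse's row back into a probability distribution
    return [[p / sum(row) for p in row] for row in P]

P = [[1 / 3] * 3 for _ in range(2)]           # 2 nurses, 3 rules, uniform start
P = reinforce(P, {(0, 1): 1.0, (1, 2): 1.0})  # pairs that built good schedules
```

Repeated reinforcement concentrates each nurse's row on the rules that keep paying off, which is the explicit-learning effect the abstract describes.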
Abstract:
Human resource management in schools located within Indigenous communities is marked by various social, cultural, ethnocultural, economic, and administrative issues that affect the practices of their principals. These issues touch on every aspect of school management and may reveal an unease in the supervision of actors through administrative, legal, educational, or governance structures that involve major relational and interactional challenges. This type of unease can shape the actions of school staff and affect their relationships, particularly the relationships of trust that are essential to the quality of their joint actions. The in-depth exploration of this problem focuses primarily on the conditions associated with building trust, which are of different orders: contextual, institutional, organizational, relational, or individual. Using a qualitative approach, this research draws on twenty-three semi-structured interviews with school principals from seventeen communities and three different Indigenous nations. The analysis follows an exploratory constructivist and interpretivist approach. The findings show that the construction of relationships of trust between actors depends on conditions within which particular interactional dynamics take shape. Influenced by the singular Indigenous context, these conditions either pre-exist the actors or are associated with their behaviours, attitudes, actions, or practices.
These dynamics appear to fit within a configuration of school teams characterized by six typical categories of individuals, defined by their origin and their ethnic belonging or identity: Indigenous and non-Indigenous travellers, Indigenous and non-Indigenous strangers, and Indigenous and non-Indigenous natives. A better understanding of this organization leads to a broad conception of the configuration of interactional dynamics between individuals and groups and between communities of individuals. These individuals affiliate specifically according to individual or group identities or belongings of different orders, particularly, but not exclusively, ethnic, linguistic, or familial, or relating to particular beliefs.
The whole-cell immobilization of D-hydantoinase-engineered Escherichia coli for D-CpHPG biosynthesis
Abstract:
Background: D-Hydroxyphenylglycine is considered an important chiral molecular building block for products such as pesticides and β-lactam antibiotics. Its production is catalyzed by D-hydantoinase and D-carbamoylase in a two-step enzymatic reaction, and enhancing the catalytic potential of the two enzymes is valuable for industrial application. In this investigation, an Escherichia coli strain genetically engineered with D-hydantoinase was immobilized in calcium alginate with certain adjuncts to evaluate the optimal conditions for the biosynthesis of D-carbamoyl-p-hydroxyphenylglycine (D-CpHPG), a compound that is further converted to D-hydroxyphenylglycine (D-HPG) by D-carbamoylase. Results: The optimal medium for producing D-CpHPG by whole-cell immobilization was a modified Luria-Bertani (LB) medium supplemented with 3.0% (W/V) alginate, 1.5% (W/V) diatomite, 0.05% (W/V) CaCl2, and 1.00 mM MnCl2. The optimal diameter of the immobilized beads for whole-cell biosynthesis was 2.60 mm. The maximum production rate of D-CpHPG reached 76%, and the immobilized beads could be reused for 12 batches. Conclusions: This investigation not only provides an effective procedure for the biological production of D-CpHPG, but also gives insight into whole-cell immobilization technology. © 2016 Pontificia Universidad Católica de Valparaíso. Production and hosting by Elsevier B.V. All rights reserved.
Abstract:
In the last three decades, there has been broad academic and industrial interest in conjugated polymers as semiconducting materials for organic electronics. Their applications in polymer light-emitting diodes (PLEDs), polymer solar cells (PSCs), and organic field-effect transistors (OFETs) offer opportunities for the resolution of energy issues as well as the development of display and information technologies [1]. Conjugated polymers provide several advantages, including low cost, light weight, good flexibility, and solubility, which make them readily processed and easily printed, removing the need for conventional photolithography in patterning [2]. A large library of polymer semiconductors has been synthesized and investigated with different building blocks, such as acenes or thiophene and their derivatives, which have been employed to design new materials according to individual demands for specific applications. To design ideal conjugated polymers for specific applications, some general principles should be taken into account, including (i) side chains, (ii) molecular weights, (iii) band gap and HOMO and LUMO energy levels, and (iv) suitable morphology [3-6]. The aim of this study is to elucidate the impact that substitution exerts on the molecular and electronic structure of π-conjugated polymers with outstanding performance in organic electronic devices. Different configurations of the π-conjugated backbones are analyzed: (i) donor-acceptor configurations, (ii) 1D linear or 2D branched conjugated backbones, and (iii) encapsulated polymers (see Figure 1). Our combined vibrational spectroscopy and DFT study shows that small changes in the substitution pattern and in the molecular configuration have a strong impact on the electronic characteristics of these polymers. We hope this study can advance useful structure-property relationships of conjugated polymers and guide the design of new materials for organic electronic applications.
Abstract:
Estimates of effective population size in the Holstein cattle breed have usually been low despite the large number of animals that constitute this breed. Effective population size is inversely related to the rates at which coancestry and inbreeding increase, and these rates have been high as a consequence of intense and accurate selection. Traditionally, coancestry and inbreeding coefficients have been calculated from pedigree data. However, the development of genome-wide single nucleotide polymorphisms has increased interest in calculating these coefficients from molecular data in order to improve their accuracy. In this study, genomic estimates of coancestry, inbreeding, and effective population size were obtained in the Spanish Holstein population and then compared with pedigree-based estimates. A total of 11,135 animals genotyped with the Illumina BovineSNP50 BeadChip were available for the study. After applying filtering criteria, the final genomic dataset included 36,693 autosomal SNPs and 10,569 animals. Pedigree data from those genotyped animals included 31,203 animals. These individuals represented only the last five generations in order to homogenise the amount of pedigree information across animals. Genomic estimates of coancestry and inbreeding were obtained from identity-by-descent segments (coancestry) or runs of homozygosity (inbreeding). The results indicate that the percentage of variance of pedigree-based coancestry estimates explained by genomic coancestry estimates was higher than that for inbreeding. Estimates of effective population size obtained from genome-wide and pedigree information were consistent and ranged from about 66 to 79. These low values emphasize the need to control the rate of increase of coancestry and inbreeding in Holstein selection programmes.
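The inverse relationship between effective population size and the rate of inbreeding increase mentioned above is the standard formula Ne = 1/(2ΔF). The ΔF values below are illustrative, chosen only to land inside the 66-79 range the study reports:

```python
def effective_population_size(delta_f):
    """Ne = 1 / (2 * dF), where dF is the per-generation rate of
    increase in inbreeding (or, equivalently, coancestry)."""
    return 1.0 / (2.0 * delta_f)

# An increase of ~0.7% per generation (illustrative value) gives Ne ≈ 71,
# within the 66-79 range reported in the study.
ne = effective_population_size(0.007)
```

The formula makes the abstract's point concrete: halving the per-generation rise in coancestry/inbreeding doubles the effective population size, which is why selection programmes aim to control that rate.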
Abstract:
The first goal of this study is to analyse a real-world multiproduct onshore pipeline system in order to verify its hydraulic configuration and operational feasibility by constructing a simulation model, step by step, from its elementary building blocks, so that the model reproduces the operation of the real system as closely as possible. The second goal is to develop this simulation model into a user-friendly tool for finding an 'optimal' or 'best' product batch schedule for a one-year time period. Such a batch schedule can change dynamically as perturbations occur during operation that influence the behaviour of the entire system. The result of the simulation, the 'best' batch schedule, is the one that minimizes the operational costs of the system. The costs involved in the simulation are inventory costs, interface costs, pumping costs, and penalty costs assigned to any unforeseen situations. The key factor determining the performance of the simulation model is the way time is represented. In our model, an event-based discrete time representation is selected as most appropriate for our purposes. This means that the time horizon is divided into intervals of unequal length based on events that change the state of the system. These events are the arrivals/departures of the tanker ships, the openings and closures of the loading/unloading valves of storage tanks at both terminals, and the arrivals/departures of trains/trucks at the Delivery Terminal. In the feasibility study we analyse the system's operational performance with different Head Terminal storage capacity configurations. For these alternative configurations we evaluate the effect of tanker-ship delays of different magnitudes on the number of critical events and product interfaces generated, on the duration of pipeline stoppages, on the satisfaction of product demand, and on the operating costs. Based on the results and the bottlenecks identified, we propose modifications to the original setup.
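The event-based time representation described above can be sketched with a priority queue: the clock jumps from one state-changing event to the next, so interval lengths are unequal by construction. The event names below are illustrative stand-ins for the tanker, valve, and train/truck events of the real model:

```python
import heapq

def simulate(events):
    """Pop (time, description) events in chronological order; return the
    event log and the unequal interval lengths between successive events."""
    heap = list(events)
    heapq.heapify(heap)
    log, intervals, now = [], [], 0.0
    while heap:
        t, what = heapq.heappop(heap)
        intervals.append(t - now)  # interval length is dictated by the events
        now = t
        log.append((t, what))
    return log, intervals

# Illustrative events, deliberately out of order; the queue sorts them by time.
events = [(5.0, "tanker ship arrival"), (2.0, "tank valve opens"),
          (9.5, "train departs Delivery Terminal")]
log, intervals = simulate(events)
```

A fixed-step representation would instead advance the clock by a constant amount whether or not anything happened; the event-based form skips the idle stretches, which is why it suits a system driven by irregular arrivals and valve operations.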