14 results for Building Blocks for Creative Practice
at Université de Lausanne, Switzerland
Abstract:
The development of forensic intelligence relies on the expression of suitable models that better represent the contribution of forensic intelligence in relation to the criminal justice system, policing and security. Such models assist in comparing and evaluating methods and new technologies, provide transparency and foster the development of new applications. Interestingly, strong similarities between two separate projects focusing on specific forensic science areas were recently observed. These observations have led to the induction of a general model (Part I) that could guide the use of any forensic science case data from an intelligence perspective. The present article builds upon this general approach by focusing on decisional and organisational issues. The article investigates the comparison process and evaluation system that lie at the heart of the forensic intelligence framework, advocating scientific decision criteria and a structured but flexible and dynamic architecture. These building blocks are crucial and lie squarely within the expertise of forensic scientists. However, they are only part of the problem. Forensic intelligence includes other blocks with their respective interactions, decision points and tensions (e.g. regarding how to guide detection and how to integrate forensic information with other information). Formalising these blocks identifies many questions and potential answers. Addressing these questions is essential for the progress of the discipline. Such a process requires clarifying the role and place of the forensic scientist within the whole process and their relationship to other stakeholders.
Abstract:
Amino acids form the building blocks of all proteins. Naturally occurring amino acids are restricted to a few tens of sidechains, even when considering post-translational modifications and rare amino acids such as selenocysteine and pyrrolysine. However, the potential chemical diversity of amino acid sidechains is nearly infinite. Exploiting this diversity by using non-natural sidechains to expand the building blocks of proteins and peptides has recently found widespread applications in biochemistry, protein engineering and drug design. Despite these applications, there is currently no unified online bioinformatics resource for non-natural sidechains. With the SwissSidechain database (http://www.swisssidechain.ch), we offer a central, curated platform on non-natural sidechains for researchers in biochemistry, medicinal chemistry, protein engineering and molecular modeling. SwissSidechain provides biophysical, structural and molecular data for hundreds of commercially available non-natural amino acid sidechains, in both L- and D-configurations. The database can easily be browsed by sidechain names, families or physico-chemical properties. We also provide plugins to seamlessly insert non-natural sidechains into peptides and proteins using molecular visualization software, as well as topologies and parameters compatible with molecular mechanics software.
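As an illustration of the kind of property-based browsing the database offers, the Python sketch below filters a small hand-made table of sidechain records by family and molecular weight. The record fields and values are illustrative assumptions, not the actual SwissSidechain schema.

```python
# Hypothetical sketch: property-based filtering of non-natural sidechain
# records, in the spirit of SwissSidechain's browse-by-property interface.
# Field names and values are made up for illustration.

sidechains = [
    {"name": "ORN", "family": "basic",     "mw": 114.1, "logp": -1.2},
    {"name": "NLE", "family": "aliphatic", "mw": 113.2, "logp":  1.1},
    {"name": "CHA", "family": "aliphatic", "mw": 153.2, "logp":  2.0},
]

def browse(records, family=None, max_mw=None):
    """Return records matching an optional family and molecular-weight cap."""
    hits = []
    for r in records:
        if family is not None and r["family"] != family:
            continue
        if max_mw is not None and r["mw"] > max_mw:
            continue
        hits.append(r)
    return hits

for r in browse(sidechains, family="aliphatic", max_mw=120.0):
    print(r["name"], r["mw"])
```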
Abstract:
The European Variscan and Alpine mountain chains are collisional orogens built up of pre-Variscan "building blocks" which, in most cases, originated at the Gondwana margin. Such pre-Variscan elements were part of a pre-Ordovician archipelago-like continental ribbon in the former eastern prolongation of Avalonia, and their present-day distribution resulted from juxtaposition through Variscan and/or Alpine tectonic evolution. The well-known nomenclatures applied to these mountain chains mirror the Variscan and Alpine organization, respectively. The aim of this paper is to present a terminology that takes into account their pre-Variscan evolution at the Gondwana margin. They may contain relics of volcanic islands with pieces of Cadomian crust, relics of volcanic arc settings, and accretionary wedges, which were separated from Gondwana by the initial stages of Rheic Ocean opening. After a short-lived Ordovician orogenic event and amalgamation of these elements at the Gondwanan margin, the still-continuing Gondwana-directed subduction triggered the formation of Ordovician Al-rich granitoids and the latest Ordovician opening of Palaeo-Tethys. An example from the Alps (External Massifs) illustrates the gradual reworking of Gondwana-derived, pre-Variscan elements during the Variscan and Alpine/Tertiary orogenic cycles.
Abstract:
The synthesis of L-fucosylated cysteamine, 3-thiopropionic acid, and 3-thioacetic acid derivatives as building blocks for the preparation of S-neofucopeptides is described. These compounds were used in the synthesis of new thiofucoside derivatives (8, 9, 9, 10, 22, 22, 24, 26) that show affinity towards E- and P-selectins. They constitute a new series of hydrolytically stable, low-molecular-weight mimetics of the natural sialyl Lewis x (sLex) tetrasaccharide.
Abstract:
This PhD thesis addresses the issue of alleviating the burden of developing ad hoc applications. Such applications run on mobile devices, communicate in a peer-to-peer manner and implement proximity-based semantics. A typical example is a radar application in which users see their own avatar and those of their friends on a map on their mobile phone. Such applications have become increasingly popular with the advent of the latest generation of mobile smartphones, with their impressive computational power, peer-to-peer communication capabilities and location-detection technology. Unfortunately, existing programming support for such applications is limited, hence the need to alleviate their development burden. This thesis tackles the problem by providing several tools for application-development support. First, it provides the location-based publish/subscribe service (LPSS), a communication abstraction that elegantly captures recurrent communication issues and thus dramatically reduces code complexity. LPSS is implemented in a modular manner in order to target two different network architectures. One pragmatic implementation is aimed at mainstream infrastructure-based mobile networks, where mobile devices communicate through fixed antennas. The other, fully decentralized, implementation targets emerging mobile ad hoc networks (MANETs), where no fixed infrastructure is available and communication can only occur in a peer-to-peer fashion. For each of these architectures, various implementation strategies tailored to different application scenarios can be parametrized at deployment time. Second, the thesis provides two location-based message-diffusion protocols, namely 6Shot broadcast and 6Shot multicast, specifically aimed at MANETs and fine-tuned to serve as building blocks for LPSS. Finally, the thesis proposes Phomo, a phone-motion testing tool that allows the proximity semantics of ad hoc applications to be tested without moving around with mobile devices. These development-support tools are packaged in a coherent middleware framework called Pervaho.
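To make the abstraction concrete, here is a minimal, hypothetical Python sketch of what a location-based publish/subscribe service could look like from the application's point of view; the class and method names are illustrative and do not reproduce the actual LPSS/Pervaho API.

```python
import math

class LocationBasedPubSub:
    """Toy broker: a publication is delivered to every subscriber whose
    subscription range covers the publication's location. Illustrative
    only; not the actual LPSS/Pervaho interface."""

    def __init__(self):
        self.subscriptions = []  # (topic, (x, y), radius, callback)

    def subscribe(self, topic, position, radius, callback):
        self.subscriptions.append((topic, position, radius, callback))

    def publish(self, topic, position, payload):
        for sub_topic, sub_pos, radius, callback in self.subscriptions:
            if sub_topic == topic and math.dist(position, sub_pos) <= radius:
                callback(payload)

# A 'radar'-style usage example: be notified of friends within 100 m.
broker = LocationBasedPubSub()
broker.subscribe("friends", (0.0, 0.0), 100.0, lambda msg: print("nearby:", msg))
broker.publish("friends", (30.0, 40.0), "alice")   # distance 50 -> delivered
broker.publish("friends", (300.0, 400.0), "bob")   # distance 500 -> filtered out
```

The point the sketch captures is that delivery is gated by proximity rather than by topic alone, which is exactly the recurrent concern that LPSS factors out of application code.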
Abstract:
Integrated approaches using different in vitro methods in combination with bioinformatics can (i) increase the success rate and speed of drug development; (ii) improve the accuracy of toxicological risk assessment; and (iii) increase our understanding of disease. Three-dimensional (3D) cell culture models are important building blocks of this strategy, which has emerged during recent years. The majority of these models are organotypic, i.e., they aim to reproduce major functions of an organ or organ system. In many cases this implies that more than one cell type forms the 3D structure, and often matrix elements play an important role. This review summarizes the state of the art concerning commonalities of the different models. For instance, the theory of mass transport and metabolite exchange in 3D systems and the special analytical requirements for test endpoints in organotypic cultures are discussed in detail. In the next part, 3D model systems for selected organs (liver, lung, skin, brain) are presented and characterized in dedicated chapters. 3D approaches to the modeling of tumors are also presented and discussed. All chapters give a historical background, illustrate the large variety of approaches, and highlight advantages and drawbacks as well as specific requirements. Moreover, they refer to applications in disease modeling, drug discovery and safety assessment. Finally, consensus recommendations indicate a roadmap for the successful implementation of 3D models in routine screening. The use of such models is expected to accelerate progress by reducing error rates and wrong predictions from compound testing.
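As a back-of-the-envelope illustration of the mass-transport issue such reviews discuss: for a spherical cell aggregate of radius \(R\) with surface nutrient concentration \(c_s\), diffusion coefficient \(D\) and assumed zeroth-order consumption rate \(Q\), the classical steady-state profile and the largest radius that avoids a depleted core follow from Fick's law (a standard textbook model, not a result of the review itself):

```latex
% Steady-state diffusion-consumption in a spherical 3D aggregate
% (standard zeroth-order-consumption model; not from the review itself).
D\,\frac{1}{r^{2}}\frac{\mathrm{d}}{\mathrm{d}r}\!\left(r^{2}\,\frac{\mathrm{d}c}{\mathrm{d}r}\right) = Q
\quad\Longrightarrow\quad
c(r) = c_s - \frac{Q}{6D}\bigl(R^{2}-r^{2}\bigr),
\qquad
R_{\mathrm{crit}} = \sqrt{\frac{6\,D\,c_s}{Q}} .
```

Aggregates larger than \(R_{\mathrm{crit}}\) develop a nutrient-starved core, which is one reason organotypic 3D cultures impose special analytical and design requirements.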
Abstract:
With the advancement of high-throughput sequencing and the dramatic increase of available genetic data, statistical modeling has become an essential part of the field of molecular evolution. Statistical modeling has led to many interesting discoveries in the field, from the detection of highly conserved or diverse regions in a genome to the phylogenetic inference of species' evolutionary history. Among the different types of genome sequences, protein-coding regions are particularly interesting due to their impact on proteins. The building blocks of proteins, i.e. amino acids, are coded by triplets of nucleotides, known as codons. Accordingly, studying the evolution of codons leads to a fundamental understanding of how proteins function and evolve. Current codon models can be classified into three principal groups: mechanistic codon models, empirical codon models and hybrid ones. Mechanistic models attract particular attention due to the clarity of their underlying biological assumptions and parameters. However, they suffer from simplifying assumptions that are required to overcome the burden of computational complexity. The main assumptions applied to current mechanistic codon models are that (a) double and triple substitutions of nucleotides within codons are negligible, (b) there is no mutation variation among the nucleotides of a single codon, and (c) the HKY nucleotide model is sufficient to capture the essence of transition and transversion rates at the nucleotide level. In this thesis, I pursue two main objectives. The first is to develop a framework of mechanistic codon models, named the KCM-based model family framework, based on holding or relaxing the above assumptions. Accordingly, eight different models are proposed from the eight combinations of holding or relaxing the assumptions, from the simplest one, which holds all of them, to the most general one, which relaxes all of them. The models derived from the proposed framework allow me to investigate the biological plausibility of the three simplifying assumptions on real data sets, as well as to find the best model aligned with the underlying characteristics of each data set. The experiments show that in none of the real data sets is it realistic to hold all three assumptions: simple models that hold them can be misleading and can yield inaccurate parameter estimates. The second objective is to develop a generalized mechanistic codon model that relaxes all three simplifying assumptions while remaining computationally efficient, using a matrix operation called the Kronecker product. The experiments show that, on randomly chosen data sets, the proposed generalized mechanistic codon model outperforms the other codon models with respect to the AICc metric in about half of the data sets. Furthermore, several experiments indicate that the proposed general model is biologically plausible.
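The Kronecker-product construction can be sketched as follows; the notation is illustrative, not necessarily the exact KCM parameterization. With a 4×4 nucleotide rate matrix \(Q_i\) for codon position \(i\), \(I\) the 4×4 identity, and \(\tilde Q_i\) denoting \(Q_i\) with its diagonal zeroed, the 64×64 codon rate matrix decomposes into single-, double- and triple-substitution terms:

```latex
% Sketch of a Kronecker-structured codon rate matrix (notation illustrative):
R_{\mathrm{single}} = Q_1 \otimes I \otimes I \;+\; I \otimes Q_2 \otimes I \;+\; I \otimes I \otimes Q_3 ,
\qquad
R_{\mathrm{double}} \propto \tilde Q_1 \otimes \tilde Q_2 \otimes I \;+\; \tilde Q_1 \otimes I \otimes \tilde Q_3 \;+\; I \otimes \tilde Q_2 \otimes \tilde Q_3 ,
\qquad
R_{\mathrm{triple}} \propto \tilde Q_1 \otimes \tilde Q_2 \otimes \tilde Q_3 .
```

In this picture, relaxing assumption (a) amounts to keeping the double and triple terms, relaxing (b) to allowing distinct \(Q_i\) per codon position, and relaxing (c) to replacing HKY with a more general nucleotide model; selection then rescales nonsynonymous entries and diagonals are reset so that rows sum to zero.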
Abstract:
Regulatory gene networks contain generic modules, like those involving feedback loops, which are essential for the regulation of many biological functions (Guido et al. in Nature 439:856-860, 2006). We consider a class of self-regulated genes, which are the building blocks of many regulatory gene networks, and study the steady-state distribution of the associated stochastic dynamics, as simulated by the Gillespie algorithm, by providing efficient numerical algorithms. We also study a regulatory gene network of interest in gene therapy, using mean-field models with time delays. Convergence of the related time-nonhomogeneous Markov chain is established for a class of linear catalytic networks with feedback loops.
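As a concrete illustration of the kind of system studied, the following Python sketch simulates a generic self-regulated gene (negative-feedback birth-death dynamics) with the Gillespie algorithm and estimates the steady-state distribution by time-averaging a long trajectory. The propensity functions are illustrative assumptions, not the specific networks analysed in the paper.

```python
import math
import random
from collections import defaultdict

def gillespie_self_regulated(k=10.0, K=5.0, gamma=1.0, t_end=10000.0, seed=1):
    """Gillespie simulation of a toy self-regulated gene: the protein copy
    number n is produced at rate k/(1 + n/K) (negative feedback on its own
    synthesis) and degraded at rate gamma*n. Illustrative propensities only."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    occupancy = defaultdict(float)  # total time spent at each copy number
    while t < t_end:
        birth = k / (1.0 + n / K)
        death = gamma * n
        total = birth + death
        dt = -math.log(1.0 - rng.random()) / total  # exponential waiting time
        occupancy[n] += dt
        t += dt
        # pick the next reaction proportionally to its propensity
        n = n + 1 if rng.random() * total < birth else n - 1
    z = sum(occupancy.values())
    return {state: w / z for state, w in sorted(occupancy.items())}

dist = gillespie_self_regulated()
mean_n = sum(state * p for state, p in dist.items())
print(f"estimated steady-state mean copy number: {mean_n:.2f}")
```

Time-averaging a single long trajectory is the simplest estimator of the steady-state distribution and serves as a baseline against which dedicated numerical algorithms, such as those the paper provides, can be compared.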
Abstract:
Biomolecular structures are assemblies of emergent anisotropic building modules such as uniaxial helices or biaxial strands. We provide an approach to understanding a marginally compact phase of matter that is occupied by proteins and DNA. This phase, which is in some respects analogous to the liquid crystal phase for chain molecules, stabilizes a range of shapes that can be obtained by sequence-independent interactions occurring intra- and intermolecularly between polymeric molecules. We present a singularity-free self-interaction for a tube in the continuum limit and show that this results in the tube being positioned in the marginally compact phase. Our work provides a unified framework for understanding the building blocks of biomolecules.
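One standard way to write a singularity-free self-interaction for a tube, due to Gonzalez and Maddocks and used in tube models of this kind, replaces pairwise distances by the circumradius of point triples on the curve \(\gamma\); the sketch below states the constraint, though the paper's continuum formulation may differ in detail:

```latex
% Three-body (circumradius) tube constraint, after Gonzalez & Maddocks;
% a sketch of the singularity-free self-interaction idea.
R(x,y,z) = \frac{\lVert x-y\rVert\,\lVert y-z\rVert\,\lVert z-x\rVert}{4\,A(x,y,z)} ,
\qquad
\min_{x,y,z\,\in\,\gamma} R(x,y,z) \;\geq\; \Delta ,
```

where \(A(x,y,z)\) is the area of the triangle through the three points and \(\Delta\) the tube thickness. Because the circumradius of three nearby points on a smooth curve tends to the finite local radius of curvature, the constraint has no short-distance singularity, unlike pairwise potentials.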
Abstract:
Glucose is the primary source of energy for the brain, but also an important source of building blocks for proteins, lipids and nucleic acids. Little is known about the use of glucose for biosynthesis in tissues at the cellular level. We demonstrate that local cerebral metabolic activity can be mapped in mouse brain tissue by quantitatively imaging the biosynthetic products deriving from [U-(13)C]glucose metabolism using a combination of in situ electron microscopy and secondary ion mass spectrometry (NanoSIMS). Images of the (13)C label incorporated into cerebral ultrastructure with ca. 100 nm resolution allowed us to determine the timescale on which the metabolic products of glucose are incorporated into different cells, their sub-compartments and organelles. These were mapped in astrocytes and neurons in the different layers of the motor cortex. We see evidence for high metabolic activity in neurons via the (13)C enrichment of the nucleus. We observe that in all major cell compartments, such as the nucleus and the Golgi apparatus, neurons incorporate substantially higher concentrations of (13)C label than astrocytes.
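For orientation, quantification in NanoSIMS maps of this kind typically rests on the per-pixel isotope fraction compared with the natural (13)C abundance of about 1.1%; the following is a standard formulation, not necessarily the paper's exact normalization:

```latex
% Per-pixel 13C fraction and its excess over natural abundance
% (standard NanoSIMS quantification sketch; details may differ in the paper).
x_{^{13}\mathrm{C}} \;=\; \frac{N(^{13}\mathrm{C})}{N(^{12}\mathrm{C}) + N(^{13}\mathrm{C})} ,
\qquad
\Delta x \;=\; x^{\mathrm{sample}}_{^{13}\mathrm{C}} - x^{\mathrm{natural}}_{^{13}\mathrm{C}} ,
```

where \(N\) denotes secondary-ion counts in a pixel and \(\Delta x\) (often reported ×100 as atom percent excess) is the enrichment attributable to the (13)C-labelled glucose.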
Abstract:
NanoImpactNet (NIN) is a multidisciplinary European Commission-funded network on the environmental, health and safety (EHS) impact of nanomaterials. The 24 founding scientific institutes are leading European research groups active in the fields of nanosafety, nanorisk assessment and nanotoxicology. This 4-year project is the new focal point for information exchange within the research community. Contact with other stakeholders is vital, and their needs are being surveyed. NIN is communicating with hundreds of stakeholders: businesses; internet platforms; industry associations; regulators; policy makers; national ministries; international agencies; standard-setting bodies and NGOs concerned with labour rights, EHS or animal welfare. To improve this communication, internet research, a questionnaire distributed via partners and targeted phone calls were used to identify stakeholders' interests and needs. Knowledge gaps and the need for further data mentioned by representatives of all stakeholder groups in the targeted phone calls concerned:
• the potential toxic and safety hazards of nanomaterials throughout their lifecycles;
• the fate and persistence of nanoparticles in humans, animals and the environment;
• the risks associated with nanoparticle exposure;
• greater participation in the preparation of nomenclature, standards, methodologies, protocols and benchmarks;
• the development of best-practice guidelines;
• voluntary schemes on responsibility;
• databases of materials, research topics and themes, but also of expertise.
These findings suggested that stakeholders and NIN researchers share very similar knowledge needs, and that open communication and the free movement of knowledge will benefit both researchers and industry. Subsequently, NIN organised a workshop focused on building a sustainable multi-stakeholder dialogue. Specific questions were put to different stakeholder groups to encourage discussion and open communication.
1. What information do stakeholders need from researchers, and why? The discussions of this question confirmed the needs identified in the targeted phone calls.
2. How should information be communicated? While it was agreed that reporting should be enhanced, commercial confidentiality and economic competition were identified as major obstacles. It was recognised that expertise in commercial law and economics would be needed for a well-informed treatment of this communication issue.
3. Can engineered nanomaterials be used safely? The idea that nanomaterials are probably safe because some of them have been produced 'for a long time' was questioned, since many materials in common use have been proved to be unsafe. The question of safety is also about whether the public has confidence. New legislation like REACH could help with this issue. Hazards do not materialise if exposure can be avoided or at least significantly reduced; thus, there is a need for information on what can be regarded as acceptable levels of exposure. Finally, it was noted that there is no such thing as a perfectly safe material, only boundaries, and at this moment we do not know where these boundaries lie. The matter of labelling products containing nanomaterials was raised, as safety and labelling are connected in the public mind. This may need to be addressed, since the issue of nanomaterials in food, drink and food packaging may be the first safety issue to attract public and media attention, and this may have an impact on nanotechnology as a whole.
4. Do we need more or other regulation? Any decision-making process should accommodate the changing level of uncertainty. To address the uncertainties, adaptations of frameworks such as REACH may be indicated for nanomaterials. Regulation is often needed even if voluntary measures are welcome, because it mitigates the effects of competition between industries; data cannot be collected on a voluntary basis, for example. NIN will continue an active stakeholder dialogue to further build interdisciplinary relationships towards a healthy future with nanotechnology.
Abstract:
This article draws on empirical material to reflect on what drives rapid change in flood risk management practice, reflecting wider interest in the way that scientific practices make risk landscapes and a specific focus on extreme events as drivers of rapid change. Such events are commonly referred to as a form of creative destruction: they both reveal the composition of socioenvironmental assemblages and provide a creative opportunity to remake those assemblages in alternate ways, thereby rapidly changing policy and practice. Drawing on wider thinking in complexity theory, we argue that what happens between events might be as important as, if not more important than, the events themselves. We use two empirical examples concerned with flood risk management practice: a rapid shift in the dominant technologies used to map flood risk in the United Kingdom, and an experimental approach to public participation tested in two different locations with dramatically different consequences. Both show that the state of the socioenvironmental assemblage in which the events take place matters as much as the magnitude of the events themselves. The periods between rapid changes are not simply periods of discursive consolidation but involve the ongoing mutation of such assemblages, which can either sensitize or desensitize them to rapid change. Understanding these intervening periods matters as much as understanding the events themselves. If events matter, it is because of the ways in which they bring into sharp focus the coding or framing of a socioenvironmental assemblage in policy or scientific practice, whether they change the assemblage in subtle or more radical ways.
Abstract:
PRINCIPLES: The literature has described opinion leaders not only as marketing tools of the pharmaceutical industry but also as educators promoting good clinical practice. This qualitative study addresses the distinction between the opinion-leader-as-marketing-tool and the opinion-leader-as-educator as it is revealed in the discourses of physicians and experts, focusing on the prescription of antidepressants. We explore the relational dynamic between physicians, opinion leaders and the pharmaceutical industry in an area of French-speaking Switzerland. METHODS: Qualitative content analysis of 24 semi-structured interviews with physicians and local experts in psychopharmacology, complemented by direct observation of educational events led by the experts, all of which were sponsored by various pharmaceutical companies. RESULTS: Both physicians and experts were critical of the pharmaceutical industry and its use of opinion leaders. Local experts, in contrast, were perceived by the physicians as critical of the industry and, therefore, as a legitimate source of information. The local experts did not consider themselves opinion leaders and argued that they remained intellectually independent of the industry. Field observations confirmed that local experts criticised the industry at continuing medical education events. CONCLUSIONS: Local experts were vocal critics of the industry, which nevertheless sponsors their continuing education. This critical attitude enhanced their credibility in the eyes of the prescribing physicians. We discuss how the experts, despite their critical attitude, might still serve the industry's interests.
Abstract:
This report synthesizes the findings of 11 country reports on policy learning in labour market and social policies that were conducted as part of WP5 of the INSPIRES project, funded by the 7th Framework Programme of the EU Commission. The report puts forward objectives of policy learning, discusses tools, processes and institutions of policy learning, and presents the impact of various tools and structures of the policy-learning infrastructure on the actual policy-learning process. The report defines three objectives of policy learning: evaluation and assessment of policy effectiveness, vision building and planning, and consensus building. In the 11 countries under consideration, the tools and processes of the policy-learning infrastructure can be classified into three broad groups: public bodies; expert councils; and parties, interest groups and the private sector. Finally, we develop four recommendations for policy learning. First, learning processes should keep a balance between centralisation and plurality. Second, learning processes should be kept stable beyond the usual political business cycles. Third, policy-learning tools and infrastructures should be sufficiently independent of political influence or bias. Fourth, policy-learning tools and infrastructures should balance mere effectiveness evaluation with vision building.