912 results for fundamental principles and applications
Abstract:
The study of photophysical and photochemical processes spans many fields of research in physics, chemistry, and biology. In particular, the photophysical and photochemical reactions that follow light absorption by a photosynthetic pigment-protein complex are among the fastest events in biology, taking place on timescales ranging from tens of femtoseconds to a few nanoseconds. Among the experimental approaches developed to resolve such events, ultrafast transient absorption spectroscopy has become a powerful and widely used technique.[1,2] Photosynthesis relies upon the efficient absorption and conversion of radiant energy from the Sun, with chlorophylls and carotenoids as the main players in the process. Photosynthetic pigments are typically arranged in a highly organized fashion to constitute antennas and reaction centers, supramolecular devices where light harvesting and charge separation take place. The very early steps in the photosynthetic process occur after the absorption of a photon by an antenna system, which harvests light and eventually delivers the excitation energy to the reaction center. In order to compete with internal conversion, intersystem crossing, and fluorescence, which inevitably lead to energy loss, the energy and electron transfer processes that fix the excited-state energy in photosynthesis must be extremely fast. To investigate these events, ultrafast techniques with sub-100 fs resolution must be used. In this way, energy migration within the system, as well as the formation of new chemical species such as charge-separated states, can be tracked in real time. This can be achieved with ultrafast transient absorption spectroscopy. The basic principles of this notable technique, its instrumentation, and some recent applications to photosynthetic systems[3] will be described. Acknowledgements: M. Moreno Oliva thanks the MINECO for a “Juan de la Cierva-Incorporación” research contract.
References
[1] U. Megerle, I. Pugliesi, C. Schriever, C. F. Sailer and E. Riedle, Appl. Phys. B, 96, 215–231 (2009).
[2] R. Berera, R. van Grondelle and J. T. M. Kennis, Photosynth. Res., 101, 105–118 (2009).
[3] T. Nikkonen, M. Moreno Oliva, A. Kahnt, M. Muuronen, J. Helaja and D. M. Guldi, Chem. Eur. J., 21, 590–600 (2015).
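The core observable of transient absorption spectroscopy described above, the pump-induced change in absorbance, can be sketched numerically. The intensity values below are illustrative assumptions, not measured data; only the formula ΔA = −log10(I_pump-on / I_pump-off) is the standard definition.

```python
import numpy as np

# Hypothetical probe intensity spectra with and without the pump pulse
# (arbitrary units), at four illustrative probe wavelengths.
wavelengths = np.array([450.0, 500.0, 550.0, 600.0])  # nm
i_pump_on = np.array([0.95, 0.80, 1.05, 1.00])
i_pump_off = np.array([1.00, 1.00, 1.00, 1.00])

# Transient absorption signal: delta_A = -log10(I_on / I_off).
# Positive delta_A -> excited-state absorption;
# negative delta_A -> ground-state bleach or stimulated emission.
delta_a = -np.log10(i_pump_on / i_pump_off)
print(delta_a)
```

Repeating this calculation at many pump-probe delays gives the time-resolved map ΔA(λ, t) from which energy migration and charge-separated states are tracked.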
Abstract:
The sampling of volatile organic compounds (VOCs) using solid phase microextraction (SPME) is reviewed and its principles are described. The development and application of SPME in the sampling of VOCs are presented and discussed.
Abstract:
Single-cell functional proteomics assays can connect genomic information to biological function through quantitative and multiplexed protein measurements. Tools for single-cell proteomics have developed rapidly over the past five years and are providing unique opportunities. This thesis describes an emerging microfluidics-based toolkit for single-cell functional proteomics, focusing on the development of single-cell barcode chips (SCBCs) with applications in fundamental and translational cancer research.
The discussion begins with a microchip designed to simultaneously quantify a panel of secreted, cytoplasmic, and membrane proteins from single cells, the prototype for subsequent proteomic microchips with more sophisticated designs for preclinical cancer research or clinical applications. The SCBCs are a highly versatile and information-rich tool for single-cell functional proteomics. They are based upon isolating individual cells, or defined numbers of cells, within microchambers, each of which is equipped with a large antibody microarray (the barcode); between a few hundred and ten thousand microchambers are included within a single microchip. Functional proteomics assays at single-cell resolution yield unique pieces of information that significantly shape the way of thinking in cancer research. An in-depth discussion of the analysis and interpretation of this unique information, such as functional protein fluctuations and protein-protein correlative interactions, will follow.
The SCBC is a powerful tool for resolving the functional heterogeneity of cancer cells. It has the capacity to extract a comprehensive picture of the signal transduction network from single tumor cells, and thus provides insight into the effect of targeted therapies on protein signaling networks. We will demonstrate this point by applying the SCBCs to investigate three isogenic cell lines of glioblastoma multiforme (GBM).
The cancer cell population is highly heterogeneous, with high-amplitude fluctuations at the single-cell level, which in turn grant robustness to the entire population. The concept of a stable population existing in the presence of random fluctuations is reminiscent of many physical systems that are successfully understood using statistical physics. Thus, tools derived from that field can be applied, using fluctuations to determine the nature of signaling networks. In the second part of the thesis, we will focus on such a case, using thermodynamics-motivated principles to understand cancer cell hypoxia: single-cell proteomics assays coupled with a quantitative version of Le Chatelier's principle, derived from statistical mechanics, yield detailed and surprising predictions, which were found to be correct in both cell lines and a primary tumor model.
The third part of the thesis demonstrates the application of this technology in preclinical cancer research to study GBM cancer cell resistance to molecular targeted therapy. Physical approaches to anticipate therapy resistance and to identify effective therapy combinations will be discussed in detail. Our approach is based upon elucidating the signaling coordination within the phosphoprotein signaling pathways that are hyperactivated in human GBMs, and interrogating how that coordination responds to perturbation by targeted inhibitors. Most signaling cascades are constituted by strongly coupled protein-protein interactions. A physical analogy of such a system is the strongly coupled atom-atom interactions in a crystal lattice. Just as the atomic interactions can be decomposed into a series of independent normal vibrational modes, a simplified picture of signaling network coordination can be achieved by diagonalizing protein-protein correlation or covariance matrices, decomposing the pairwise correlative interactions into a set of distinct linear combinations of signaling proteins (i.e., independent signaling modes). By doing so, two independent signaling modes, one associated with mTOR signaling and a second with ERK/Src signaling, have been resolved. These in turn allow us to anticipate resistance, to design combination therapies that are effective, and to identify those therapies and therapy combinations that will be ineffective. We validated our predictions in mouse tumor models, and all predictions were borne out.
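The diagonalization step described above can be sketched with synthetic data. The protein names and correlation structure below are illustrative assumptions, not the thesis's measurements; only the technique (eigendecomposition of the protein-protein covariance matrix) is taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-cell phosphoprotein levels: rows = cells, columns = proteins.
# Two latent axes stand in for mTOR-like and ERK/Src-like signaling activity.
n_cells = 500
mtor_axis = rng.normal(size=n_cells)
erk_axis = rng.normal(size=n_cells)
data = np.column_stack([
    mtor_axis + 0.1 * rng.normal(size=n_cells),  # "p-mTOR" (assumed)
    mtor_axis + 0.1 * rng.normal(size=n_cells),  # "p-S6K"  (assumed)
    erk_axis + 0.1 * rng.normal(size=n_cells),   # "p-ERK"  (assumed)
    erk_axis + 0.1 * rng.normal(size=n_cells),   # "p-Src"  (assumed)
])

# Diagonalize the protein-protein covariance matrix: each eigenvector is an
# independent linear combination of proteins (a "signaling mode"), and its
# eigenvalue measures how much cell-to-cell fluctuation that mode carries.
cov = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
modes = np.sort(eigvals)[::-1]
print(modes)
```

With two correlated pairs of proteins, two dominant eigenvalues emerge, mirroring how the two independent signaling modes were resolved from the SCBC data.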
In the last part, some preliminary results on the clinical translation of single-cell proteomics chips will be presented. The successful demonstration of our work on human-derived xenografts provides the rationale to extend our current work into the clinic. It will enable us to interrogate GBM tumor samples in a way that could yield a straightforward, rapid interpretation, so that we can give therapeutic guidance to the attending physicians within a clinically relevant time scale. The technical challenges of clinical translation will be presented, along with our solutions to address them. A clinical case study will then follow, in which preliminary data collected from a pediatric GBM patient bearing an EGFR-amplified tumor will be presented to demonstrate the general protocol and workflow of the proposed clinical studies.
Abstract:
This work attempts to create a systemic design framework for man-machine interfaces which is self-consistent, compatible with other concepts, and applicable to real situations. This is tackled by examining the current architecture of computer applications packages. The treatment is in the main philosophical and theoretical, analysing the origins, assumptions and current practice of the design of applications packages. It proposes that the present form of packages is fundamentally contradictory to the notion of packaging itself, because, as an indivisible ready-to-implement solution, current package architecture displays the following major disadvantages. First, it creates problems as a result of user-package interactions, in which the designer tries to mould all potential individual users, no matter how diverse they are, into one model. This is worsened by the minimal provision, if any, of important properties such as flexibility, independence and impartiality. Second, it displays a rigid structure that reduces the variety and/or multi-use of the component parts of such a package. Third, it dictates specific hardware and software configurations, which probably results in reducing the user's degrees of freedom. Fourth, it increases the dependence of the user upon the supplier through inadequate documentation and understanding of the package. Fifth, it tends to cause a degeneration of the design expertise of data processing practitioners. In view of this understanding, an alternative methodological design framework is proposed which is consistent both with the systems approach and with the role of a package in its likely context. The proposition is based upon an extension of the identified concept of the hierarchy of holons*, which facilitates the examination of the complex relationships of a package with its two principal environments.
The first is the user: his characteristics and his decision-making practices and procedures, implying an examination of the user's M.I.S. network. The second is the software environment and its influence upon a package regarding support, control and operation of the package. The framework is built gradually as the discussion advances around the central theme of compatible M.I.S., software and model design. This leads to the formulation of an alternative package architecture based upon the design of a number of independent, self-contained small parts. This is believed to constitute the nucleus around which not only packages can be more effectively designed, but which is also applicable to the design of many man-machine systems.
Abstract:
In the present work we review the way in which the electron-matter interaction allows us to perform electron energy loss spectroscopy (EELS), as well as the latest developments in the technique and some of the most relevant results of EELS as a characterization tool in nanoscience and nanotechnology.
Abstract:
The development of new drug delivery systems to target the anterior segment of the eye may offer many advantages: to increase the bioavailability of the drug, to allow the penetration of drugs that cannot be formulated as solutions, to obtain constant and sustained drug release, to achieve higher local concentrations without systemic effects, to target one tissue or cell type more specifically, and to reduce the frequency of instillation, thereby increasing patient compliance and comfort while reducing the side effects of frequent instillation. Several approaches have been developed, aiming to increase the corneal contact time through modified formulations or reservoir systems, or to increase tissue permeability using iontophoresis. To date, no ocular drug delivery system is ideal for all purposes. To maximize treatment efficacy, careful evaluation of the specific pathological condition, the targeted intraocular tissue, and the location of the most severe pathology must be made before selecting the method of delivery most suitable for each individual patient.
Abstract:
In recent years, technological advances have allowed manufacturers to implement dual-energy computed tomography (DECT) on clinical scanners. With its unique ability to differentiate basis materials by their atomic number, DECT has opened new perspectives in imaging. DECT has been used successfully in musculoskeletal imaging, with applications ranging from the detection, characterization, and quantification of crystal and iron deposits to the simulation of noncalcium images (improving the visualization of bone marrow lesions) or noniodine images. Furthermore, the data acquired with DECT can be postprocessed to generate monoenergetic images at varying kiloelectron volt levels, providing new methods for image contrast optimization as well as metal artifact reduction. The first part of this article reviews the basic principles and technical aspects of DECT, including radiation dose considerations. The second part focuses on applications of DECT in musculoskeletal imaging, including gout and other crystal-induced arthropathies, virtual noncalcium images for the study of bone marrow lesions, the study of collagenous structures, applications in computed tomography arthrography, and the detection of hemosiderin and metal particles.
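The basis-material differentiation and virtual monoenergetic imaging mentioned above reduce, per detector ray, to solving a small linear system. The attenuation coefficients and measurements below are illustrative assumptions chosen for arithmetic clarity, not physical values; only the two-energy / two-material decomposition scheme is from the text.

```python
import numpy as np

# Hypothetical attenuation coefficients for two basis materials
# (water, iodine) at the two tube energies; illustrative values only.
mu = np.array([[0.20, 4.0],   # low-kVp:  [water, iodine]
               [0.18, 2.0]])  # high-kVp: [water, iodine]

# Measured attenuation line integrals at the two energies for one ray.
measured = np.array([1.5, 1.1])

# Basis material decomposition: solve the 2x2 system
#   mu @ densities = measured
# for the water- and iodine-equivalent path densities.
densities = np.linalg.solve(mu, measured)

# A virtual monoenergetic value is then a linear recombination of the two
# basis densities using the coefficients at the chosen keV (assumed here).
mu_70kev = np.array([0.19, 3.0])
mono_70 = mu_70kev @ densities
print(densities, mono_70)
```

Zeroing one basis coefficient in the recombination step is, in this simplified picture, what produces virtual noncalcium or noniodine images.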
Abstract:
Two technical solutions, using either a single or a dual exposure, offer different advantages and disadvantages for dual-energy subtraction. Their principles are explained, and the main clinical applications are demonstrated with results. The elimination of overlying bone and the confirmation or exclusion of calcification are the primary aims of energy subtraction chest radiography, offering unique information in different clinical situations.