909 results for Concept-based Terminology
Abstract:
Single-cell functional proteomics assays can connect genomic information to biological function through quantitative and multiplex protein measurements. Tools for single-cell proteomics have developed rapidly over the past five years and are providing unique opportunities. This thesis describes an emerging microfluidics-based toolkit for single-cell functional proteomics, focusing on the development of single-cell barcode chips (SCBCs) with applications in fundamental and translational cancer research.
The discussion begins with a microchip designed to simultaneously quantify a panel of secreted, cytoplasmic, and membrane proteins from single cells; it is the prototype for subsequent, more sophisticated proteomic microchips used in preclinical cancer research and clinical applications. The SCBCs are a highly versatile and information-rich tool for single-cell functional proteomics. They are based upon isolating individual cells, or a defined number of cells, within microchambers, each of which is equipped with a large antibody microarray (the barcode); between a few hundred and ten thousand microchambers are included within a single microchip. Functional proteomics assays at single-cell resolution yield unique pieces of information that significantly shape thinking in cancer research. An in-depth discussion of the analysis and interpretation of this unique information, such as functional protein fluctuations and protein-protein correlative interactions, will follow.
The SCBC is a powerful tool for resolving the functional heterogeneity of cancer cells. It has the capacity to extract a comprehensive picture of the signal transduction network from single tumor cells and thus provides insight into the effect of targeted therapies on protein signaling networks. We demonstrate this point by applying SCBCs to three isogenic cell lines of glioblastoma multiforme (GBM).
The cancer cell population is highly heterogeneous, with high-amplitude fluctuations at the single-cell level that in turn confer robustness on the entire population. The notion of a stable population existing in the presence of random fluctuations is reminiscent of many physical systems that are successfully understood using statistical physics, so tools from that field can plausibly be applied, using the fluctuations themselves to determine the nature of the signaling networks. In the second part of the thesis, we focus on such a case, using thermodynamics-motivated principles to understand cancer cell hypoxia: single-cell proteomics assays coupled with a quantitative version of Le Chatelier's principle derived from statistical mechanics yield detailed and surprising predictions, which were found to be correct in both a cell line and a primary tumor model.
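In broad strokes, quantitative Le Chatelier statements of this kind are linear-response relations: the shift in mean protein levels caused by a small perturbation is governed by the covariance of the unperturbed single-cell fluctuations. A minimal sketch of such a relation, with the notation assumed here rather than quoted from the thesis:

```latex
% Hedged sketch of a linear-response (quantitative Le Chatelier) relation;
% the notation is an assumption, not taken from the thesis.
\[
  \langle \delta x_i \rangle \;\approx\; \sum_j \Sigma_{ij}\,\lambda_j ,
  \qquad
  \Sigma_{ij} = \langle x_i x_j \rangle - \langle x_i \rangle \langle x_j \rangle ,
\]
% x_i      : level of protein i measured in a single cell
% lambda_j : small generalized force conjugate to protein j introduced by the
%            perturbation (e.g., a change in oxygen partial pressure)
% Sigma    : protein-protein covariance matrix of the unperturbed population
```

In this picture, the same fluctuations that make the population heterogeneous also predict how its averages shift when conditions change.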
The third part of the thesis demonstrates the application of this technology in preclinical cancer research to study GBM cancer cell resistance to molecular targeted therapy. Physical approaches to anticipating therapy resistance and identifying effective therapy combinations will be discussed in detail. Our approach is based upon elucidating the signaling coordination within the phosphoprotein signaling pathways that are hyperactivated in human GBMs, and interrogating how that coordination responds to perturbation by targeted inhibitors. Most signaling cascades are built from strongly coupled protein-protein interactions. A physical analogy for such a system is the strongly coupled atom-atom interactions in a crystal lattice. Just as the atomic interactions can be decomposed into a series of independent normal vibrational modes, a simplified picture of signaling network coordination can be obtained by diagonalizing protein-protein correlation or covariance matrices, decomposing the pairwise correlative interactions into a set of distinct linear combinations of signaling proteins (i.e., independent signaling modes). By doing so, two independent signaling modes, one associated with mTOR signaling and a second associated with ERK/Src signaling, were resolved, which in turn allowed us to anticipate resistance, to design effective combination therapies, and to identify those therapies and therapy combinations that would be ineffective. We validated our predictions in mouse tumor models, and all predictions were borne out.
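Operationally, the mode decomposition described above is an eigendecomposition of the measured covariance matrix. The sketch below is a minimal illustration of that step (not the thesis code); the protein panel, variable names, and synthetic data are assumptions.

```python
# Minimal sketch: extract "independent signaling modes" from single-cell
# protein measurements by diagonalizing the protein-protein covariance
# matrix, in the spirit of the normal-mode analogy described above.
import numpy as np

def signaling_modes(levels: np.ndarray, protein_names: list[str]):
    """levels: (n_cells, n_proteins) matrix of single-cell protein levels."""
    # Covariance of protein-protein fluctuations across the cell population.
    cov = np.cov(levels, rowvar=False)

    # Diagonalize: eigenvectors are linear combinations of proteins
    # (the "modes"); eigenvalues are the variance each mode carries.
    eigvals, eigvecs = np.linalg.eigh(cov)

    # Report modes from largest to smallest variance.
    order = np.argsort(eigvals)[::-1]
    for rank, idx in enumerate(order, start=1):
        weights = eigvecs[:, idx]
        top = sorted(zip(protein_names, weights), key=lambda p: -abs(p[1]))[:3]
        print(f"mode {rank}: variance={eigvals[idx]:.3g}, "
              f"dominant proteins={[(n, round(w, 2)) for n, w in top]}")
    return eigvals[order], eigvecs[:, order]

# Example with synthetic data for an assumed phosphoprotein panel.
rng = np.random.default_rng(0)
panel = ["p-mTOR", "p-ERK", "p-Src", "p-S6K"]
data = rng.lognormal(mean=2.0, sigma=0.5, size=(500, len(panel)))
signaling_modes(data, panel)
```

Each eigenvector is one independent signaling mode (a linear combination of proteins), and its eigenvalue is the variance that mode carries across the cell population.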
In the last part, some preliminary results on the clinical translation of single-cell proteomics chips will be presented. The successful demonstration of our work on human-derived xenografts provides the rationale for extending our current work into the clinic. It will enable us to interrogate GBM tumor samples in a way that could potentially yield a straightforward, rapid interpretation, so that we can give therapeutic guidance to the attending physicians within a clinically relevant time scale. The technical challenges of clinical translation will be presented, and our solutions to address them will be discussed as well. A clinical case study will then follow, in which preliminary data collected from a pediatric GBM patient bearing an EGFR-amplified tumor will be presented to demonstrate the general protocol and workflow of the proposed clinical studies.
Abstract:
A new concept for fluid flow manipulation in microfluidic paper-based analytical devices (μPADs) is presented, introducing ionogel materials as passive pumps. The μPADs were fabricated using a new double-sided contact stamping process, and ionogels were precisely photopolymerised at the inlet of the μPADs. The ionogels remain mainly on the surface of the paper and are absorbed into the superficial paper fibers, allowing liquid to flow easily from the ionogel into the paper. As a proof of concept, the fluid flow and mixing behaviour of μPADs with two different ionogels were compared with non-treated μPADs. It was demonstrated that both ionogels strongly affect the fluid flow, delaying it in accordance with their different physical and chemical properties and water-holding capacities.
Abstract:
The new generation of artificial satellites is providing a huge volume of Earth observation images whose exploitation can yield invaluable benefits, both economic and environmental. However, only a small fraction of this data volume has been analyzed, mainly because of the large human effort the task requires. In this sense, the development of unsupervised methodologies for analyzing these images is a priority. In this work, a new unsupervised segmentation algorithm for satellite images is proposed. This algorithm is based on rough-set theory and is inspired by a previous segmentation algorithm defined in the RGB color domain. The main contributions of the new algorithm are: (i) the original algorithm is extended to four spectral bands; (ii) the concept of the superpixel is used to define the neighborhood similarity of a pixel adapted to the local characteristics of each image; and (iii) two new region-merging strategies are proposed and evaluated in order to establish the final number of regions in the segmented image. The experimental results show that the proposed approach improves the results of the original method when both are applied to satellite images with different spectral and spatial resolutions.
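As a rough illustration of the kind of region-merging step mentioned in contribution (iii) (not the authors' algorithm), the sketch below assumes superpixel labels have already been computed for a four-band image and merges adjacent superpixels whose mean spectra are closer than a threshold; the array shapes, the threshold, and all names are illustrative assumptions.

```python
# Hedged sketch: merge adjacent superpixels of a 4-band image when their
# mean spectra are similar. Assumes `img` (H, W, 4) reflectance values and
# `labels` (H, W) contiguous superpixel labels 0..n-1 already exist.
import numpy as np

def merge_superpixels(img: np.ndarray, labels: np.ndarray, thresh: float = 0.05):
    n = labels.max() + 1
    # Mean 4-band spectrum of each superpixel.
    means = np.array([img[labels == i].mean(axis=0) for i in range(n)])

    # Adjacency pairs from horizontal/vertical neighbours with different labels.
    pairs = set()
    for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
        diff = a != b
        pairs.update(zip(a[diff].ravel(), b[diff].ravel()))

    # Union-find: merge neighbours whose spectral distance is below the threshold.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in pairs:
        if np.linalg.norm(means[i] - means[j]) < thresh:
            parent[find(i)] = find(j)

    return np.vectorize(find)(labels)  # final merged region map

# e.g. merged = merge_superpixels(img, labels, thresh=0.05)
```

Other merging strategies, for example repeatedly merging the most similar pair of neighbouring regions until a target region count is reached, fit the same skeleton.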
Abstract:
This study explores the effects of modeling instruction on student learning in physics. Multiple representations grounded in physical contexts were employed by students to analyze the results of inquiry lab investigations. Class whiteboard discussions geared toward a class consensus following Socratic dialogue were implemented throughout the modeling cycle. Lab investigations designed to address student preconceptions related to Newton’s Third Law were implemented. Student achievement was measured based on normalized gains on the Force Concept Inventory. Normalized FCI gains achieved by students in this study were comparable to those achieved by students of other novice modelers. Physics students who had taken a modeling Intro to Physics course scored significantly higher on the FCI posttest than those who had not. The FCI results also provided insight into deeply rooted student preconceptions related to Newton’s Third Law. Implications for instruction and the design of lab investigations related to Newton’s Third Law are discussed.
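For reference, the normalized gain referred to here is presumably the standard Hake gain computed from pre- and post-test FCI percentages (stated as an assumption, since the abstract does not define it):

```latex
% Hake normalized gain (assumed definition)
\[
  \langle g \rangle \;=\;
  \frac{\langle \%\,\text{post} \rangle - \langle \%\,\text{pre} \rangle}
       {100 - \langle \%\,\text{pre} \rangle}
\]
```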
Abstract:
Intracochlear trauma from the surgical insertion of bulky electrode arrays and inadequate pitch perception are areas of concern with current hand-assembled commercial cochlear implants. Parylene thin-film arrays with higher electrode densities and lower profiles are a potential solution, but they lack rigidity and hence depend on manually fabricated, permanently attached bulky backing devices based on polyethylene terephthalate (PET) tubing. As a solution, we investigated a new backing device with two sub-systems. The first sub-system is a thin poly(lactic acid) (PLA) stiffener that will be embedded in the parylene array. The second sub-system is an attaching and detaching mechanism, utilizing a poly(N-vinylpyrrolidone)-block-poly(d,l-lactide) (PVP-b-PDLLA) copolymer-based biodegradable and water-soluble adhesive, that will help to retract the PET insertion tool after implantation. As a proof of concept of sub-system one, a microfabrication process for patterning PLA stiffeners embedded in parylene has been developed. Conventional hot embossing, mechanical micromachining, and standard cleanroom processes were integrated for patterning fully released and discrete stiffeners coated with parylene. The released embedded stiffeners were thermoformed to demonstrate that imparting perimodiolar shapes to stiffener-embedded arrays will be possible. The developed process, when integrated with the array fabrication process, will allow fabrication of stiffener-embedded arrays in a single process. As a proof of concept of sub-system two, the feasibility of the attaching and detaching mechanism was demonstrated by adhering 1x and 1.5x scale PET tube-based insertion tools to PLA stiffeners embedded in parylene using the copolymer adhesive. The attached devices survived qualitative adhesion tests, thermoforming, and flexing. The viability of the detaching mechanism was tested by aging the assemblies in vitro in phosphate buffer solution. The average detachment times, 2.6 minutes and 10 minutes for the 1x and 1.5x scale devices respectively, were found to be clinically relevant with respect to the reported array insertion times during surgical implantation. Eventually, stiffener-embedded arrays would not need to be permanently attached to current insertion tools, which are left behind after implantation and congest the cochlear scala tympani chamber. Finally, a simulation-based approach for accelerated failure analysis of PLA stiffeners and characterization of the PVP-b-PDLLA copolymer adhesive has been explored. The residual functional life of embedded PLA stiffeners exposed to body fluid, and thereby subjected to degradation and erosion, has been estimated by simulating PLA stiffeners with different parylene coating failure types and different PLA types for a given parylene coating failure type. For characterizing the PVP-b-PDLLA copolymer adhesive, several formulations of the adhesive were simulated and compared based on the insertion tool detachment times predicted from the dissolution, degradation, and erosion behavior of the simulated adhesive formulations. Results indicate that the simulation-based approaches could be used to reduce the total number of time-consuming and expensive in vitro tests that must be conducted.
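As a generic illustration of the kind of residual-life estimate described above (not the degradation models used in this work), PLA hydrolysis is often approximated by pseudo-first-order decay of the number-average molecular weight, with loss of function assumed once it falls below a critical value. All parameter values below are placeholders, not values from the thesis.

```python
# Hedged toy model: pseudo-first-order decay of PLA number-average molecular
# weight, Mn(t) = Mn0 * exp(-k * t); the stiffener is assumed to lose function
# once Mn drops below a critical value. Parameter values are placeholders.
import math

def residual_life_days(mn0: float, mn_crit: float, k_per_day: float) -> float:
    """Time for Mn to decay from mn0 to mn_crit under first-order hydrolysis."""
    return math.log(mn0 / mn_crit) / k_per_day

# Example with illustrative numbers only.
print(residual_life_days(mn0=100_000, mn_crit=20_000, k_per_day=0.01))  # ~161 days
```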
Abstract:
Traditional engineering design methods are based on Simon's (1969) use of the concept of function, and as such collectively suffer from both theoretical and practical shortcomings. Researchers in the field of affordance-based design have borrowed from ecological psychology in an attempt to address the blind spots of function-based design, developing alternative ontologies and design processes. This dissertation presents function and affordance theory as both compatible and complementary. We first present a hybrid approach to design for technology change, followed by a reconciliation and integration of function and affordance ontologies for use in design. We explore the integration of a standard function-based design method with an affordance-based design method, and demonstrate how affordance theory can guide the early application of function-based design. Finally, we discuss the practical and philosophical ramifications of embracing affordance theory's roots in ecology and ecological psychology, and explore the insights and opportunities made possible by an ecological approach to engineering design. The primary contribution of this research is the development of an integrated ontology for describing and designing technological systems using both function- and affordance-based methods.
Abstract:
Adaptations of novels and stories, as well as versions of dramatic texts that need to be translated and updated for modern audiences, are quite frequent in today's theatre. This study aims to show the state of contemporary stage adaptation of narrative texts and, specifically, its evolution in Spain over the last forty years (1972-2012). To do this, I have first tried to gather all the terminology associated with the concept of stage adaptation: version, dramaturgy, rewriting, translation, interpretation, updating and consolidation. The theoretical part of the work begins with the various definitions of the concept of dramatization. All the positions expressed by theorists and specialists in the field converge when explaining the term adaptation or theatre version: the intervention on the original text is based on its transformation or change, radical or superficial, for effective representation in the theatre. In contemporary times, the concept of adaptation applies to any kind of intervention, from translation of the original (and rewriting) to the dramaturgical work involved in creating a new sense. In turn, any theatre adaptation requires a dramaturgical operation and admits all possible moves: reorganization of the story, breakage, reduction of characters, dramatic concentration, incorporation of foreign texts, montage and collage, changes to the plot, etc. Although there is no definitive model for the theatre adaptation of works, several authors and theatre theorists propose guidelines and types of adaptation for the transformation of one work into another or of one genre into a different one and, regarding narrative texts, provide criteria for interpreting the original text. The issue for many authors is the danger of modifying or betraying the sense or the form of the original text by treating it as mere material for the play. Finally, it follows that there is an affinity of thought among authors in finding that there is no differentiation between adaptation and version: both terms refer to the same thing in the theatrical event and are also used equally for the countless film adaptations of novels and plays...
Abstract:
The SpicA FAR-infrared Instrument, SAFARI, is one of the instruments planned for the SPICA mission. The SPICA mission is the next great leap forward in space-based far-infrared astronomy and will study the evolution of galaxies, stars and planetary systems. SPICA will utilize a deeply cooled 2.5 m-class telescope, provided by European industry, to realize zodiacal-background-limited performance and high spatial resolution. SAFARI is a cryogenic grating-based point-source spectrometer working in the wavelength domain from 34 to 230 μm, providing a spectral resolving power from 300 to at least 2000. The instrument shall provide low- and high-resolution spectroscopy in four spectral bands. The low-resolution mode is the native instrument mode, while the high-resolution mode is achieved by means of a Martin-Puplett interferometer. The optical system is all-reflective and consists of three main modules: an input optics module, followed by the band- and mode-distributing optics and the grating modules. The instrument utilizes Nyquist-sampled filled linear arrays of very sensitive TES detectors. The work presented in this paper describes the optical design architecture and a design concept compatible with the current instrument performance and volume design drivers.
Abstract:
Selling devices in retail stores comes with the big challenge of grabbing the customer's attention. Nowadays people have a lot of offers at their disposal, and new marketing techniques must emerge to differentiate products. When it comes to smartphones and tablets, the devices can make the difference by themselves if we use their computing power and capabilities to create something unique and interactive. With that in mind, three prototypes were developed during an internship: a face-recognition-based Customer Detection, a face-tracking solution with an Avatar, and interactive cross-app Guides. All three revealed potential to be differentiating solutions in a retail store, not only raising the chance of a customer taking notice of the device but also of interacting with it to learn more about its features. The results were meant to be only proofs of concept and therefore were not tested in the real world.
Abstract:
Security defects are common in large software systems because of their size and complexity. Although efficient development processes, testing, and maintenance policies are applied to software systems, a large number of vulnerabilities can still remain despite these measures. Some vulnerabilities stay in a system from one release to the next because they cannot be easily reproduced through testing. These vulnerabilities endanger the security of the systems. We propose vulnerability classification and prediction frameworks based on vulnerability reproducibility. The frameworks are effective at identifying the types and locations of vulnerabilities at an earlier stage and at improving the security of the software in the next versions (referred to as releases). We expand an existing concept of software bug classification to vulnerability classification (easily reproducible versus hard to reproduce) to develop a classification framework for differentiating between these vulnerabilities based on code fixes and textual reports. We then investigate potential correlations between the vulnerability categories and classical software metrics, as well as other runtime environmental factors of reproducibility, to develop a vulnerability prediction framework. The classification and prediction frameworks help developers adopt corresponding mitigation or elimination actions and develop appropriate test cases. The vulnerability prediction framework also helps security experts focus their effort on the top-ranked vulnerability-prone files. As a result, the frameworks decrease the number of attacks that exploit security vulnerabilities in the next versions of the software. To build the classification and prediction frameworks, different machine learning techniques (C4.5 Decision Tree, Random Forest, Logistic Regression, and Naive Bayes) are employed. The effectiveness of the proposed frameworks is assessed based on collected software security defects of Mozilla Firefox.
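As a minimal sketch of what the textual-report side of such a classification framework could look like (not the authors' implementation), the example below feeds TF-IDF features from report text into one of the classifier families named above; the labels, report contents, and toy data are assumptions, and the actual frameworks also use code-fix information and software metrics.

```python
# Hedged sketch: classify vulnerability reports as easily reproducible vs.
# hard to reproduce from their textual descriptions. Toy data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

reports = [
    "crash reproduced with attached fuzz input on every run",
    "segfault triggered deterministically by the regression test",
    "intermittent use-after-free only under heavy concurrent load",
    "race condition, could not reproduce outside production",
]
labels = ["easy", "easy", "hard", "hard"]  # one label per report

X_train, X_test, y_train, y_test = train_test_split(
    reports, labels, test_size=0.5, random_state=0, stratify=labels)

clf = make_pipeline(TfidfVectorizer(), RandomForestClassifier(n_estimators=200))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

The same pipeline shape accommodates the other classifier families mentioned (decision tree, logistic regression, naive Bayes) by swapping the final estimator.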
Abstract:
This study measured healthcare staff's perception of patient safety culture in a first-level (primary care) hospital by means of a descriptive cross-sectional study. The Spanish version of the Hospital Survey on Patient Safety Culture (HSOPSC) from the Agency for Healthcare Research and Quality (AHRQ), which evaluates twelve dimensions, was used as the measurement instrument. The results showed strengths such as organizational learning, continuous improvement, and management support for patient safety. The dimensions classified as opportunities for improvement were the non-punitive culture, staffing, handoffs and transitions, and the degree to which communication is open. It was concluded that although staff perceived the improvement process and management support positively, they also felt they were judged if they reported an adverse event.
Abstract:
Leadership studies have addressed the interaction between the individual identified as the leader and his or her followers. Within this relationship, the leader's skills and his or her impact as a coach have been studied. Today there are countless studies of and approaches to the term coaching: its concept, theoretical frameworks, models, etc. This article carries out a research process in which coaching is defined from the point of view of various authors, experts, and managers working in the business world, in order to find a definition that encompasses the dimensions of the organizational world. Next, a systematic search of definitions of coaching is conducted, and from this search an integrative definition is proposed that accounts for the various domains of leadership research. On reviewing the terminology of leadership and coaching, together with their direct relationship, there is no definition that truly covers the whole organizational subject these two words imply.
Abstract:
Conceptual interpretation of languages has attracted considerable interest in the world of artificial intelligence. The challenge of modeling the various complications involved in a language is the main motivation behind our work. Our main focus in this work is to develop a conceptual graphical representation for image captions. We use discourse representation structures to obtain semantic information, which is further modeled into a graphical structure. The effectiveness of the model is evaluated with a caption-based image retrieval system. Image retrieval is performed by computing subgraph-based similarity measures. The best retrievals were given an average rating of . ± . out of 4 by a group of 25 human judges. The experiments were performed on a subset of the SBU Captioned Photo Dataset. The purpose of this work is to establish the cognitive sensibility of the approach to caption representations.
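As a toy illustration of a subgraph-overlap style of similarity between two caption graphs (not the measure used in this work), the example below scores two small concept graphs by the Jaccard overlap of their labelled edges; the graphs, relation labels, and the measure itself are assumptions.

```python
# Hedged toy example: a crude graph-overlap similarity between two caption
# graphs, using Jaccard overlap of labelled edges as a stand-in for a
# subgraph-based measure. Graph contents are invented.
import networkx as nx

def edge_jaccard(g1: nx.DiGraph, g2: nx.DiGraph) -> float:
    e1 = {(u, v, d.get("rel")) for u, v, d in g1.edges(data=True)}
    e2 = {(u, v, d.get("rel")) for u, v, d in g2.edges(data=True)}
    return len(e1 & e2) / len(e1 | e2) if (e1 | e2) else 0.0

# query: "a dog runs on the beach"
g_query = nx.DiGraph()
g_query.add_edge("dog", "run", rel="agent")
g_query.add_edge("run", "beach", rel="location")

# candidate caption: "a dog runs on the sand"
g_caption = nx.DiGraph()
g_caption.add_edge("dog", "run", rel="agent")
g_caption.add_edge("run", "sand", rel="location")

print(edge_jaccard(g_query, g_caption))  # 1 shared edge of 3 total -> ~0.33
```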