916 results for Searching and sorting
Abstract:
Fruits of two varieties each of apple and pear were tested to measure their response to small-energy impacts applied by an impact tester; two spherical tips of equal mass but different radii of curvature (RA = 2.48 cm and RB = 0.98 cm) were used. In all four varieties studied, the bruise was smaller with spherical tip RA than with tip RB. The non-destructive impact test would thus cause less damage with a spherical impactor of radius greater than 0.98 cm.
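A back-of-the-envelope check of this finding is possible under quasi-static Hertzian contact theory, assuming the fruit behaves as an elastic half-space. The sketch below is illustrative only; the effective elastic modulus and impact energy are hypothetical values, not taken from the study.

```python
import math

def hertz_peak_pressure(energy_j, radius_m, e_star_pa):
    """Peak contact pressure for a rigid sphere pressed into an elastic
    half-space with a given quasi-static impact energy (Hertz theory)."""
    # Energy stored at maximum indentation: U = (8/15) * E* * sqrt(R) * d^(5/2)
    d_max = (15.0 * energy_j / (8.0 * e_star_pa * math.sqrt(radius_m))) ** 0.4
    # Maximum force: F = (4/3) * E* * sqrt(R) * d^(3/2)
    f_max = (4.0 / 3.0) * e_star_pa * math.sqrt(radius_m) * d_max ** 1.5
    # Contact radius a = sqrt(R * d); peak pressure p0 = 3F / (2 * pi * a^2)
    a = math.sqrt(radius_m * d_max)
    return 3.0 * f_max / (2.0 * math.pi * a ** 2)

E_STAR = 3e6   # hypothetical effective modulus for fruit flesh, Pa
ENERGY = 0.02  # hypothetical impact energy, J
for label, r_cm in [("RA", 2.48), ("RB", 0.98)]:
    p0 = hertz_peak_pressure(ENERGY, r_cm / 100.0, E_STAR)
    print(f"{label} (R = {r_cm} cm): peak pressure ~ {p0 / 1e6:.2f} MPa")
```

Since the peak pressure scales as R^(-3/5) at fixed impact energy under these assumptions, the larger tip RA produces a lower peak pressure, consistent with the smaller bruises reported.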
Abstract:
Non-destructive measurement of fruit quality has been an important objective in recent years (Abbott, 1999). Near-infrared (NIR) spectroscopy is applicable to the quantification of chemicals in foods, and NIR "laser spectroscopy" can be used to estimate the firmness of fruits. However, the main limitation of current optical techniques that measure light transmission is that they do not account for the coupling between absorption and scattering inside the tissue when quantifying the intensity of re-emitted light. Overcoming this limitation was the goal of the present work.
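The absorption-scattering coupling can be made concrete with the diffusion-approximation result that deep-tissue attenuation is governed by the effective attenuation coefficient mu_eff = sqrt(3 * mu_a * (mu_a + mu_s')). The sketch below, using hypothetical optical coefficients, shows two very different tissues that are indistinguishable by transmitted intensity alone:

```python
import math

def mu_eff(mu_a, mu_s_prime):
    """Effective attenuation coefficient from diffusion theory (1/cm)."""
    return math.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))

# Two hypothetical tissues: low absorption / high scattering vs.
# high absorption / low scattering (coefficients in 1/cm).
tissue_1 = (0.10, 10.00)  # (mu_a, mu_s')
tissue_2 = (0.20, 4.85)

for mu_a, mu_s in (tissue_1, tissue_2):
    print(f"mu_a={mu_a:.2f}, mu_s'={mu_s:.2f} -> mu_eff={mu_eff(mu_a, mu_s):.4f}")
# Both print mu_eff = 1.7407: intensity-only measurements cannot
# separate the absorption and scattering contributions.
```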
Abstract:
Neighbourhood representation and the scale used to measure the built environment have been treated in many ways. However, the existing literature is far from clear on which representation of neighbourhood is most appropriate. This paper presents an exhaustive analysis of built environment attributes across three spatial scales. For this purpose, multiple data sources are integrated and a set of 943 observations is analysed. The paper simultaneously analyses the influence of two methodological issues in the study of the relationship between the built environment and travel behaviour: (1) the detailed representation of neighbourhood, by testing different spatial scales; and (2) the influence of unobserved individual sensitivity to built environment attributes. The results show that different spatial scales of built environment attributes produce different results; hence, it is important to design local and regional transport measures according to geographical scale. Additionally, the results show significant sensitivity to built environment attributes depending on place of residence. This effect, called residential sorting, acquires different magnitudes depending on the geographical scale used to measure the built environment attributes. Spatial scales thus pose a risk to the stability of model results. Hence, transportation modellers and planners must take into account both the effects of self-selection and spatial scales.
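A minimal sketch of what "testing different spatial scales" can mean in practice: computing the same built-environment attribute (here, point-of-interest density) within buffers of increasing radius around each observation. The coordinates, radii, and the density attribute are hypothetical placeholders, not the paper's variables.

```python
import numpy as np

rng = np.random.default_rng(0)
homes = rng.uniform(0, 10_000, size=(943, 2))   # residence coords, metres
pois = rng.uniform(0, 10_000, size=(5_000, 2))  # points of interest

def poi_density(homes, pois, radius_m):
    """POI count per km^2 within a circular buffer around each home."""
    # Pairwise distances (fine at this size; use a spatial index for large data).
    d = np.linalg.norm(homes[:, None, :] - pois[None, :, :], axis=2)
    counts = (d <= radius_m).sum(axis=1)
    area_km2 = np.pi * (radius_m / 1000.0) ** 2
    return counts / area_km2

# The same attribute measured at three spatial scales gives different values:
for r in (500, 1000, 2500):
    dens = poi_density(homes, pois, r)
    print(f"radius {r:>4} m: mean density {dens.mean():7.1f} POI/km^2")
```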
Abstract:
In order to minimize car-based trips, transport planners have been particularly interested in understanding the factors that explain modal choices. In the transport modelling literature there has been an increasing awareness that socioeconomic attributes and quantitative variables are not sufficient to characterize travellers and forecast their travel behaviour. Recent studies have also recognized that users' social interactions and land use patterns influence travel behaviour, especially when changes to transport systems are introduced, but links between international and Spanish perspectives are rarely addressed. In this paper, factorial and path analyses through a Multiple-Indicator Multiple-Cause (MIMIC) model are used to understand and describe the relationships between different psychological and environmental constructs, social influence, and socioeconomic variables. The MIMIC model generates Latent Variables (LVs) that are incorporated sequentially into Discrete Choice Models (DCM), where the level-of-service and cost attributes of travel modes are also included directly to measure the effect of the transport policies introduced in Madrid during the last three years in the context of the economic crisis. The data used in this paper were collected from a two-wave smartphone-based panel survey of Madrid (n = 255 and 190 respondents, respectively).
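A highly simplified sketch of the sequential approach described here: a latent variable is first constructed from attitudinal indicators (a crude factor-score step standing in for the MIMIC model), then entered into the utility of a binary logit alongside level-of-service attributes. All variable names, coefficients, and data below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 255  # size of the first survey wave

# Step 1: build a latent "pro-environment attitude" score from three
# Likert-type indicators (a stand-in for the MIMIC measurement model).
indicators = rng.integers(1, 6, size=(n, 3)).astype(float)
z = (indicators - indicators.mean(0)) / indicators.std(0)
latent = z.mean(axis=1)  # simple factor score

# Step 2: enter the latent variable into a binary logit (metro vs. car),
# together with directly measured cost and time attributes.
cost_diff = rng.normal(1.5, 0.5, n)    # cost_metro - cost_car, euros
time_diff = rng.normal(-10.0, 5.0, n)  # time_metro - time_car, minutes
beta = {"asc": 0.2, "cost": -0.8, "time": -0.05, "latent": 0.6}

v = (beta["asc"] + beta["cost"] * cost_diff
     + beta["time"] * time_diff + beta["latent"] * latent)
p_metro = 1.0 / (1.0 + np.exp(-v))
print(f"mean predicted metro share: {p_metro.mean():.2f}")
```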
Abstract:
Nanotechnology is a research area of recent development that deals with the manipulation and control of matter at dimensions ranging from 1 to 100 nanometers. At the nanoscale, materials exhibit singular physical, chemical and biological phenomena, very different from those manifested at the conventional scale. In medicine, nanosized compounds and nanostructured materials offer improved drug targeting and efficacy with respect to traditional formulations, and reveal novel diagnostic and therapeutic properties. Nevertheless, the complexity of information at the nano level is much higher than at the conventional biological levels (from populations down to the cell), so any nanomedical research workflow inherently demands advanced information management. Unfortunately, Biomedical Informatics (BMI) has not yet provided the framework needed to deal with such information challenges, nor adapted its methods and tools to this new research field.
In this context, the novel area of nanoinformatics aims to build new bridges between medicine, nanotechnology and informatics, allowing the application of computational methods to solve informational issues at the wide intersection between biomedicine and nanotechnology. These observations determine the context of this doctoral dissertation, which focuses on analyzing the nanomedical domain in depth and on developing nanoinformatics strategies and tools to map across disciplines, data sources, computational resources, and information extraction and text mining techniques, in order to leverage available nanomedical data. The author analyzes, through real-life case studies, research tasks in nanomedicine that require or could benefit from nanoinformatics methods and tools, illustrating present drawbacks and limitations of BMI approaches when dealing with data belonging to the nanomedical domain. Three scenarios, comparing the biomedical and nanomedical contexts, are discussed as examples of activities that researchers perform while conducting their research: i) searching the Web for data sources and computational resources supporting their research; ii) searching the literature for experimental results and publications related to their research; and iii) searching clinical trial registries for clinical results related to their research. These activities depend on informatics tools and services such as web browsers, databases of citations and abstracts indexing the biomedical literature, and web-based clinical trial registries, respectively.
For each scenario, this document provides a detailed analysis of the potential information barriers that could hamper the successful development of the different research tasks in both fields (biomedicine and nanomedicine), emphasizing the existing challenges for nanomedical research, where the major barriers have been found. The author illustrates how the application of BMI methodologies to these scenarios proves successful in the biomedical domain, whereas these methodologies present severe limitations when applied to the nanomedical context. To address such limitations, the author proposes an original nanoinformatics approach specifically designed to deal with the special characteristics of information at the nano level. This approach consists of an in-depth analysis of the scientific literature and of available clinical trial registries to extract relevant information about experiments and results in nanomedicine (textual patterns, common vocabulary, experiment descriptors, characterization parameters, etc.), followed by the development of mechanisms to automatically structure and analyze this information. The analysis resulted in the generation of a gold standard (a manually annotated training and reference set), which was applied to the automatic classification of clinical trial summaries, distinguishing studies focused on nanodrugs and nanodevices from those aimed at testing traditional pharmaceuticals. The present work aims to provide the methods necessary for organizing, curating, filtering and validating existing nanomedical data on a scale suitable for decision-making.
Similar analyses of other nanomedical research tasks would help to detect which nanoinformatics resources are required to meet current goals in the field, as well as to generate densely populated, machine-interpretable reference datasets from the literature and other unstructured sources, on which novel algorithms can be tested to infer new, valuable information for nanomedicine.
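The classification step described above can be illustrated with a minimal text-classification pipeline: TF-IDF features and logistic regression over trial summaries. The toy summaries and labels below are invented for illustration; the dissertation's actual gold standard, features, and classifier may differ.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: 1 = nanodrug/nanodevice trial, 0 = traditional pharmaceutical.
summaries = [
    "liposomal doxorubicin nanoparticle formulation for solid tumors",
    "albumin-bound paclitaxel nanoparticles in metastatic breast cancer",
    "gold nanoshell mediated photothermal ablation device study",
    "oral metformin versus placebo in type 2 diabetes",
    "randomized trial of atorvastatin for cholesterol lowering",
    "standard aspirin dosing in secondary stroke prevention",
]
labels = [1, 1, 1, 0, 0, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(summaries, labels)

test = ["phase I study of polymeric nanoparticle drug carrier"]
print(clf.predict(test))  # expected: [1]
```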
Abstract:
There is controversy regarding the use of the similarity functions proposed in the literature to compare generalized trapezoidal fuzzy numbers, since conflicting similarity values are sometimes output for the same pair of fuzzy numbers. In this paper we propose a similarity function aimed at establishing a consensus. It accounts for the different approaches underlying the existing similarity functions. It also has better properties and can easily incorporate new parameters for future improvements. The analysis is carried out on a large and representative set of pairs of trapezoidal fuzzy numbers.
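For context, one classical measure from this literature is Chen's (1996) distance-based similarity for trapezoidal fuzzy numbers represented by four points (a1, a2, a3, a4). The sketch below implements that baseline, not the consensus function this paper proposes:

```python
def chen_similarity(a, b):
    """Chen's similarity between two trapezoidal fuzzy numbers,
    each given as a 4-tuple (a1, a2, a3, a4) on [0, 1]."""
    return 1.0 - sum(abs(ai - bi) for ai, bi in zip(a, b)) / 4.0

A = (0.1, 0.2, 0.3, 0.4)
B = (0.2, 0.3, 0.4, 0.5)
print(chen_similarity(A, B))  # 0.9
```

Conflicting rankings arise because measures like this one ignore features (e.g., height or shape of generalized fuzzy numbers) that other proposed functions weight heavily, which is the gap a consensus function tries to close.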
Abstract:
A major challenge in the engineering of complex and critical systems is the management of change, both in the system and in its operational environment. Due to the growing complexity of systems, new approaches to autonomy must be able to detect critical changes and prevent their progression towards undesirable states. We are searching for methods to build systems that can tune their adaptability protocols: new mechanisms that use system-wellness requirements to reduce the influence of the outer domain and transfer the control of uncertainty to the inner one. From the viewpoint of cognitive systems, biological emotion suggests a strategy for configuring value-based systems to use semantic self-representations of the state. We outline a method, inspired by emotion theories, that causally connects to the inner domain of the system and its wellness objectives, focusing on dynamically adapting the system to avoid the progression of critical states. This method endows the system with a transversal mechanism to monitor its inner processes, detecting critical states and managing its adaptivity in order to maintain the wellness goals. The paper describes the current vision produced by this work in progress.
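A minimal sketch of the kind of transversal monitoring mechanism described here: a component that appraises internal state variables against wellness goals and flags critical states before they progress. The state variables, thresholds, and appraisal rule are hypothetical, intended only to make the idea concrete.

```python
from dataclasses import dataclass

@dataclass
class WellnessGoal:
    name: str
    low: float    # below this, the state is critical
    high: float   # above this, the state is critical

GOALS = [
    WellnessGoal("cpu_load", 0.0, 0.85),
    WellnessGoal("queue_depth", 0.0, 100.0),
    WellnessGoal("battery", 0.15, 1.0),
]

def appraise(state: dict) -> list:
    """Emotion-like appraisal: return the wellness goals currently violated."""
    return [g.name for g in GOALS
            if not (g.low <= state.get(g.name, g.low) <= g.high)]

state = {"cpu_load": 0.92, "queue_depth": 40, "battery": 0.10}
critical = appraise(state)
if critical:
    # Placeholder for the adaptation step: reconfigure, shed load, etc.
    print("critical states detected:", critical)
```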
Abstract:
Evaluating and measuring the pedagogical quality of Learning Objects is essential for achieving successful web-based education. On the one hand, teachers need some assurance of the quality of teaching resources before making them part of the curriculum. On the other hand, Learning Object Repositories need to include quality information in the ranking metrics used by their search engines in order to save users time when searching. For these reasons, several models such as LORI (Learning Object Review Instrument) have been proposed to evaluate Learning Object quality from a pedagogical perspective. However, not much effort has been put into defining and evaluating quality metrics based on those models. This paper proposes and evaluates a set of pedagogical quality metrics based on LORI. The work presented here shows that these metrics can be used effectively and reliably to provide quality-based sorting of search results. Moreover, it provides strong evidence that evaluating Learning Objects from a pedagogical perspective can notably enhance Learning Object search if suitable evaluation models and quality metrics are used. An evaluation of the LORI model is also described. Finally, all the presented metrics are compared, and their weaknesses and strengths are discussed.
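A minimal sketch of quality-based result sorting of the kind evaluated here: aggregating the nine LORI item ratings (each on a 1-5 scale) into a single quality score and blending it with a search engine's relevance score. The weights and the blending rule are hypothetical, not the metrics the paper defines.

```python
def lori_quality(ratings, weights=None):
    """Aggregate nine LORI item ratings (1-5) into a score in [0, 1]."""
    if weights is None:
        weights = [1.0] * 9  # equal weighting as a default assumption
    s = sum(w * r for w, r in zip(weights, ratings)) / sum(weights)
    return (s - 1.0) / 4.0  # rescale 1..5 -> 0..1

def rank(results):
    """Sort search results by a blend of relevance and pedagogical quality."""
    return sorted(results,
                  key=lambda r: 0.7 * r["relevance"] + 0.3 * r["quality"],
                  reverse=True)

results = [
    {"id": "LO-17", "relevance": 0.90, "quality": lori_quality([3] * 9)},
    {"id": "LO-42", "relevance": 0.80, "quality": lori_quality([5] * 9)},
]
print([r["id"] for r in rank(results)])  # ['LO-42', 'LO-17']
```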
Abstract:
The need for an urban transport strategy that solves the environmental problems derived from traffic without decreasing the trip attraction of urban areas is taken for granted. There is also a clear consensus among researchers and institutions on the need for integrated transport strategies (May et al., 2006; Zhang et al., 2006). But there is still a lack of knowledge about which policy measures should be implemented. This research aims to deepen the understanding of how different measures interact when implemented together: the synergies and complementarities between them. The methodological approach to achieve this objective has been a double analysis, quantitative and comprehensive, of the different impacts produced, first by each measure on its own, and then by combinations of these measures. For this analysis, we first defined the objectives to be achieved by the transport strategy (reducing emissions and noise without losing trip attraction) and then selected the measures whose effects on these objectives would be tested. This selection was based on a literature review, searching for measures which have proven successful in achieving at least one of the objectives. The different policies and policy combinations have been tested in a multimodal transport model, with the city of Madrid as the case study. The final aim of the research is to find a transport strategy that produces a positive impact on all the established objectives, that is, a win-win strategy.
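The synergy notion used here (cf. May et al., 2006) can be stated arithmetically: a combination of measures is synergetic when its joint impact exceeds the sum of the individual impacts. A minimal sketch, with hypothetical impact values on one objective (e.g., percentage emissions reduction):

```python
def synergy(impact_a, impact_b, impact_combined):
    """Positive -> synergy; zero -> additivity; negative -> the measures
    partly cancel each other out (all impacts on the same objective)."""
    return impact_combined - (impact_a + impact_b)

# Hypothetical emission reductions (%) from the multimodal model runs:
parking_pricing = 3.0
bus_priority = 2.0
combined = 6.5

print(f"synergy: {synergy(parking_pricing, bus_priority, combined):+.1f} pp")
# +1.5 pp: the combined package outperforms the sum of its parts.
```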
Abstract:
Background: This project arose from the need of the professors of the Department of Computer Languages and Systems and Software Engineering (DLSIIS) to develop exams with multiple-choice questions in a more productive and comfortable way than the one they had been using. The goal of this project is to develop an application that can be easily used by the professors of the DLSIIS when they need to create a new exam. The main problems of the previous creation process were the difficulty of searching for a question that meets specific conditions in the previous exam files, and the difficulty of editing exams because of the format of the text files employed. Result: The results shown in this document allow the reader to understand how the final application works and how it successfully addresses every customer need. The elements that will help the reader understand the application are the structure of the application, the design of the different components, diagrams that show the workflow of the application, and selected fragments of code. Conclusions: The goals stated in the application requirements are ultimately met. In addition, some thoughts are given on the work performed during the development of the application and how it improved the author's skills in web development.
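A minimal sketch of the core improvement over the old text-file workflow: keeping the questions in a structured bank that can be filtered by attributes. The field names and query conditions are hypothetical, not the application's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    topic: str
    difficulty: int            # e.g., 1 (easy) to 5 (hard)
    choices: list = field(default_factory=list)
    used_in: list = field(default_factory=list)  # past exam identifiers

BANK = [
    Question("What does SQL stand for?", "databases", 1,
             ["Structured Query Language", "Simple Query List"], ["2012-jan"]),
    Question("Explain normalization to 3NF.", "databases", 4),
]

def search(bank, topic=None, max_difficulty=None, unused_only=False):
    """Filter the question bank by the conditions a professor specifies."""
    hits = bank
    if topic is not None:
        hits = [q for q in hits if q.topic == topic]
    if max_difficulty is not None:
        hits = [q for q in hits if q.difficulty <= max_difficulty]
    if unused_only:
        hits = [q for q in hits if not q.used_in]
    return hits

print([q.text for q in search(BANK, topic="databases", unused_only=True)])
```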
Abstract:
Searching for nervous system candidates that could directly induce T cell cytokine secretion, I tested four neuropeptides (NPs): somatostatin, calcitonin gene-related peptide, neuropeptide Y, and substance P. Comparing neuropeptide-driven versus classical antigen-driven cytokine secretion from Th0, Th1, and Th2 autoimmune-related T helper cell populations, I show that the tested NPs, in the absence of any additional factors, directly induce a marked secretion of cytokines [interleukin 2 (IL-2), interferon-γ, IL-4, and IL-10] from T cells. Furthermore, NPs drive distinct Th1 and Th2 populations to a "forbidden" cytokine secretion: secretion of Th2 cytokines from a Th1 T cell line and vice versa. Such a phenomenon cannot be induced by classical antigenic stimulation. My study suggests that the nervous system, through NPs interacting with their specific T cell-expressed receptors, can lead to the secretion of both typical and atypical cytokines, to the breakdown of the commitment to a distinct Th phenotype, and to a potentially altered function and destiny of T cells in vivo.
Abstract:
Normal human luminal and myoepithelial breast cells separately purified from a set of 10 reduction mammoplasties by using a double antibody magnetic affinity cell sorting and Dynabead immunomagnetic technique were used in two-dimensional gel proteome studies. A total of 43,302 proteins were detected across the 20 samples, and a master image for each cell type comprising a total of 1,738 unique proteins was derived. Differential analysis identified 170 proteins that were elevated 2-fold or more between the two breast cell types, and 51 of these were annotated by tandem mass spectrometry. Muscle-specific enzyme isoforms and contractile intermediate filaments including tropomyosin and smooth muscle (SM22) alpha protein were detected in the myoepithelial cells, and a large number of cytokeratin subclasses and isoforms characteristic of luminal cells were detected in this cell type. A further 134 nondifferentially regulated proteins were also annotated from the two breast cell types, making this the most extensive study to date of the protein expression map of the normal human breast and the basis for future studies of purified breast cancer cells.
Abstract:
Primitive subsets of leukemic cells isolated by using fluorescence-activated cell sorting from patients with newly diagnosed Ph+/BCR–ABL+ chronic myeloid leukemia display an abnormal ability to proliferate in vitro in the absence of added growth factors. We now show from analyses of growth-factor gene expression, protein production, and antibody inhibition studies that this deregulated growth can be explained, at least in part, by a novel differentiation-controlled autocrine mechanism. This mechanism involves the consistent and selective activation of IL-3 and granulocyte colony-stimulating factor (G-CSF) production and a stimulation of STAT5 phosphorylation in CD34+ leukemic cells. When these cells differentiate into CD34− cells in vivo, IL-3 and G-CSF production declines, and the cells concomitantly lose their capacity for autonomous growth in vitro despite their continued expression of BCR–ABL. Based on previous studies of normal cells, excessive exposure of the most primitive chronic myeloid leukemia cells to IL-3 and G-CSF through an autocrine mechanism could explain their paradoxically decreased self-renewal in vitro and slow accumulation in vivo, in spite of an increased cycling activity and selective expansion of later compartments.
Abstract:
Drosophila Mad proteins are intracellular signal transducers of decapentaplegic (dpp), the Drosophila transforming growth factor β (TGF-β)/bone morphogenic protein (BMP) homolog. Studies in which the mammalian Smad homologs were transiently overexpressed in cultured cells have implicated Smad2 in TGF-β signaling, but the physiological relevance of the Smad3 protein in signaling by TGF-β receptors has not been established. Here we stably expressed Smad proteins at controlled levels in epithelial cells using a novel approach that combines highly efficient retroviral gene transfer and quantitative cell sorting. We show that upon TGF-β treatment Smad3 becomes rapidly phosphorylated at the SSVS motif at its very C terminus. Either attachment of an epitope tag to the C terminus or replacement of these three serine residues with alanine abolishes TGF-β-induced Smad3 phosphorylation; these proteins act in a dominant-negative fashion to block the antiproliferative effect of TGF-β in mink lung epithelial cells. A Smad3 protein in which the three C-terminal serines have been replaced by aspartic acids is also a dominant inhibitor of TGF-β signaling, but can activate plasminogen activator inhibitor 1 (PAI-1) transcription in a ligand-independent fashion when its nuclear localization is forced by transient overexpression. Phosphorylation of the three C-terminal serine residues of Smad3 by an activated TGF-β receptor complex is an essential step in signal transduction by TGF-β for both inhibition of cell proliferation and activation of the PAI-1 promoter.