598 results for Kannan Mappings
Abstract:
We prove analogs of classical almost sure dimension theorems for Euclidean projection mappings in the first Heisenberg group, equipped with a sub-Riemannian metric.
Abstract:
We study Hausdorff and Minkowski dimension distortion for images of generic affine subspaces of Euclidean space under Sobolev and quasiconformal maps. For a supercritical Sobolev map $f$ defined on a domain in $\mathbb{R}^n$, we estimate from above the Hausdorff dimension of the set of affine subspaces parallel to a fixed $m$-dimensional linear subspace whose image under $f$ has positive $\mathcal{H}^\alpha$ measure for some fixed $\alpha > m$. As a consequence, we obtain new dimension distortion and absolute continuity statements valid for almost every affine subspace. Our results hold for mappings taking values in arbitrary metric spaces, yet are new even for quasiconformal maps of the plane. We illustrate our results with numerous examples.
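Schematically, and in our own shorthand (the exceptional set $E_\alpha$ and the bound $\beta$ below stand in for the paper's precise statement), the estimate concerns the set of bad parameters

\[
E_\alpha = \bigl\{\, a \in V^{\perp} : \mathcal{H}^{\alpha}\bigl(f(a+V)\bigr) > 0 \,\bigr\}, \qquad \dim_{\mathrm{H}} E_\alpha \le \beta(n, m, p, \alpha),
\]

where $V$ is the fixed $m$-dimensional linear subspace, $a + V$ ranges over the affine subspaces parallel to it, and $p > n$ is the supercritical Sobolev exponent; an upper bound of this kind, making $E_\alpha$ negligible in $V^{\perp}$, is what yields the almost-every-subspace distortion and absolute continuity statements.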
Abstract:
Implicit task sequence learning (TSL) can be considered an extension of implicit sequence learning, which is typically tested with the classical serial reaction time task (SRTT). By design, in the SRTT there is a correlation between the sequence of stimuli to which participants must attend and the sequence of motor movements/key presses with which participants must respond. The TSL paradigm makes it possible to disentangle this correlation and to separately manipulate the presence/absence of a sequence of tasks, a sequence of responses, and even other streams of information such as stimulus locations or stimulus-response mappings. Here I review the state of TSL research, which seems to point to the critical role of the presence of correlated streams of information in implicit sequence learning. On a more general level, I propose that beyond correlated streams of information, a simple statistical learning mechanism may also be involved in implicit sequence learning, and that the relative contributions of these two explanations differ according to task requirements. With this differentiation, conflicting results can be integrated into a coherent framework.
Abstract:
The tumor suppressor p16 is a negative regulator of the cell cycle; it acts by preventing the phosphorylation of RB, which in turn prevents the progression from G1 to S phase of the cell cycle. In addition to its role in the cell cycle, p16 may also be able to induce apoptosis in some tumors. Ewing's sarcoma, a pediatric cancer of the bone and soft tissue, was used to study the ability of p16 to induce apoptosis because p16 is often deleted in Ewing's sarcoma tumors and may play a role in the oncogenesis or progression of this disease. The purpose of these studies was to determine whether introduction of p16 into Ewing's sarcoma cells would induce apoptosis. We infected the Ewing's sarcoma cell line TC71, which does not express p16, with adenovirus-p16 (Ad-p16). Ad-p16 infection led to the production of functional p16, as measured by the induction of G1 arrest, inducing as much as a 100% increase in G1 arrest compared to untreated cells. As measured by propidium iodide (PI) and Annexin V staining, Ad-p16 induced apoptosis at levels 20- to 30-fold higher than controls. Furthermore, Ad-p16 infection led to loss of RB protein before apoptosis could be detected. The loss of RB protein was due to post-translational degradation of RB, which was inhibited by the addition of the proteasome inhibitors PS-341 and NPI-0052. Downregulation of RB with siRNA sensitized cells to Ad-p16-induced apoptosis, indicating that RB protects against apoptosis in this model. This study shows that p16 leads to the degradation of RB by the ubiquitin/proteasome pathway, and that this degradation may be important for the induction of apoptosis. Given that RB may protect against apoptosis in some tumors, apoptosis-inducing therapies may be enhanced in tumors that have lost RB expression, or in which RB is artificially inactivated.
Abstract:
This paper reconstructs the different maps of the city that are traced through the itineraries, locations, and relationships of the characters in several Argentine novels published in the 1880s, in order to show that this urban cartography is the expression of a structure of feeling that legitimizes the hegemony of a class and that is connected, in turn, with the need to reduce and simplify the increasingly problematic social relations of the modern city.
Abstract:
A long-running interdisciplinary research project on the development of the landscape, prehistoric habitation and the history of vegetation within a "siedlungskammer" (a limited habitation area occupied from Neolithic to modern times) has been carried out in the NW German lowlands. The siedlungskammer Flögeln is situated between the rivers Weser and Elbe and comprises about 23.5 km^2. It is an isolated Pleistocene area surrounded by bogs, its soils consisting mainly of poor sands. In this siedlungskammer, large-scale archaeological excavations and mappings have been performed, parallel to pedological, historical and above all pollen-analytical investigations. The aim of the project is to record the individual phases in time, to delimit the respective settlement areas and to reconstruct the conditions of life and economy for each time period. A dense network of 10 pollen diagrams has been constructed. Several of them derive from the marginal area and from the centre of the large raised bog north of the siedlungskammer. These diagrams reflect the history of vegetation and habitation of a large region; owing to the large pollen source area, the habitation phases in the diagrams are poorly defined. Even in the outermost marginal diagram of this woodless bog, a large village with adjoining fields, situated only 100 m away, is registered with only low values of anthropogenic indicators. In contrast, the numerous pollen diagrams from kettle-hole bogs inside the siedlungskammer yield an exact picture of the habitation of the siedlungskammer and its individual parts. Early traces of habitation can be identified in the pollen diagrams soon after the elm decline (around 5190 BP). Some time later, in the middle Neolithic period, there follows a marked habitation phase, which starts between 4500 and 4400 BP and reflects the immigration of the Trichterbecher culture. It corresponds to the landnam phase of Iversen in Denmark and begins with a sharp decline of the pollen curves of lime and oak, followed by an increase of anthropogenic indicators pointing to arable and pastoral farming. High values of wild grasses and Calluna attest to extensive forest grazing. This middle to late Neolithic habitation is also registered archaeologically by settlements and numerous graves. After low human activity during the Bronze Age and Older Iron Age, the archaeological and pollen-analytical records of the Roman and Migration periods are again very strong. This is followed by a gap in habitation during the 6th and 7th centuries and afterwards, in the western part of the siedlungskammer, from about AD 700 until the 14th century, by the activity of the medieval village of Dalem, which was also excavated and whose fields were recorded by phosphate mapping to a size of 117 hectares. This medieval settlement phase is marked by intensive cereal cultivation (mainly rye). The dense network of pollen diagrams offers an opportunity to register the dispersion of the anthropogenic indicators from the areas of settlement over different distances and thus to obtain quantitative clues for the assessment of these anthropogenic indicators in pollen diagrams. In fig. 4 the reflection of the Neolithic culture in the kettle-hole bogs and the large raised bog is shown in three phases: a) pre-landnam, b) TRB landnam, c) post-landnam. Among arboreal pollen, the reaction of Quercus is sharp close to the settlement but is not found in more distant profiles, whilst Tilia, in contrast, shows a significant decline even far away from the settlements.
The record of most anthropogenic indicators outside the habitation area is very low; in particular, cereal pollen is poorly dispersed. A much more reliable indicator of habitation (and of arable farming) is Plantago lanceolata. A strong increase of wild grasses (and partly Calluna as well) at some distance from the habitation areas indicates far-reaching forest grazing. Fig. 5 illustrates the reflection of the anthropogenic indicators from the medieval village of Dalem. In this instance the field area could be mapped exactly using phosphate investigations, and it has been possible to indicate the precise distances of the profile sites from the medieval fields. Here also there is a clear correlation between decreasing anthropogenic indicators and increasing distance. In a kettle-hole bog (FLH) at a distance of 3000 m, this marked settlement phase is not registered. The contrast between the pollen diagrams SWK and FLH (figs. 2 and 3, enclosure) illustrates the strong differences between diagrams from kettle-hole bogs close to and distant from the settlements, for the Neolithic as well as for the medieval period. On the basis of the examples presented here, implications for the interpretation of pollen diagrams with respect to habitation phases are discussed.
Abstract:
Recently, the Semantic Web has experienced significant advancements in standards and techniques, as well as in the amount of semantic information available online. Nevertheless, mechanisms are still needed to automatically reconcile information expressed in different natural languages on the Web of Data, in order to improve access to semantic information across language barriers. In this context several challenges arise [1], such as: (i) ontology translation/localization, (ii) cross-lingual ontology mappings, (iii) representation of multilingual lexical information, and (iv) cross-lingual access and querying of linked data. In the following we focus on the second challenge: the necessity of establishing, representing and storing cross-lingual links among semantic information on the Web. In fact, in a “truly” multilingual Semantic Web, semantic data with lexical representations in one natural language would be mapped to equivalent or related information in other languages, thus making navigation across multilingual information possible for software agents.
Abstract:
The Web has witnessed an enormous growth in the amount of semantic information published in recent years. This growth has been stimulated to a large extent by the emergence of Linked Data. Although this brings us a big step closer to the vision of a Semantic Web, it also raises new issues, such as the need to deal with information expressed in different natural languages. Indeed, although the Web of Data can contain any kind of information in any language, it still lacks explicit mechanisms to automatically reconcile such information when it is expressed in different languages. This leads to situations in which data expressed in a certain language is not easily accessible to speakers of other languages. The Web of Data shows the potential for being extended to a truly multilingual web, as vocabularies and data can be published in a language-independent fashion, while the associated language-dependent (linguistic) information supporting access across languages can be stored separately. In this sense, the multilingual Web of Data can be realized in our view as a layer of services and resources on top of the existing Linked Data infrastructure, adding i) linguistic information for data and vocabularies in different languages, ii) mappings between data with labels in different languages, and iii) services to dynamically access and traverse Linked Data across different languages. In this article we present this vision of a multilingual Web of Data. We discuss the challenges that need to be addressed to make this vision come true, as well as the role that techniques such as ontology localization, ontology mapping, and cross-lingual ontology-based information access and presentation will play in achieving it. Further, we propose an initial architecture and describe a roadmap that can provide a basis for the implementation of this vision.
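As a rough illustration of point ii), here is a minimal sketch, using rdflib, of how a cross-lingual link between two resources plus language-tagged labels might be represented; the resource URIs and the choice of owl:sameAs as the linking property are our own illustrative assumptions, not a prescription from the article:

```python
# A minimal sketch (assumed vocabulary and URIs): representing a
# cross-lingual link between an English and a Spanish resource,
# together with language-tagged labels, using rdflib.
from rdflib import Graph, URIRef, Literal
from rdflib.namespace import OWL, RDFS

g = Graph()

city_en = URIRef("http://dbpedia.org/resource/City")
city_es = URIRef("http://es.dbpedia.org/resource/Ciudad")

# Cross-lingual identity link between the two datasets.
g.add((city_en, OWL.sameAs, city_es))

# Language-dependent lexical information stored alongside the data.
g.add((city_en, RDFS.label, Literal("city", lang="en")))
g.add((city_es, RDFS.label, Literal("ciudad", lang="es")))

print(g.serialize(format="turtle"))
```

Storing such links separately from the language-independent data is what would allow a software agent to traverse from a resource labelled in one language to its equivalents in others.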
Abstract:
This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources for their combination. To contribute to overcoming this problem, a framework is defined for the discovery of services and resources. In this framework, discovery is performed at three levels: content, service, and agent. The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as Screen Scraping in the literature, is the content discovery according to the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them. The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after the analysis of service repositories from the web. The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow behaviours to be specified for crawling and executing services, which results in the fulfilment of a high-level goal. Agent rules are plans that introspect the discovered data and services from the web, as well as the knowledge present in service and content discovery rules, to anticipate the contents and services to be found on specific resources from the web. Through the definition of plans, an agent can be configured to target specific resources. The discovery framework has been evaluated on different scenarios, each covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers, and the framework was used for the discovery and extraction of pieces of news from the web.
Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered in component repositories on the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can support complex behaviours and orchestrations of tasks on the web. The main contribution of the thesis is the unified discovery framework, which allows agents to be configured to perform automated tasks. In addition, a scraping ontology has been defined for the construction of mappings for scraping web resources, and a novel first-order logic rule induction algorithm is defined for the automated construction and maintenance of these mappings from the visual information in web resources. Additionally, a common unified model for the discovery of services is defined, which allows service descriptions to be shared. Future work comprises the further extension of service probing, resource ranking, the extension of the Scraping Ontology, extensions of the agent model, and the construction of a base of discovery rules.
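As a rough illustration of the content level, here is a minimal sketch of what a discovery rule mapping pieces of an HTML representation onto semantic entities could look like; the rule format, CSS selectors, and vocabulary URIs are hypothetical, since the thesis defines its own scraping ontology and rule language:

```python
# A minimal sketch of content-level discovery (Screen Scraping):
# hypothetical rules map CSS selectors in a resource's HTML
# representation onto semantic properties. The selector/property
# pairs and vocabulary URIs are illustrative assumptions.
from bs4 import BeautifulSoup

# Hypothetical discovery rules: semantic property -> CSS selector.
DISCOVERY_RULES = {
    "http://schema.org/headline": "article h1.headline",
    "http://schema.org/author": "article span.byline",
    "http://schema.org/datePublished": "article time",
}

def discover_contents(html: str) -> dict:
    """Apply discovery rules to a representation, returning
    semantically keyed contents extracted from the markup."""
    soup = BeautifulSoup(html, "html.parser")
    entity = {}
    for prop, selector in DISCOVERY_RULES.items():
        node = soup.select_one(selector)
        if node is not None:
            entity[prop] = node.get_text(strip=True)
    return entity

html = """
<article>
  <h1 class="headline">Example headline</h1>
  <span class="byline">Jane Doe</span>
  <time>2012-05-01</time>
</article>
"""
print(discover_contents(html))
```

In the thesis's terms, applying such rules to representations retrieved in REST interactions is what yields semantically described contents, over which service probing and agent plans can then operate.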
Abstract:
Sensor networks are increasingly becoming one of the main sources of Big Data on the Web. However, the observations that they produce are made available with heterogeneous schemas, vocabularies and data formats, making it difficult to share and reuse these data for purposes other than those for which they were originally set up. In this thesis we address these challenges, considering how we can transform streaming raw data into rich ontology-based information that is accessible through continuous queries for streaming data. Our main contribution is an ontology-based approach for providing data access and query capabilities to streaming data sources, allowing users to express their needs at a conceptual level, independent of implementation and language-specific details. We introduce novel query rewriting and data translation techniques that rely on mapping definitions relating streaming data models to ontological concepts. Specific contributions include:
• The syntax and semantics of the SPARQLStream query language for ontology-based data access, and a query rewriting approach for transforming SPARQLStream queries into streaming algebra expressions.
• The design of an ontology-based streaming data access engine that can internally reuse an existing data stream engine, complex event processor or sensor middleware, using R2RML mappings for defining relationships between streaming data models and ontology concepts.
Concerning the sensor metadata of such streaming data sources, we have investigated how we can use raw measurements to characterize streaming data, producing enriched data descriptions in terms of ontological models. Our specific contributions are:
• A representation of sensor data time series that captures gradient information that is useful to characterize types of sensor data.
• A method for classifying sensor data time series and determining the type of data, using data mining techniques, and a method for extracting semantic sensor metadata features from the time series.
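As a rough illustration of the last two contributions, here is a minimal sketch of gradient-based characterization of a sensor time series; the particular feature set and the example signals are our own illustrative assumptions, not the thesis's actual method:

```python
# A minimal sketch (assumed feature choices): summarizing a sensor
# time series by gradient information, as a basis for classifying
# the type of data it carries.
import numpy as np

def gradient_features(values: np.ndarray) -> dict:
    """Summarize a time series by simple gradient statistics."""
    g = np.gradient(values)
    return {
        "mean_gradient": float(np.mean(g)),
        "abs_mean_gradient": float(np.mean(np.abs(g))),
        "max_abs_gradient": float(np.max(np.abs(g))),
        # Number of direction changes: high for noisy signals.
        "sign_changes": int(np.sum(np.diff(np.sign(g)) != 0)),
    }

# Hypothetical usage: a slowly varying series (e.g., air temperature)
# versus a rapidly fluctuating one (e.g., wind speed).
t = np.linspace(0, 24, 200)
temperature = 15 + 5 * np.sin(2 * np.pi * t / 24)
wind = 10 + np.random.default_rng(0).normal(0, 2, t.size)

print(gradient_features(temperature))
print(gradient_features(wind))
```

Feature vectors of this kind could then be fed to a standard classifier to determine the type of data, which is the role the thesis assigns to its data mining techniques.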
Abstract:
Cognitive linguistics has conscientiously pointed out the pervasiveness of conceptual mappings, particularly conceptual blending and integration, that underlie language and that are unconsciously used in everyday speech (Fauconnier 1997; Fauconnier & Turner 2002; Rohrer 2007; Grady, Oakley & Coulson 1999). Moreover, as a further development of this work, there is a growing interest in research devoted to the conceptual mappings that make up specialized technical disciplines. Lakoff & Núñez (2000), for example, have produced a major breakthrough in the understanding of concepts in mathematics through conceptual metaphor, viewing them as the result not of purely abstract concepts but rather of embodiment. On the engineering and architecture front, analyses of the use of metaphor, blending and categorization in English and Spanish have likewise appeared in recent times (Úbeda 2001; Roldán 1999; Caballero 2003a, 2003b; Roldán & Úbeda 2006; Roldán & Protasenia 2007). The present paper seeks to show a number of significant conceptual mappings underlying the language of architecture and civil engineering that seem to shape the way engineers and architects communicate. In order to work with a significant segment of linguistic expressions in this field, a corpus taken from a widely used technical Spanish engineering journal article was collected and analysed. Examination of the data obtained indicates that many tokens make direct reference to therapeutic conceptual mappings, highlighting medical domains such as diagnosing, treating and curing. Hence, the paper illustrates how this notion is instantiated by the corresponding bodily conceptual integration. In addition, we wish to underline the function of visual metaphors in the world of modern architecture, evoking parts of human or animal anatomy, and to show how this is visibly noticeable in contemporary buildings and public works structures.
Abstract:
An important objective of the INTEGRATE project is to build tools that support the efficient execution of post-genomic multi-centric clinical trials in breast cancer, which includes the automatic assessment of the eligibility of patients for available trials. The population suited to be enrolled in a trial is described by a set of free-text eligibility criteria that are both syntactically and semantically complex. At the same time, assessing the eligibility of a patient for a trial requires a (machine-processable) understanding of the semantics of the eligibility criteria, in order to further evaluate whether the patient data available, for example, in the hospital EHR satisfy these criteria. This paper presents an analysis of the semantics of clinical trial eligibility criteria based on relevant medical ontologies in the clinical research domain: SNOMED-CT, LOINC, and MedDRA. We detect subsets of these widely adopted ontologies that characterize the semantics of the eligibility criteria of trials in various clinical domains and compare these sets. Next, we evaluate the occurrence frequency of the concepts in the concrete case of breast cancer (our first application domain) in order to provide meaningful priorities for the task of binding/mapping these ontology concepts to the actual patient data. We further assess the effort required to extend our approach to new domains in terms of the additional semantic mappings that need to be developed.
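A minimal sketch of the frequency analysis described above; the criteria and their concept annotations are fabricated placeholders, since in practice eligibility criteria would be annotated with actual SNOMED-CT, LOINC, or MedDRA codes by an annotation tool:

```python
# A minimal sketch (hypothetical annotations): counting how often
# ontology concepts occur across trial eligibility criteria, to
# prioritize which concepts to bind/map to patient data first.
from collections import Counter

# Fabricated example: eligibility criteria already annotated with
# (code, preferred term) pairs; the codes are placeholders, not
# verified SNOMED-CT / LOINC identifiers.
annotated_criteria = [
    [("SNOMED:254837009", "Breast cancer"), ("LOINC:21893-3", "HER2 status")],
    [("SNOMED:254837009", "Breast cancer"), ("SNOMED:373802001", "ER positive")],
    [("SNOMED:271737000", "Anemia"), ("LOINC:21893-3", "HER2 status")],
]

concept_counts = Counter(
    concept for criterion in annotated_criteria for concept in criterion
)

# Most frequent concepts first: meaningful priorities for the
# concept-to-patient-data mapping task.
for (code, term), count in concept_counts.most_common():
    print(f"{count:3d}  {code}  {term}")
```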
Abstract:
The use of semantic and Linked Data technologies for Enterprise Application Integration (EAI) has been increasing in recent years. Linked Data and Semantic Web technologies such as the Resource Description Framework (RDF) data model provide several key advantages over the current de facto Web Service- and XML-based integration approaches. The flexibility provided by representing the data in the more versatile RDF model using ontologies makes it possible to avoid complex schema transformations, makes data more accessible through Web standards, and prevents the formation of data silos. These three benefits give Linked Data-based EAI an edge. However, work still has to be done so that these technologies can cope with the particularities of EAI scenarios in terms such as data control, ownership, consistency, and accuracy. The first part of the paper provides an introduction to Enterprise Application Integration using Linked Data and the requirements that EAI imposes on Linked Data technologies, focusing on one of the problems that arise in this scenario, the coreference problem, and presents a coreference service that supports the use of Linked Data in EAI systems. The proposed solution introduces the use of a context that aggregates a set of related identities and mappings from the identities to different resources that reside in distinct applications and provide different views or aspects of the same entity. A detailed architecture of the Coreference Service is presented, explaining how it can be used to manage the contexts, identities, resources, and applications to which they relate. The paper shows how the proposed service can be used in an EAI scenario through an example involving a dashboard that integrates data from different systems, together with the proposed workflow for registering and resolving identities. As most enterprise applications are driven by business processes and involve legacy data, the proposed approach can be easily incorporated into enterprise applications.
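A minimal sketch of the context notion described above; the class, field, and method names, as well as the example identifiers, are our own assumptions, since the paper's Coreference Service defines its actual interface:

```python
# A minimal sketch (assumed names): a coreference context that
# aggregates the identities an entity has in distinct applications,
# with mappings from each identity to the resources that provide
# different views of the same entity.
from dataclasses import dataclass, field

@dataclass
class Context:
    """Aggregates related identities and their resources."""
    entity: str
    # identity URI -> owning application
    identities: dict[str, str] = field(default_factory=dict)
    # identity URI -> resource URIs exposing views of the entity
    resources: dict[str, list[str]] = field(default_factory=dict)

    def register(self, identity: str, application: str, resource: str) -> None:
        """Register an identity from an application and one of its resources."""
        self.identities[identity] = application
        self.resources.setdefault(identity, []).append(resource)

    def resolve(self, identity: str) -> list[str]:
        """Return every resource reachable from any identity in this
        context, i.e., all available views of the same entity."""
        if identity not in self.identities:
            return []
        return [r for rs in self.resources.values() for r in rs]

# Hypothetical usage in an EAI dashboard integrating two systems.
ctx = Context(entity="customer:acme-corp")
ctx.register("crm:cust/42", "CRM", "http://crm.example/customers/42")
ctx.register("erp:acct/A17", "ERP", "http://erp.example/accounts/A17")
print(ctx.resolve("crm:cust/42"))
```

In this reading, a dashboard would query the service with whichever identity it knows and receive back the resources held by the other applications, which is the registration-and-resolution workflow the paper describes.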