887 results for LINK-BASED AND MULTIDIMENSIONAL QUERY LANGUAGE (LMDQL)
Abstract:
The continuous advancements and enhancements of wireless systems are enabling new compelling scenarios where mobile services can adapt to the current execution context, represented by the computational resources available on the local device, the current physical location, people in physical proximity, and so forth. Such services, called context-aware services, require the timely delivery of all relevant information describing the current context, and this introduces several unsolved complexities, spanning from low-level context data transmission up to context data storage and replication within the mobile system. In addition, to ensure correct and scalable context provisioning, it is crucial to integrate and interoperate with different wireless technologies (WiFi, Bluetooth, etc.) and modes (infrastructure-based and ad-hoc), and to use decentralized solutions to store and replicate context data on mobile devices. These challenges call for novel middleware solutions, here called Context Data Distribution Infrastructures (CDDIs), capable of delivering relevant context data to mobile devices while hiding all the issues introduced by data distribution in heterogeneous and large-scale mobile settings. This dissertation thoroughly analyzes CDDIs for mobile systems, with the main goal of achieving a holistic approach to the design of this type of middleware. We discuss the main functions needed for context data distribution in large mobile systems, and we argue for the precise definition and strict enforcement of quality-based contracts between context consumers and the CDDI, used to reconfigure the main middleware components at runtime. We present the design and implementation of our proposals, in both simulation-based and real-world scenarios, along with an extensive evaluation that confirms the technical soundness of the proposed CDDI solutions. Finally, we consider three highly heterogeneous scenarios, namely disaster areas, smart campuses, and smart cities, to further demonstrate the broad technical validity of our analysis and solutions under different network deployments and quality constraints.
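To make the notion of a quality-based contract between a context consumer and a CDDI more concrete, the following minimal Python sketch models a contract as a set of delivery-quality thresholds and checks observed quality against them; all class and field names (QualityContract, max_latency_ms, etc.) are hypothetical illustrations and are not taken from the dissertation's actual middleware API.

```python
# Hypothetical sketch of a quality-based contract between a context consumer and a
# CDDI: the contract fixes delivery-quality thresholds, and a violation signals that
# the middleware should reconfigure its distribution components at runtime.
# All class and field names are illustrative, not the dissertation's actual API.
from dataclasses import dataclass


@dataclass
class QualityContract:
    max_latency_ms: float   # maximum acceptable delivery delay
    max_age_s: float        # delivered context data must be no older than this
    min_coverage: float     # minimum fraction of relevant context sources reached


@dataclass
class ObservedQuality:
    latency_ms: float
    age_s: float
    coverage: float


def contract_violated(contract: QualityContract, observed: ObservedQuality) -> bool:
    """Return True when observed delivery quality breaks the contract."""
    return (observed.latency_ms > contract.max_latency_ms
            or observed.age_s > contract.max_age_s
            or observed.coverage < contract.min_coverage)


if __name__ == "__main__":
    contract = QualityContract(max_latency_ms=200, max_age_s=30, min_coverage=0.8)
    observed = ObservedQuality(latency_ms=350, age_s=12, coverage=0.9)
    print(contract_violated(contract, observed))  # True -> trigger runtime reconfiguration
```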
Abstract:
The research situates the specific theme of the residence within the author's body of work. The residence constitutes the field of application of the architectural project in which the characteristic traits of the architect's design method can be most effectively traced, and it provides the interpretive key of the proposed study. The process leading to the material constitution of architecture is considered through the phases into which it is decomposed, the tools it adopts, the objectives it sets, its relationship with production systems, and the way it addresses form and programme, and it is compared with the extensive literature found in the thought of several authors close to Ignazio Gardella. In this way the study defines the traits of a methodology strongly marked by realism, one that brings coherence to an empirical and rational inquiry tied to an idea of classical architecture of Enlightenment origin and attentive to the demands of modernity, within which the linguistic heteronomy that characterizes one of the distinctive traits of Ignazio Gardella's architecture is realized; an aspect repeatedly interpreted as belonging to the twentieth-century movements that constantly intersect the architect's long career. The analysis of the residential work is conducted not on exemplary cases but on the totality of the projects, and it also draws on unpublished material. It is understood as a personal research path into compositional processes and the use of language, and it allows a repositioning of Gardella's figure in relation to the making of architecture and its realization, rather than to the will to conform to styles or a priori norms. It is the practical dimension, that of the craft, which lends itself best to the interpretation of Gardella's projects. The architect's residences stand out for their capacity to adapt to the constraints of place, client, and technology through formal reinterpretation and the transfer, from one theme to another, of essential elements that convey through their image a precise idea of house and of architecture, not authorial but recognizable and timeless.
Abstract:
This dissertation, Die Universalität der Hermeneutik (The Universality of Hermeneutics), addresses two questions. First: can psychoanalysis, more concretely the transference situation in scenic understanding, constitute an exception to the universality of hermeneutics? Second: is critical reflection possible at all, and can the universality of hermeneutics and critical reflection be reconciled with one another? Through scenic understanding, Habermas explains the process by which the patient's everyday language is analyzed by the analyst. Especially in the transference situation, the object of the patient's symptom is none other than the analyst himself, and the analyst can therefore take part in his patient's symptom. Through this participation he can grasp the meaning of his patient's symptom precisely. But if the analyst had not accepted in everyday language the unconscious that his patient reveals to him, or in other words, if the patient's unconscious had not shown itself to the analyst as "something said", the analyst could not have grasped anything from it. Consequently, psychoanalysis cannot serve as a counterexample to the universality of hermeneutics. For critical reflection to be possible, our consciousness and language would above all have to be separated from one another. This thesis therefore argues that, although such a separation is in fact obviously impossible, the two can nevertheless be separated and distinguished in purely conceptual terms. In that case, the relation of influence between language and extra-linguistic factors can be subdivided and differentiated into the relation between "labour and domination" and language, and the relation between language and our consciousness: labour and domination exert influence on language, and language exerts influence on our consciousness. And from the fact that the practice of understanding changes, one can show that our consciousness can exert influence on labour and domination. This means that our consciousness, although only indirectly, can certainly also exert influence on language. Consequently, if the universality of hermeneutics is valid, one can likewise say that, in the same way, Habermas's critical reflection is possible. These relations of influence are, however, in constant circular motion. This circular movement is itself the essence of the human being, and from it his history takes shape.
Abstract:
The interactions present between all objects can be repulsive or attractive in nature. Among the attractive forces, the determination of dispersion forces is of particular importance, since they are present in all colloidal systems and decisively influence the properties and processes of these systems. One way of linking theory and experiment is to describe the London-van der Waals interaction by means of the Hamaker constant, which can be obtained from calculations of the interaction energy between objects. The description of surface phenomena such as adhesion, which are expressed in terms of the total potential energy between particles and substrate, requires precisely determined Hamaker constants. In the present work, asymmetric flow field-flow fractionation was combined with an iterative procedure based on the Newton algorithm to determine the effective Hamaker constants of various nanoparticles as well as of polystyrene latex particles in toluene and in water. The influence of various system parameters and particle properties was investigated within the framework of classical DLVO theory.
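As a rough illustration of the Newton-based iteration mentioned above, the sketch below applies the standard Newton-Raphson update to a generic residual function; the residual `f` is only a placeholder for the (unspecified) relation between a trial Hamaker constant and the measured fractionation data, so both the function and the starting value are assumptions made purely for illustration.

```python
# Generic Newton-Raphson iteration, shown as a stand-in for the Newton-based procedure
# used to determine an effective Hamaker constant; the residual function below is a
# placeholder, not the actual relation derived from the fractionation data.
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Find x with f(x) ~ 0 using the update x <- x - f(x) / f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")


if __name__ == "__main__":
    measured = 2.5                          # hypothetical measured quantity
    f = lambda a: a ** 2 - measured         # placeholder model(a) - measurement
    df = lambda a: 2.0 * a                  # its derivative
    print(newton(f, df, x0=1.0))            # ~ sqrt(2.5)
```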
Abstract:
This thesis investigates methods and software architectures for discovering the typical and frequently occurring structures used for organizing knowledge in the Web. We identify these structures as Knowledge Patterns (KPs). KP discovery needs to address two main research problems: the heterogeneity of sources, formats, and semantics in the Web (i.e., the knowledge soup problem) and the difficulty of drawing a relevant boundary around data that captures the meaningful knowledge with respect to a certain context (i.e., the knowledge boundary problem). Hence, we introduce two methods that provide different solutions to these two problems by tackling KP discovery from two different perspectives: (i) the transformation of KP-like artifacts into KPs formalized as OWL2 ontologies; (ii) the bottom-up extraction of KPs by analyzing how data are organized in Linked Data. The two methods address the knowledge soup and boundary problems in different ways. The first method is based on a purely syntactic transformation step from the original source to RDF, followed by a refactoring step whose aim is to add semantics to the RDF by selecting meaningful RDF triples. The second method draws boundaries around RDF data in Linked Data by analyzing type paths. A type path is a possible route through an RDF graph that takes into account the types associated with the nodes of the path. We then present K~ore, a software architecture conceived as the basis for developing KP discovery systems and designed according to two software architectural styles, i.e., Component-based and REST. Finally, we provide an example of KP reuse based on Aemoo, an exploratory search tool that exploits KPs for entity summarization.
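To illustrate what a type path is, the following small Python sketch walks a property path through a toy set of RDF-like triples and abstracts each node to its rdf:type, yielding the type path; the triples, types, and prefixes are invented examples, not data or code from the thesis.

```python
# Toy illustration of a "type path": a route through an RDF graph where each node on
# the path is abstracted to its rdf:type. Triples and type assignments are invented.
triples = [
    ("ex:Bologna", "ex:locatedIn", "ex:Italy"),
    ("ex:Italy", "ex:partOf", "ex:Europe"),
]
types = {
    "ex:Bologna": "ex:City",
    "ex:Italy": "ex:Country",
    "ex:Europe": "ex:Continent",
}


def type_path(start, triples, types):
    """Follow outgoing properties from `start` and record (type, property, type) steps."""
    path, node = [], start
    while True:
        out = [(p, o) for s, p, o in triples if s == node]
        if not out:
            return path
        prop, nxt = out[0]
        path.append((types[node], prop, types[nxt]))
        node = nxt


print(type_path("ex:Bologna", triples, types))
# [('ex:City', 'ex:locatedIn', 'ex:Country'), ('ex:Country', 'ex:partOf', 'ex:Continent')]
```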
Abstract:
Food suppliers currently measure apple quality using basic pomological descriptors. Sensory analysis is expensive, does not permit the analysis of many samples, and cannot be implemented for measuring quality properties in real time. However, sensory analysis is the best way to precisely describe food eating quality, since it is able to define, measure, and explain what is really perceivable by the human senses, using a language that closely reflects consumers' perception. On the basis of these observations, we developed a detailed protocol for apple sensory profiling by descriptive sensory analysis and instrumental measurements. The collected sensory data were validated by applying rigorous scientific criteria for sensory analysis. The method was then applied to study the sensory properties of apples and their changes in relation to different pre- and post-harvest factors affecting fruit quality, and it was shown to be able to discriminate fruit varieties and to highlight differences in sensory properties. The instrumental measurements confirmed these results. Moreover, the correlation between sensory and instrumental data was studied, and a new effective approach was defined for the reliable prediction of sensory properties from instrumental characterisation. It is therefore possible to propose the application of this sensory-instrumental tool to all the stakeholders involved in apple production and marketing, to provide a reliable description of apple fruit quality.
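As a minimal illustration of relating sensory and instrumental data, the sketch below computes a Pearson correlation and fits a least-squares line to predict a sensory score from an instrumental measurement; the attribute names and numbers are invented, and this is a simplification of, not a reproduction of, the prediction approach described in the abstract.

```python
# Minimal sketch: correlate an instrumental measurement with a sensory attribute and
# fit a least-squares line to predict the latter. All data values are invented.
import numpy as np

firmness_kg = np.array([5.1, 6.3, 7.0, 7.8, 8.4])    # hypothetical instrumental values
crunchiness = np.array([3.2, 4.0, 5.1, 5.9, 6.5])     # hypothetical panel scores

r = np.corrcoef(firmness_kg, crunchiness)[0, 1]        # Pearson correlation
slope, intercept = np.polyfit(firmness_kg, crunchiness, deg=1)

print(f"r = {r:.2f}")
print(f"predicted crunchiness at 7.5 kg: {slope * 7.5 + intercept:.2f}")
```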
Abstract:
Theatricality is a common concept for defining theatre in Africa. When this concept is applied, the historical differences between the various forms of theatre recede into the background. It is therefore important to place theatre in the cultural context from which it emerges. This makes it possible to analyse the national and international suprastructures that determine the socio-political and economic climate. Since current "global" development rests on neoliberal principles, it is evident that theatre cannot be discussed without examining neoliberalism, imperialism, capitalism, development aid, and donor policy more closely. At present, most theatre projects in Tanzania are supported by development aid or foreign donor organizations. These organizations provide funding to enable theatre productions at various levels. This funding practice has led to the misconception that theatre is only theatre when it is financed by foreign organizations. It is evident, however, that these funds play a major role in power politics. This study therefore examines the question: what influence does neoliberal policy, in particular through development aid, have on theatre in Tanzania? The study first uncovers the connection between the theatre that is produced and the various dominant political currents, from nationalism to neoliberalism. It further shows that these connections make it difficult for theatre to escape the suprastructures through which it is financed. This means that neoliberal policy, with its features of constriction, oppression, and exploitation, also produces a constricted, oppressed, and exploitative theatre. This study terms such a theatre the theatre of (neo-)liberalism: a theatre that appears apolitical but in fact struggles to survive under the neoliberal policy of the free market and the withdrawal of subsidies. By elucidating these connections between theatre, development aid, and donor organizations, the research reaches the following conclusion: donor organizations have no right, regardless of the size of their contribution, to interfere in the sovereignty of a state or to introduce a new system. Detachment from foreign donor countries should therefore come first, so that theatre can develop fully and survive independently. It is consequently necessary to redefine the concept of popular theatre. Theatre should again be concerned with people's own initiatives and address their own themes within a given temporal and spatial frame.
Abstract:
In any terminological study, candidate term extraction is a very time-consuming task. Corpus analysis tools have automated some processes, allowing the detection of relevant data within the texts and facilitating term candidate selection as well. Nevertheless, these tools are normally not specific to terminology research; therefore, the units that are automatically extracted need manual evaluation. Over the last few years some software products have been specifically developed for automatic term extraction. They are based on corpus analysis, but use linguistic and statistical information to filter data more precisely. As a result, the time needed for manual evaluation is reduced. In this framework, we tried to understand if and how these new tools can really be an advantage. In order to develop our project, we simulated a terminology study: we chose a domain (i.e., the legal framework for medicinal products for human use) and compiled a corpus from which we extracted terms and phraseologisms using AntConc, a corpus analysis tool. Afterwards, we compared our list with the lists extracted automatically by three different tools (TermoStat Web, TaaS, and Sketch Engine) in order to evaluate their performance. In the first chapter we describe some principles relating to terminology and phraseology in language for special purposes and show the advantages offered by corpus linguistics. In the second chapter we illustrate some of the main concepts of the selected domain, as well as some of the main features of legal texts. In the third chapter we describe automatic term extraction and the main criteria for evaluating it; moreover, we introduce the term-extraction tools used for this project. In the fourth chapter we describe our research method and, in the fifth chapter, we present our results and draw some preliminary conclusions on the performance and usefulness of term-extraction tools.
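As a toy illustration of the kind of frequency-based candidate extraction that corpus tools automate before manual evaluation, the sketch below counts unigram and bigram frequencies in a short text after removing stopwords; the stopword list and the sample sentence are invented, and real tools such as those compared in the thesis apply far richer linguistic and statistical filters.

```python
# Toy frequency-based candidate term extraction: count unigrams and bigrams after
# stopword removal. A deliberate simplification of what corpus-analysis tools automate.
import re
from collections import Counter

STOPWORDS = {"the", "of", "for", "and", "a", "an", "in", "to", "is", "are"}

text = ("The marketing authorisation of medicinal products for human use "
        "requires a marketing authorisation application to the competent authority.")

tokens = [t for t in re.findall(r"[a-z]+", text.lower()) if t not in STOPWORDS]
unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))   # note: bigrams may bridge removed stopwords

print(unigrams.most_common(3))
print(bigrams.most_common(3))
```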
Abstract:
PURPOSE: Assessment of language dominance with functional magnetic resonance imaging (fMRI) and neuropsychological evaluation is often used prior to epilepsy surgery. This study explores whether language lateralization and cognitive performance are systematically related in young patients with focal epilepsy. METHODS: Language fMRI and neuropsychological data (language, visuospatial functions, and memory) of 40 patients (7-18 years of age) with unilateral, refractory focal epilepsy in temporal and/or frontal areas of the left (n = 23) or right hemisphere (n = 17) were analyzed. fMRI data of 18 healthy controls (7-18 years) served as a normative sample. A laterality index was computed to determine the lateralization of activation in three regions of interest (frontal, parietal, and temporal). RESULTS: Atypical language lateralization was demonstrated in 12 (30%) of 40 patients. A correlation between language lateralization and verbal memory performance occurred in patients with left-sided epilepsy over all three regions of interest, with bilateral or right-sided language lateralization being correlated with better verbal memory performance (Word Pairs Recall: frontal r = -0.4, p = 0.016; parietal r = -0.4, p = 0.043; temporal r = -0.4, p = 0.041). Verbal memory performance made the largest contribution to language lateralization, whereas handedness and side of seizures did not contribute to the variance in language lateralization. DISCUSSION: This finding reflects the association between neocortical language and hippocampal memory regions in patients with left-sided epilepsy. Atypical language lateralization is advantageous for verbal memory performance, presumably a result of transfer of verbal memory function. In children with focal epilepsy, verbal memory performance provides a better idea of language lateralization than handedness and side of epilepsy and lesion.
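The laterality index referred to above is conventionally computed from the amount of activation in homologous left- and right-hemisphere regions of interest; the sketch below shows the standard formula LI = (L - R) / (L + R) applied to hypothetical activated-voxel counts, with positive values indicating left-lateralized language. The regions and counts are invented for illustration and are not the study's data.

```python
# Standard laterality index LI = (L - R) / (L + R), computed per region of interest
# from activated-voxel counts; the region names and counts below are hypothetical.
def laterality_index(left_voxels: int, right_voxels: int) -> float:
    return (left_voxels - right_voxels) / (left_voxels + right_voxels)


rois = {"frontal": (420, 180), "parietal": (310, 150), "temporal": (280, 260)}
for name, (left, right) in rois.items():
    print(f"{name}: LI = {laterality_index(left, right):+.2f}")  # > 0: left-lateralized
```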
Abstract:
In this paper we present a novel hybrid approach for multimodal medical image registration based on diffeomorphic demons. Diffeomorphic demons have proven to be a robust and efficient approach to intensity-based image registration. A very recent extension even allows mutual information (MI) to be used as a similarity measure to register multimodal images. However, due to the intensity correspondence uncertainty in some anatomical parts, it is difficult for a purely intensity-based algorithm to solve the registration problem. Therefore, we propose to combine the transformations resulting from both intensity-based and landmark-based methods for multimodal non-rigid registration based on diffeomorphic demons. Several experiments on different types of MR images were conducted, for which we show that a better anatomical correspondence between the images can be obtained with the hybrid approach than with either intensity information or landmarks alone.
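As a highly simplified illustration of combining an intensity-driven and a landmark-driven transformation, the sketch below blends two dense displacement fields with a spatially uniform weight; the actual method works in the diffeomorphic demons framework, so this is only a schematic with invented field values, not the paper's algorithm.

```python
# Schematic blend of an intensity-based and a landmark-based displacement field on a
# tiny grid. A uniform weighted average is a simplification of the hybrid scheme.
import numpy as np

shape = (4, 4, 2)                                # 4x4 grid, (dy, dx) displacement per voxel
u_intensity = np.random.default_rng(0).normal(0, 1.0, shape)  # hypothetical demons update
u_landmark = np.zeros(shape)
u_landmark[1, 1] = (2.0, -1.0)                   # hypothetical landmark-driven displacement

alpha = 0.5                                      # relative confidence in the landmark term
u_hybrid = (1 - alpha) * u_intensity + alpha * u_landmark
print(u_hybrid[1, 1])
```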
Abstract:
A patient's language, traditions, conventions, and customs may all determine integration into a society and are also part of the doctor-patient relationship that influences diagnostic and therapeutic outcomes. Language barriers and the sociocultural disparity of Eastern and Southern European patients may hamper recovery from pain and depression compared with Central European patients in Switzerland.
Abstract:
Statistical models have recently been introduced in computational orthopaedics to investigate bone mechanical properties across several populations. A fundamental aspect of the construction of statistical models concerns the establishment of accurate anatomical correspondences among the objects of the training dataset. Various methods have been proposed to solve this problem, such as mesh morphing or image registration algorithms. The objective of this study is to compare a mesh-based and an image-based statistical appearance model approach for the creation of finite element (FE) meshes. A computed tomography (CT) dataset of 157 human left femurs was used for the comparison. For each approach, 30 finite element meshes were generated with the models. The quality of the obtained FE meshes was evaluated in terms of the volume, size, and shape of the elements. Results showed that the quality of the meshes obtained with the image-based approach was higher than that of the mesh-based approach. Future studies are required to evaluate the impact of this finding on the final mechanical simulations.
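As a concrete example of element-level quality measures of the kind mentioned (volume, size, shape), the sketch below computes the volume of a tetrahedral element and a simple radius-ratio-style shape indicator that equals 1 for a regular tetrahedron; these are generic illustrative metrics, not necessarily the specific criteria used in the study.

```python
# Volume and a simple shape-quality indicator for a tetrahedral finite element.
# quality = 6*sqrt(2)*V / l_rms^3 equals 1 for a regular tetrahedron and approaches 0
# for degenerate (flat) elements. Generic metrics, not those of the cited study.
import numpy as np


def tet_volume(a, b, c, d):
    return abs(np.linalg.det(np.stack([b - a, c - a, d - a]))) / 6.0


def tet_quality(a, b, c, d):
    verts = [a, b, c, d]
    edges = [np.linalg.norm(verts[i] - verts[j]) for i in range(4) for j in range(i + 1, 4)]
    l_rms = np.sqrt(np.mean(np.square(edges)))
    return 6.0 * np.sqrt(2.0) * tet_volume(a, b, c, d) / l_rms ** 3


# Regular tetrahedron -> quality close to 1.
p = [np.array(v, dtype=float) for v in [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]]
print(round(tet_quality(*p), 3))
```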
Abstract:
Toll-like receptors (TLRs) recognize a variety of ligands, including pathogen-associated molecular patterns, and link innate and adaptive immunity. Individual receptors can be up-regulated during infection and inflammation. We examined the expression of selected TLRs at the protein level in various types of renal disease.
Abstract:
Telomeres are protective structures at the ends of eukaryotic chromosomes. The loss of telomeres through cell division and oxidative stress is related to cellular aging, organismal growth and disease. In this way, telomeres link molecular and cellular mechanisms with organismal processes, and may explain variation in a number of important life-history traits. Here, we discuss how telomere biology relates to the study of physiological ecology and life history evolution. We emphasize current knowledge on how telomeres may relate to growth, survival and lifespan in natural populations. We finish by examining interesting new connections between telomeres and the glucocorticoid stress response. Glucocorticoids are often employed as indices of physiological condition, and there is evidence that the glucocorticoid stress response is adaptive. We suggest that one way that glucocorticoids impact organismal survival is through elevated oxidative stress and telomere loss. Future work needs to establish and explore the link between the glucocorticoid stress response and telomere shortening in natural populations. If a link is found, it provides an explanatory mechanism by which environmental perturbation impacts life history trajectories.