953 results for Data encryption (Computer science)
Abstract:
This project consists of designing and implementing an information system hosted in an Oracle database, in order to support the Big Data project, whose objective is to cross-reference the health data and the physical activity data of European citizens.
Abstract:
This thesis considers aspects related to the design and standardisation of transmission systems for wireless broadcasting, comprising terrestrial and mobile reception. The purpose is to identify which factors influence the technical decisions and what issues could be better considered in the design process in order to assess different use cases, service scenarios and end-user quality. Further, the necessity of cross-layer optimisation for efficient data transmission is emphasised and means to take this into consideration are suggested. The work is mainly related to terrestrial and mobile digital video broadcasting systems, but many of the findings can also be generalised to other transmission systems and design processes. The work has led to three main conclusions. First, it is discovered that there are no sufficiently accurate error criteria for measuring the subjective perceived audiovisual quality that could be utilised in transmission system design. Means for designing new error criteria for mobile TV (television) services are suggested and similar work related to other services is recommended. Second, it is suggested that in addition to commercial requirements there should be technical requirements setting the framework for the design process of a new transmission system. The technical requirements should include the assessed reception conditions, technical quality of service and service functionalities. Reception conditions comprise radio channel models, receiver types and antenna types. Technical quality of service consists of bandwidth, timeliness and reliability. Of these, the thesis focuses on radio channel models and error criteria (reliability) as two of the most important design challenges and provides means to optimise transmission parameters based on these. Third, the thesis argues that the most favourable development for wireless broadcasting would be a single system suitable for all scenarios of wireless broadcasting. It is claimed that there are no major technical obstacles to achieving this and that the recently published second-generation digital terrestrial television broadcasting system provides a good basis. The challenges and opportunities of a universal wireless broadcasting system are discussed mainly from technical but briefly also from commercial and regulatory aspects.
Abstract:
In this final degree project (TFC), a data warehouse will be developed in order to exploit data coming from Twitter about a particular collective event.
Abstract:
The proposed transdisciplinary field of ‘complexics’ would bring together all contemporary efforts in any specific disciplines or by any researchers specifically devoted to constructing tools, procedures, models and concepts intended for transversal application that are aimed at understanding and explaining the most interwoven and dynamic phenomena of reality. Our aim needs to be, as Morin says, not “to reduce complexity to simplicity, [but] to translate complexity into theory”. New tools for the conception, apprehension and treatment of the data of experience will need to be devised to complement existing ones and to enable us to make headway toward practices that better fit complexic theories. New mathematical and computational contributions have already continued to grow in number, thanks primarily to scholars in statistical physics and computer science, who are now taking an interest in social and economic phenomena. Certainly, these methodological innovations put into question and again make us take note of the excessive separation between the training received by researchers in the ‘sciences’ and in the ‘arts’. Closer collaboration between these two subsets would, in all likelihood, be much more energising and creative than their current mutual distance. Human complexics must be seen as multi-methodological, insofar as necessary combining quantitative-computation methodologies and more qualitative methodologies aimed at understanding the mental and emotional world of people. In the final analysis, however, models always have a narrative running behind them that reflects the attempts of a human being to understand the world, and models are always interpreted on that basis.
Abstract:
In recent years the construction sector has experienced exponential growth. This growth has affected many aspects: from the need for more personnel on site, and the setting up of offices to manage the accounting and keep control of the works, to the need for specific software that helps carry out the work as comfortably and efficiently as possible. The project that has been carried out covers one of these needs, namely the management of the budgets for the different works that builders carry out. It uses the database of the ITEC (Institut de Tecnologia de la Construcció de Catalunya), on which the vast majority of architects work when they design their projects, but it also allows the builder to enter their own data. The user of the application can prepare budgets for new construction works, renovations and so on, grouping each of them into chapters. These chapters can be understood as the different phases to be carried out, for example: building the foundations, raising the walls or making the roof. Within the chapters we find the work items ("partides"), each of which is a set of materials, hours of labour and machinery needed to carry out a part of the work, such as building a partition wall between rooms. In this case we would have the different materials we would need, bricks, mortar; the hours of labour needed to build it, the transport of all the material to the site, and so on. All these parameters (materials, hours, transport...) are called articles and are included within the work items. This application is designed to run in a client/server environment, using Linux OpenSuse 10.2 as the server and Windows XP workstations as clients, although other versions of Microsoft operating systems could also be used. The development environment used is the FDS language, which comes with an integrated file manager, and that is what will be used.
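To make the budget hierarchy described in the abstract concrete, here is a minimal illustrative sketch in Python (the original application is written in FDS and uses the ITEC database; the class and field names below are hypothetical): a budget is divided into chapters, chapters into work items ("partides"), and work items into priced articles such as materials, labour hours and transport.

# Minimal illustrative sketch of the budget hierarchy (hypothetical names,
# not the FDS application itself).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Article:                 # material, labour hours, machinery, transport...
    name: str
    quantity: float
    unit_price: float
    def cost(self) -> float:
        return self.quantity * self.unit_price

@dataclass
class WorkItem:                # "partida": one part of the work, e.g. a partition wall
    name: str
    articles: List[Article] = field(default_factory=list)
    def cost(self) -> float:
        return sum(a.cost() for a in self.articles)

@dataclass
class Chapter:                 # a construction phase, e.g. foundations or roofing
    name: str
    items: List[WorkItem] = field(default_factory=list)
    def cost(self) -> float:
        return sum(i.cost() for i in self.items)

@dataclass
class Budget:
    project: str
    chapters: List[Chapter] = field(default_factory=list)
    def total(self) -> float:
        return sum(c.cost() for c in self.chapters)

wall = WorkItem("Partition wall", [Article("Bricks", 200, 0.4), Article("Labour (h)", 6, 22.0)])
budget = Budget("New build", [Chapter("Masonry", [wall])])
print(budget.total())   # 200*0.4 + 6*22 = 212.0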
Abstract:
Broadcasting systems are networks where the transmission is received by several terminals. Generally broadcast receivers are passive devices in the network, meaning that they do not interact with the transmitter. Providing a certain Quality of Service (QoS) for the receivers in a heterogeneous reception environment with no feedback is not an easy task. Forward error control coding can be used for protection against transmission errors to enhance the QoS for broadcast services. For good performance in terrestrial wireless networks, diversity should be utilized. The diversity is utilized by applying interleaving together with the forward error correction codes. In this dissertation the design and analysis of forward error control and control signaling for providing QoS in wireless broadcasting systems are studied. Control signaling is used in broadcasting networks to give the receiver the necessary information on how to connect to the network itself and how to receive the services that are being transmitted. Usually control signaling is considered to be transmitted through a dedicated path in the system. Therefore, the relationship of the signaling and service data paths should be considered early in the design phase. Modeling and simulations are used in the case studies of this dissertation to study this relationship. This dissertation begins with a survey on the broadcasting environment and mechanisms for providing QoS therein. Then case studies present analysis and design of such mechanisms in real systems. The mechanisms for providing QoS considering signaling and service data paths and their relationship at the DVB-H link layer are analyzed as the first case study. In particular, the performance of different service data decoding mechanisms and optimal signaling transmission parameter selection are presented. The second case study investigates the design of signaling and service data paths for the more modern DVB-T2 physical layer. Furthermore, by comparing the performance of the signaling and service data paths by simulations, configuration guidelines for the DVB-T2 physical layer signaling are given. The presented guidelines can prove useful when configuring DVB-T2 transmission networks. Finally, recommendations for the design of data and signaling paths are given based on findings from the case studies. The requirements for the signaling design should be derived from the requirements for the main services. Generally, the requirements for signaling should be more demanding, as the signaling is the enabler for service reception.
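As a hedged illustration of the diversity idea mentioned in the abstract (interleaving combined with forward error correction so that burst errors are spread across many code words), the following Python sketch shows a simple block interleaver; it is illustrative only and not the DVB-H or DVB-T2 interleaver.

# Minimal illustrative sketch: a block interleaver. Symbols are written row by
# row and read column by column, so a burst of consecutive channel errors is
# spread over many FEC code words.
def interleave(symbols, rows, cols):
    assert len(symbols) == rows * cols
    matrix = [symbols[r * cols:(r + 1) * cols] for r in range(rows)]
    return [matrix[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    matrix = [symbols[c * rows:(c + 1) * rows] for c in range(cols)]
    return [matrix[c][r] for r in range(rows) for c in range(cols)]

data = list(range(12))
tx = interleave(data, rows=3, cols=4)
assert deinterleave(tx, rows=3, cols=4) == data   # round trip restores the order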
Abstract:
In this thesis, simple methods have been sought to lower the teacher’s threshold to start applying constructive alignment in instruction. From the phases of the instructional process, aspects that can be improved with little effort by the teacher have been identified. Teachers have been interviewed in order to find out what students actually learn in computer science courses. A quantitative analysis of the structured interviews showed that in addition to subject-specific skills and knowledge, students learn many other skills that should be mentioned in the learning outcomes of the course. The students’ background, such as their prior knowledge, learning style and culture, affects how they learn in a course. A survey was conducted to map the learning styles of computer science students and to see if their cultural background affected their learning style. A statistical analysis of the data indicated that computer science students are different learners from engineering students in general and that there is a connection between a student’s culture and learning style. In this thesis, a simple self-assessment scale based on Bloom’s revised taxonomy has been developed. A statistical analysis of the test results indicates that in general the scale is quite reliable, but individual students may still slightly overestimate or underestimate their knowledge levels. For students, being able to follow their own progress is motivating, and for the teacher, self-assessment results give information about how the class is proceeding and what the level of the students’ knowledge is.
Abstract:
This thesis belongs to the field of string algorithmics. A string S is a common subsequence of strings X[1..m] and Y[1..n] if it can be obtained by deleting 0..m characters from X and 0..n characters from Y at arbitrary positions. If no common subsequence of X and Y is longer than S, then S is said to be a longest common subsequence (LCS; in Finnish, PYA) of X and Y. This work focuses on solving the LCS problem for two strings, but the problem can also be generalised to more sequences. The LCS problem has applications not only in subfields of computer science but also in bioinformatics. The best known of these are text and image compression, file version control, pattern recognition, and comparative research on the structure of DNA and protein chains. Solving the problem is made difficult by the dependence of the algorithms on several parameters of the input strings. Besides the lengths of the inputs, these include the size of the input alphabet, the character distribution of the inputs, the ratio of the LCS length to the length of the shorter input string, and the number of matching character pairs. It is therefore hard to develop an algorithm that would perform efficiently on all instances of the problem. The thesis is intended, on the one hand, to serve as a handbook which, after describing the basic concepts of the problem, surveys previously developed exact LCS algorithms. Their treatment is grouped by the processing model of the algorithm: row by row, contour by contour, diagonal by diagonal, or processing in multiple directions. In addition to the exact methods, heuristic methods that compute an upper or lower bound on the LCS length are presented; their results can be used either as such or to guide the execution of an exact algorithm. This part is based on articles published by our research group, which treat for the first time exact methods enhanced with heuristics. On the other hand, the work includes a fairly extensive empirical study, whose goal has been to improve the running time and memory usage of existing exact algorithms. This goal has been pursued through programming techniques, by introducing data structures that support the algorithms' processing model well and by limiting the fruitless computation performed by the algorithms, improving their ability to observe intermediate results obtained during execution and to exploit them. As general conclusions of the thesis, heuristic preprocessing of exact LCS algorithms almost systematically reduces their running time and, in particular, their memory requirements. Moreover, the data structure used by an algorithm has a decisive effect on the efficiency of the computation: the more local the search and update operations are, the more efficient the computation performed by the algorithm is.
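For reference, the classic quadratic-time dynamic-programming recurrence for the LCS length, the baseline that the exact and heuristic algorithms surveyed in the thesis improve upon, can be sketched in Python as follows (illustrative only, not one of the thesis's optimised algorithms):

# Minimal illustrative sketch: O(m*n) time, O(n) space dynamic programming for
# the length of the longest common subsequence (LCS) of X and Y.
def lcs_length(X: str, Y: str) -> int:
    m, n = len(X), len(Y)
    prev = [0] * (n + 1)          # LCS lengths for X[:i-1] against prefixes of Y
    for i in range(1, m + 1):
        curr = [0] * (n + 1)
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                curr[j] = prev[j - 1] + 1            # characters match: extend
            else:
                curr[j] = max(prev[j], curr[j - 1])  # skip a character from X or Y
        prev = curr
    return prev[n]

print(lcs_length("ABCBDAB", "BDCABA"))   # prints 4 (e.g. "BCBA")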
Abstract:
The ongoing global financial crisis has demonstrated the importance of a systemwide, or macroprudential, approach to safeguarding financial stability. An essential part of macroprudential oversight concerns the tasks of early identification and assessment of risks and vulnerabilities that eventually may lead to a systemic financial crisis. Thriving tools are crucial as they allow early policy actions to decrease or prevent further build-up of risks or to otherwise enhance the shock absorption capacity of the financial system. In the literature, three types of systemic risk can be identified: (i) build-up of widespread imbalances, (ii) exogenous aggregate shocks, and (iii) contagion. Accordingly, the systemic risks are matched by three categories of analytical methods for decision support: (i) early-warning, (ii) macro stress-testing, and (iii) contagion models. Stimulated by the prolonged global financial crisis, today's toolbox of analytical methods includes a wide range of innovative solutions to the two tasks of risk identification and risk assessment. Yet, the literature lacks a focus on the task of risk communication. This thesis discusses macroprudential oversight from the viewpoint of all three tasks: within analytical tools for risk identification and risk assessment, the focus concerns a tight integration of means for risk communication. Data and dimension reduction methods, and their combinations, hold promise for representing multivariate data structures in easily understandable formats. The overall task of this thesis is to represent high-dimensional data concerning financial entities on low-dimensional displays. The low-dimensional representations have two subtasks: (i) to function as a display for individual data concerning entities and their time series, and (ii) to use the display as a basis to which additional information can be linked. The final nuance of the task is, however, set by the needs of the domain, data and methods. The following five questions comprise subsequent steps addressed in the process of this thesis: 1. What are the needs for macroprudential oversight? 2. What form do macroprudential data take? 3. Which data and dimension reduction methods hold most promise for the task? 4. How should the methods be extended and enhanced for the task? 5. How should the methods and their extensions be applied to the task? Based upon the Self-Organizing Map (SOM), this thesis not only creates the Self-Organizing Financial Stability Map (SOFSM), but also lays out a general framework for mapping the state of financial stability. This thesis also introduces three extensions to the standard SOM for enhancing the visualization and extraction of information: (i) fuzzifications, (ii) transition probabilities, and (iii) network analysis. Thus, the SOFSM functions as a display for risk identification, on top of which risk assessments can be illustrated. In addition, this thesis puts forward the Self-Organizing Time Map (SOTM) to provide means for visual dynamic clustering, which in the context of macroprudential oversight concerns the identification of cross-sectional changes in risks and vulnerabilities over time. Rather than automated analysis, the aim of visual means for identifying and assessing risks is to support disciplined and structured judgmental analysis based upon policymakers' experience and domain intelligence, as well as external risk communication.
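As a hedged illustration of the underlying method, the following Python sketch shows one simplified training loop of a standard Self-Organizing Map that projects high-dimensional indicator vectors onto a two-dimensional grid; it is not the SOFSM or SOTM themselves, and the data, grid size and learning parameters are hypothetical.

# Minimal illustrative sketch: a basic Self-Organizing Map (constant learning
# rate and neighbourhood width for simplicity; real SOMs decay both over time).
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr=0.5, sigma=2.0, seed=0):
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = data.shape[1]
    weights = rng.random((h, w, dim))            # codebook vectors on the grid
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for _ in range(epochs):
        for x in rng.permutation(data):
            # best-matching unit: grid node whose codebook vector is closest to x
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), (h, w))
            # Gaussian neighbourhood pulls the BMU and nearby nodes towards x
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=2)
            influence = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            weights += lr * influence[..., None] * (x - weights)
    return weights

# Usage: hypothetical macro-financial indicators per entity-quarter mapped to a
# 2-D display for visual risk identification.
indicators = np.random.default_rng(1).random((200, 14))
som = train_som(indicators)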
Abstract:
The horse industry is in many ways still operating the same way as it did at the beginning of the 20th century. At the same time the role of the horse has changed dramatically, from a beast of burden to a top athlete, a production animal or a beloved pet. A racehorse or an equestrian sport horse is trained and taken care of like any other athlete, but unlike its human counterpart, it might end up on our plate. According to European and many other countries’ laws, a horse is a production animal. The medical data of a horse should be known if it is to be slaughtered, to ensure that the meat is safe for human consumption. Today this vital medical information should be noted in the horse’s passport, but this paper-based system is not reliable. If a horse gets sold, depending on the country’s laws, the medical records might not be transferred to the new owner, the horse’s passport might get lost, and so on. Thus the system is not foolproof. It is not only the horse owners who have to struggle with paperwork; veterinarians as well as other officials often spend much time on redundant paperwork. The main research question of this thesis is whether IS could be used to help the different stakeholders within the horse industry. Veterinarians in particular, who travel to stables to treat horses, cannot always take their computers with them, since the somewhat unsanitary environment is not suitable for a sensitive technological device. Currently there is no common medical database developed for horses, although such a database with a support system could help with many problems. These include vaccination and disease control, food safety, as well as export and import problems. The main stakeholders within the horse industry, including equine veterinarians and horse owners, were studied to find out their daily routines and needs for a possible support system. The research showed that there are different aspects within the horse industry where IS could be used to support the stakeholders’ daily routines. Thus a support system including web and mobile accessibility for the main stakeholders is under development. Since veterinarians will be the main users of this support system, it is very important to make sure that they find it useful and beneficial in their daily work. To ensure a desired result, the research and development of the system has been done iteratively with the stakeholders, following the Action Design Research methodology.
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult as the current popular programming languages are inherently sequential and introducing parallelism is typically up to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent a data dependency in the form of a queue. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, the node can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field. Digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications are, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables an improved utilization of available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable, with minimal scheduling overhead, to dynamic, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then an, as small as possible, set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies are defined which are able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is, in the context of design space exploration, optimized by the development tools to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
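A minimal illustrative sketch of the dataflow model described above, written in plain Python rather than RVC-CAL: actors communicate only through FIFO queues and fire once every input port holds a token, while a simple fixed firing order plays the role of a pre-calculated static schedule (all names are hypothetical).

# Minimal illustrative sketch: dataflow actors connected by FIFO queues.
from collections import deque

class Actor:
    """A dataflow node: fires when every input queue has a token."""
    def __init__(self, fn, n_inputs):
        self.fn = fn
        self.inputs = [deque() for _ in range(n_inputs)]
        self.outputs = []                     # downstream input queues

    def can_fire(self):
        return all(self.inputs)               # a token available on each port

    def fire(self):
        tokens = [q.popleft() for q in self.inputs]
        result = self.fn(*tokens)
        for q in self.outputs:
            q.append(result)

# Build a tiny pipeline: scale -> offset.
scale = Actor(lambda x: 2 * x, n_inputs=1)
offset = Actor(lambda x: x + 1, n_inputs=1)
scale.outputs.append(offset.inputs[0])
sink = deque()
offset.outputs.append(sink)

# Feed tokens and run a simple static schedule (fire each actor in turn).
for token in range(5):
    scale.inputs[0].append(token)
for _ in range(5):
    for actor in (scale, offset):
        if actor.can_fire():
            actor.fire()

print(list(sink))   # [1, 3, 5, 7, 9]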
Abstract:
In recent decades, business intelligence (BI) has gained momentum in real-world practice. At the same time, business intelligence has evolved as an important research subject of Information Systems (IS) within the decision support domain. Today’s growing competitive pressure in business has led to increased needs for real-time analytics, i.e., so-called real-time BI or operational BI. This is especially true with respect to the electricity production, transmission, distribution, and retail business, since the laws of physics dictate that electricity as a commodity is nearly impossible to store economically, and therefore demand and supply need to be constantly in balance. The current power sector is subject to complex changes, innovation opportunities, and technical and regulatory constraints. These range from the low-carbon transition, renewable energy sources (RES) development, and market design to new technologies (e.g., smart metering, smart grids, electric vehicles, etc.) and new independent power producers (e.g., commercial buildings or households with rooftop solar panel installations, a.k.a. Distributed Generation). Among them, the ongoing deployment of Advanced Metering Infrastructure (AMI) has profound impacts on the electricity retail market. From the viewpoint of BI research, AMI is enabling real-time or near real-time analytics in the electricity retail business. Following the Design Science Research (DSR) paradigm in the IS field, this research presents four aspects of BI for efficient pricing in a competitive electricity retail market: (i) visual data-mining based descriptive analytics, namely electricity consumption profiling, for pricing decision-making support; (ii) a real-time BI enterprise architecture for enhancing management’s capacity for real-time decision-making; (iii) prescriptive analytics through agent-based modeling for price-responsive demand simulation; (iv) a visual data-mining application for electricity distribution benchmarking. Even though this study is from the perspective of the European electricity industry, particularly focused on Finland and Estonia, the BI approaches investigated can: (i) provide managerial implications to support the utility’s pricing decision-making; (ii) add empirical knowledge to the landscape of BI research; (iii) be transferred to a wide body of practice in the power sector and the BI research community.
Abstract:
Technological innovations, the development of the internet, and globalization have increased the number and complexity of web applications. As a result, keeping web user interfaces understandable and usable (in terms of ease of use, effectiveness, and satisfaction) is a challenge. As part of this, designing user-intuitive interface signs (i.e., the small elements of a web user interface, e.g., navigational links, command buttons, icons, small images, thumbnails, etc.) is an issue for designers. Interface signs are key elements of web user interfaces because they act as communication artefacts to convey web content and system functionality, and because users interact with systems by means of interface signs. In the light of the above, applying semiotic (i.e., the study of signs) concepts to web interface signs will help to discover new and important perspectives on web user interface design and evaluation. The thesis mainly focuses on web interface signs and uses the theory of semiotics as a background theory. The underlying aim of this thesis is to provide valuable insights for designing and evaluating web user interfaces from a semiotic perspective in order to improve overall web usability. The fundamental research question is formulated as: What do practitioners and researchers need to be aware of from a semiotic perspective when designing or evaluating web user interfaces to improve web usability? From a methodological perspective, the thesis follows a design science research (DSR) approach. A systematic literature review and six empirical studies are carried out in this thesis. The empirical studies are carried out with a total of 74 participants in Finland. The steps of a design science research process are followed while the studies were designed and conducted; these include (a) problem identification and motivation, (b) definition of the objectives of a solution, (c) design and development, (d) demonstration, (e) evaluation, and (f) communication. The data is collected using observations in a usability testing lab, by analytical (expert) inspection, with questionnaires, and in structured and semi-structured interviews. User behaviour analysis, qualitative analysis and statistics are used to analyze the study data. The results are summarized as follows and have led to the following contributions. Firstly, the results present the current status of semiotic research in UI design and evaluation and highlight the importance of considering semiotic concepts in UI design and evaluation. Secondly, the thesis explores interface sign ontologies (i.e., sets of concepts and skills that a user should know to interpret the meaning of interface signs) by providing a set of ontologies used to interpret the meaning of interface signs, and by providing a set of features related to ontology mapping in interpreting the meaning of interface signs. Thirdly, the thesis explores the value of integrating semiotic concepts in usability testing. Fourthly, the thesis proposes a semiotic framework (Semiotic Interface sign Design and Evaluation – SIDE) for interface sign design and evaluation in order to make interface signs intuitive for end users and to improve web usability. The SIDE framework includes a set of determinants and attributes of user-intuitive interface signs, and a set of semiotic heuristics to design and evaluate interface signs. Finally, the thesis assesses (a) the quality of the SIDE framework in terms of performance metrics (e.g., thoroughness, validity, effectiveness, reliability, etc.) and (b) the contributions of the SIDE framework from the evaluators’ perspective.
Abstract:
Human activity recognition in everyday environments is a critical but challenging task in Ambient Intelligence applications to achieve proper Ambient Assisted Living, and key challenges still remain to be dealt with to realize robust methods. One of the major limitations of Ambient Intelligence systems today is the lack of semantic models of those activities in the environment, so that the system can recognize the specific activity being performed by the user(s) and act accordingly. In this context, this thesis addresses the general problem of knowledge representation in Smart Spaces. The main objective is to develop knowledge-based models, equipped with semantics, to learn, infer and monitor human behaviours in Smart Spaces. Moreover, it is easy to recognize that some aspects of this problem have a high degree of uncertainty, and therefore the developed models must be equipped with mechanisms to manage this type of information. A fuzzy ontology and a semantic hybrid system are presented to allow modelling and recognition of a set of complex real-life scenarios where vagueness and uncertainty are inherent to the human nature of the users that perform them. The handling of uncertain, incomplete and vague data (i.e., missing sensor readings and activity execution variations, since human behaviour is non-deterministic) is approached for the first time through a fuzzy ontology validated in real-time settings within a hybrid data-driven and knowledge-based architecture. The semantics of activities, sub-activities and real-time object interaction are taken into consideration. The proposed framework consists of two main modules: the low-level sub-activity recognizer and the high-level activity recognizer. The first module detects sub-activities (i.e., actions or basic activities) taking input data directly from a depth sensor (Kinect). The main contribution of this thesis tackles the second component of the hybrid system, which lies on top of the previous one at a superior level of abstraction, acquires its input data from the first module's output, and executes ontological inference to provide users, activities and their influence in the environment with semantics. This component is thus knowledge-based, and a fuzzy ontology was designed to model the high-level activities. Since activity recognition requires context-awareness and the ability to discriminate among activities in different environments, the semantic framework allows for modelling common-sense knowledge in the form of a rule-based system that supports expressions close to natural language in the form of fuzzy linguistic labels. The framework's advantages have been evaluated with a challenging new public dataset, CAD-120, achieving an accuracy of 90.1% and 91.1% for low- and high-level activities, respectively. This entails an improvement over both entirely data-driven approaches and merely ontology-based approaches. As an added value, so that the system is sufficiently simple and flexible to be managed by non-expert users, and thus to facilitate the transfer of research to industry, a development framework composed of a programming toolbox, a hybrid crisp and fuzzy architecture, and graphical models to represent and configure human behaviour in Smart Spaces was developed in order to provide the framework with more usability in the final application. As a result, human behaviour recognition can help assist people with special needs, such as in healthcare, independent elderly living, remote rehabilitation monitoring, industrial process guideline control, and many other cases. This thesis shows use cases in these areas.
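As a hedged illustration of the fuzzy linguistic labels and rule-based reasoning mentioned in the abstract, the Python sketch below grades sensor-derived features with fuzzy memberships and combines them into a confidence for a high-level activity; it is illustrative only, not the thesis's fuzzy ontology or the CAD-120 pipeline, and the labels, thresholds and rule are hypothetical.

# Minimal illustrative sketch: fuzzy linguistic labels over sensor features and
# one rule for a high-level activity (hypothetical labels and thresholds).
def near(distance_m):                # membership in "hand is near the cup"
    return max(0.0, min(1.0, (0.5 - distance_m) / 0.5))

def long_duration(seconds):          # membership in "interaction lasts long"
    return max(0.0, min(1.0, seconds / 10.0))

def drinking_confidence(distance_m, seconds):
    # Rule: IF hand near cup AND interaction long THEN activity is "drinking".
    # Fuzzy AND is taken as the minimum of the memberships.
    return min(near(distance_m), long_duration(seconds))

print(drinking_confidence(0.1, 6.0))   # 0.6: partial support for "drinking"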