877 results for: pervasive computing, home intelligence, context-awareness, domotica, prolog, tuProlog, sensori
Abstract:
A link between patterns of pelvic growth and human life history is supported by the finding that, cross-culturally, variation in the maturation rate of the female pelvis is correlated with variation in the ages of menarche and first reproduction; that is, it is well known that the dimensions of the human pelvic bones depend on sex and vary with age. Indeed, one feature in which humans appear to be unique is the prolonged growth of the pelvis after the age of sexual maturity. Both the total superoinferior length and the mediolateral breadth of the pelvis continue to grow markedly after puberty and do not reach adult proportions until the late teens. This continuation of growth is accomplished by the relatively late fusion of the separate centers of ossification that form the bones of the pelvis. Hence, this work focuses on the development of an intelligent decision support system to predict an individual's age based on pelvic dimension criteria. Basic image processing techniques were applied to extract the relevant features from pelvic X-rays, with the computational framework built on top of a Logic Programming approach to Knowledge Representation and Reasoning that caters for the handling of incomplete, unknown, or even self-contradictory information, complemented with a Case-Based approach to computing.
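As a reading aid only, here is a minimal sketch of the case-based step described above: estimating age by retrieving the most similar stored cases of pelvic measurements. The feature set, the toy case base, and the choice of k are illustrative assumptions, not values from the thesis, which builds on Logic Programming rather than plain Python.

```python
# Minimal case-based reasoning sketch: estimate age from pelvic measurements
# by retrieving the most similar past cases (illustrative data and features only).
import numpy as np

# Hypothetical case base: (superoinferior length mm, mediolateral breadth mm) -> known age
case_base = np.array([[190.0, 245.0], [205.0, 260.0], [212.0, 271.0], [220.0, 280.0]])
case_ages = np.array([12.0, 15.0, 17.0, 19.0])

def estimate_age(query, k=3):
    """Return the mean age of the k cases closest to the query measurements."""
    distances = np.linalg.norm(case_base - np.asarray(query), axis=1)
    nearest = np.argsort(distances)[:k]
    return case_ages[nearest].mean()

print(estimate_age([210.0, 268.0]))  # ~17 years with this toy case base
```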
Abstract:
Dyscalculia is usually perceived as a specific learning difficulty for mathematics or, more precisely, arithmetic, and definitions and diagnoses of dyscalculia are still in their infancy and sometimes contradictory. However, mathematical learning difficulties are certainly not in their infancy; they are very prevalent and often devastating in their impact. Co-occurrence of learning disorders appears to be the rule rather than the exception. Co-occurrence is generally assumed to be a consequence of risk factors that are shared between disorders, for example working memory. However, it should not be assumed that all dyslexics have problems with mathematics, although the percentage may be very high, or that all dyscalculics have problems with reading and writing. Because mathematics is very developmental, any insecurity or uncertainty in early topics will impact on later topics, hence the need to take intervention back to basics. Dyscalculia may, however, be worked on in order to decrease its degree of severity. For example, disMAT, an app developed for Android, may help children apply mathematical concepts without much effort, which makes it in itself a promising tool for dyscalculia treatment. Thus, this work focuses on the development of a Decision Support System to estimate children's evidence of dyscalculia, based on data obtained on-the-fly with disMAT. The computational framework is built on top of a Logic Programming approach to Knowledge Representation and Reasoning, grounded on a Case-Based approach to computing that allows for the handling of incomplete, unknown, or even self-contradictory information.
Abstract:
This paper presents the results of a qualitative study aimed at analyzing the teacher's role in promoting awareness and management of emotions in fifth-graders, as competencies of emotional intelligence. The study is significant because, from a psychopedagogic perspective, it aims at breaking with the traditional role of teachers as exclusively focused on transmitting knowledge, leaving aside much-needed emotional support. Children demonstrated a poor vocabulary, as well as difficulty in identifying some emotions and differentiating between them, which limits their ability to be aware of their own emotions and to control them. In conclusion, maximizing the emotional capacities of students should be a primary task in education centers, where teachers play a key role as models and promoters of emotional intelligence.
Abstract:
Internet of Things systems are pervasive systems that have evolved from cyber-physical to large-scale systems. Due to the number of technologies involved, their software development poses several integration challenges. Among them, the ones preventing proper integration are those related to system heterogeneity, and thus to interoperability. From a software engineering perspective, developers mostly experience the lack of interoperability in two phases of software development: programming and deployment. On the one hand, modern software tends to be distributed across several components, each adopting its most appropriate technology stack, pushing programmers to code in a protocol- and data-agnostic way. On the other hand, each software component should run in the most appropriate execution environment and, as a result, system architects strive to automate deployment in distributed infrastructures. This dissertation aims to improve the development process by introducing proper tools to handle certain aspects of system heterogeneity. Our effort focuses on three of these aspects and, for each of them, we propose a tool addressing the underlying challenge. The first tool handles heterogeneity at the transport and application protocol level, the second manages different data formats, and the third targets optimal deployment. To realize the tools, we adopted a linguistic approach, i.e., we provided specific linguistic abstractions that help developers increase the expressive power of the programming language they use, writing better solutions in more straightforward ways. To validate the approach, we implemented use cases showing that the tools can be used in practice and that they help achieve the expected level of interoperability. In conclusion, to move a step towards the realization of an integrated Internet of Things ecosystem, we target programmers and architects and propose that they use the presented tools to ease the software development process.
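To make the protocol-agnostic idea concrete, here is a minimal sketch (not the dissertation's actual abstractions) in which application code depends only on an abstract Channel while interchangeable transports hide the protocol details; the class names and the HTTP endpoint are invented for illustration.

```python
# Minimal illustration of protocol-agnostic messaging: application code depends on
# an abstract Channel, while concrete transports can be swapped without code changes.
from abc import ABC, abstractmethod
import json
import queue
import urllib.request

class Channel(ABC):
    @abstractmethod
    def send(self, payload: dict) -> None: ...

class InMemoryChannel(Channel):
    """Transport for local testing: messages simply go into a queue."""
    def __init__(self):
        self.messages = queue.Queue()
    def send(self, payload: dict) -> None:
        self.messages.put(payload)

class HttpChannel(Channel):
    """Transport posting JSON to an HTTP endpoint (the URL is illustrative)."""
    def __init__(self, url: str):
        self.url = url
    def send(self, payload: dict) -> None:
        data = json.dumps(payload).encode()
        req = urllib.request.Request(self.url, data=data,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

def publish_reading(channel: Channel, sensor_id: str, value: float) -> None:
    # Application logic stays the same regardless of the underlying protocol.
    channel.send({"sensor": sensor_id, "value": value})

publish_reading(InMemoryChannel(), "temp-01", 21.5)
```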
Abstract:
Although the debate about what data science is has a long history and has not yet reached complete consensus, data science can be summarized as the process of learning from data. Guided by this vision, this thesis presents two independent data science projects developed in the scope of multidisciplinary applied research. The first part analyzes fluorescence microscopy images typically produced in life science experiments, where the objective is to count how many marked neuronal cells are present in each image. Aiming to automate the task to support research in the area, we propose a neural network architecture tuned specifically for this use case, cell ResUnet (c-ResUnet), and discuss the impact of alternative training strategies in overcoming particular challenges of our data. The approach provides good results in terms of both detection and counting, showing performance comparable to the interpretation of human operators. As a meaningful addition, we release the pre-trained model and the Fluorescent Neuronal Cells dataset, which collects pixel-level annotations of where neuronal cells are located. In this way, we hope to help future research in the area and foster innovative methodologies for tackling similar problems. The second part deals with the problem of distributed data management in the context of LHC experiments, with a focus on supporting ATLAS operations concerning data transfer failures. In particular, we analyze error messages produced by failed transfers and propose a Machine Learning pipeline that leverages the word2vec language model and K-means clustering. This provides groups of similar errors that are presented to human operators as suggestions of potential issues to investigate. The approach is demonstrated on one full day of data, showing promising ability in understanding the message content and providing meaningful groupings, in line with incidents previously reported by human operators.
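For concreteness, the sketch below shows the general shape of a word2vec plus K-means pipeline for grouping transfer-error messages: a small gensim model is trained, each message is represented as the mean of its word vectors, and messages are clustered. The toy corpus, vector size, averaging step, and cluster count are illustrative assumptions, not the thesis' configuration.

```python
# Sketch of an error-message clustering pipeline: word2vec embeddings + K-means.
from gensim.models import Word2Vec
from sklearn.cluster import KMeans
import numpy as np

messages = [
    "checksum mismatch on destination file",
    "destination file checksum verification failed",
    "connection timed out while contacting storage endpoint",
    "timeout contacting remote storage endpoint",
]
tokens = [m.split() for m in messages]

# Train a tiny word2vec model (vector_size/window/min_count are illustrative).
w2v = Word2Vec(sentences=tokens, vector_size=32, window=3, min_count=1, seed=0)

# Represent each message as the mean of its word vectors.
vectors = np.array([np.mean([w2v.wv[t] for t in toks], axis=0) for toks in tokens])

# Group similar error messages; cluster labels become suggestions for operators.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)
```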
Abstract:
The importance of networks, in their broad sense, is growing rapidly and massively in modern-day society thanks to the unprecedented communication capabilities offered by technology. In this context, the radio spectrum will be a primary resource to be preserved and not wasted. Therefore, the need for intelligent and automatic systems for in-depth spectrum analysis and monitoring will pave the way for a new set of opportunities and potential challenges. This thesis proposes a novel framework for automatic spectrum patrolling and the extraction of wireless network analytics. It aims to enhance the physical-layer security of next-generation wireless networks through the extraction and analysis of dedicated analytical features. The framework consists of a spectrum sensing phase, carried out by a patrol composed of numerous radio-frequency (RF) sensing devices, followed by the extraction of a set of wireless network analytics. The methodology developed is blind, allowing spectrum sensing and analytics extraction for a network whose key features (i.e., number of nodes, physical-layer signals, medium access control (MAC) and routing protocols) are unknown. Because of the wireless medium, over-the-air signals captured by the sensors are mixed; therefore, blind source separation (BSS) and measurement association are used to estimate the number of sources and separate the traffic patterns. After the separation, we put together a set of methodologies for extracting useful features of the wireless network, i.e., its logical topology, the application-level traffic patterns generated by the nodes, and their positions. The whole framework is validated on an ad-hoc wireless network accounting for the MAC protocol, packet collisions, node mobility, the spatial density of sensors, and channel impairments such as path loss, shadowing, and noise. The numerical results obtained by extensive and exhaustive simulations show that the proposed framework is consistent and can achieve the required performance.
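As an illustration of the blind source separation step, the sketch below separates synthetic mixed signals with scikit-learn's FastICA; the thesis' actual BSS algorithm and signal models are not specified here, so the synthetic sources, the mixing matrix, and the component count are assumptions.

```python
# Toy blind source separation: recover independent source signals from sensor mixtures.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)

# Two hypothetical traffic-like source signals observed by three RF sensors.
sources = np.c_[np.sign(np.sin(2 * np.pi * 5 * t)), rng.laplace(size=t.size)]
mixing = np.array([[1.0, 0.5], [0.4, 1.2], [0.9, 0.3]])
observations = sources @ mixing.T  # each column is one sensor's capture

ica = FastICA(n_components=2, random_state=0)
estimated_sources = ica.fit_transform(observations)  # shape: (samples, 2)
print(estimated_sources.shape)
```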
Abstract:
Embedding intelligence in extreme edge devices allows distilling raw data acquired from sensors into actionable information directly on IoT end-nodes. This computing paradigm, in which end-nodes no longer depend entirely on the Cloud, offers undeniable benefits and drives a large research area (TinyML) aimed at deploying leading Machine Learning (ML) algorithms on micro-controller-class devices. To fit the limited memory storage capability of these tiny platforms, full-precision Deep Neural Networks (DNNs) are compressed into Quantized Neural Networks (QNNs) by representing their data in byte and sub-byte integer formats. However, the current generation of micro-controller systems can barely cope with the computing requirements of QNNs. This thesis tackles the challenge from many perspectives, presenting solutions at both the software and hardware levels, exploiting parallelism, heterogeneity, and software programmability to guarantee high flexibility and high energy-performance proportionality. The first contribution, PULP-NN, is an optimized software computing library for QNN inference on parallel ultra-low-power (PULP) clusters of RISC-V processors, showing one order of magnitude improvement in performance and energy efficiency compared to current State-of-the-Art (SoA) STM32 micro-controller systems (MCUs) based on ARM Cortex-M cores. The second contribution is XpulpNN, a set of RISC-V domain-specific instruction set architecture (ISA) extensions to deal with sub-byte integer arithmetic computation. The solution, including the ISA extensions and the micro-architecture to support them, achieves energy efficiency comparable with dedicated DNN accelerators and surpasses the efficiency of SoA ARM Cortex-M based MCUs, such as the low-end STM32M4 and the high-end STM32H7 devices, by up to three orders of magnitude. To overcome the Von Neumann bottleneck while guaranteeing the highest flexibility, the final contribution integrates an Analog In-Memory Computing accelerator into the PULP cluster, creating a fully programmable heterogeneous fabric that demonstrates end-to-end inference of SoA MobileNetV2 models, showing two orders of magnitude performance improvement over current SoA analog/digital solutions.
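For context, the sketch below shows the textbook symmetric int8 quantization that this kind of compression relies on; it is a generic numpy illustration, not the actual PULP-NN or XpulpNN kernels, and the tensor shape is arbitrary.

```python
# Symmetric per-tensor int8 quantization of a weight matrix (textbook scheme).
import numpy as np

weights = np.random.default_rng(0).normal(scale=0.1, size=(64, 64)).astype(np.float32)

scale = np.abs(weights).max() / 127.0            # map the largest magnitude to 127
q_weights = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
dequantized = q_weights.astype(np.float32) * scale

print("max abs quantization error:", np.abs(weights - dequantized).max())
```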
Abstract:
This thesis is about the smart home, a connected environment that, thanks to the pervasive use of Internet of Things (IoT) technology, will help consumers live a more environmentally sustainable life and will help vulnerable categories of consumers live a more autonomous life. In particular, civil liability for the malfunctioning of the smart home is the filter through which the research is carried out. I analyse whether current legal liability rules are ready to adapt to this new connected environment, the IoT-powered smart home. Through careful mapping of the technical and legal state of the art, the thesis argues that the EU rules on product liability contained in the Product Liability Directive (PLD) will apply consistently to these objects. This holds true even though, at the time of writing, the proposal for the update of the PLD had not yet been published. Through the analysis of past PLD cases, new American product liability case law on domestic IoT objects, and the latest contributions of legal scholarship and policy inputs, it was possible to anticipate some of the contents of the newly published EU PLD update proposal.
Abstract:
Analog In-Memory Computing (AIMC) has been proposed, in the context of Beyond-Von-Neumann architectures, as a valid strategy to reduce the energy consumption and latency of internal data transfers and to improve compute efficiency. The aim of AIMC is to perform computations within the memory unit, typically leveraging the physical features of memory devices. Among resistive Non-Volatile Memories (NVMs), Phase-Change Memory (PCM) has become a promising technology due to its intrinsic capability to store multilevel data. Hence, PCM technology is currently investigated to enhance the possibilities and applications of AIMC. This thesis explores the potential of new PCM-based architectures as in-memory computational accelerators. In a first step, a preliminary experimental characterization of PCM devices has been carried out from an AIMC perspective. PCM cell non-idealities, such as time drift, noise, and non-linearity, have been studied to develop a dedicated multilevel programming algorithm. Measurement-based simulations have then been employed to evaluate the feasibility of PCM-based operations in the fields of Deep Neural Networks (DNNs) and Structural Health Monitoring (SHM). Moreover, a first test chip has been designed and tested to evaluate the hardware implementation of Multiply-and-Accumulate (MAC) operations employing PCM cells. This prototype experimentally demonstrates the possibility of reaching 95% MAC accuracy with circuit-level compensation of cell time drift and non-linearity. Finally, empirical circuit behavior models have been included in the simulations to assess the use of this technology in specific DNN applications and to highlight the potential of this innovative computation approach.
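As a rough illustration of why drift compensation matters for analog MACs, the following toy model applies a generic power-law drift to stored conductances before taking a dot product; the drift exponent, noise level, and arbitrary units are placeholders, not measured PCM parameters from the thesis.

```python
# Toy model of an analog in-memory MAC: inputs are applied to drifting conductances.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.uniform(0.1, 1.0, size=16)   # target conductance levels (arbitrary units)
inputs = rng.uniform(0.0, 1.0, size=16)    # applied input levels (arbitrary units)

def drifted(conductance, t, t0=1.0, nu=0.05):
    """Generic power-law drift model G(t) = G0 * (t / t0) ** (-nu); nu is a placeholder."""
    return conductance * (t / t0) ** (-nu)

ideal_mac = float(inputs @ weights)
noisy_mac = float(inputs @ (drifted(weights, t=1000.0) + rng.normal(0, 0.01, size=16)))

print(f"ideal={ideal_mac:.3f}  drift+noise={noisy_mac:.3f}")
```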
Abstract:
This study investigates interactions between parents and pediatricians during pediatric well-child visits. Although these visits constitute a pivotal moment for monitoring and evaluating children's development during the critical 'first thousand days of life' and for family support, no study has so far empirically investigated the in vivo realization of pediatrician-parent interactions in the Italian context, especially not from a pedagogical perspective. Filling this gap, the present study draws on a corpus of 23 video-recorded well-child visits involving two pediatricians and twenty-two families with children aged between 0 and 18 months. Combining an ethnographic perspective with the theoretical-analytical constructs of conversation analysis, the micro-analysis of interactions reveals how well-child visits unfold as culture-oriented and culture-making sites. By zooming in on what actually happens during these visits, the analysis shows that there is much more than the "mere" accomplishment of institutionally relevant activities like assessing children's health or giving parents advice on baby care. Rather, through the interactional ways in which these institutional tasks are carried out, parents and pediatricians presuppose, ratify, and transmit culturally informed models of "normal" growth, "healthy" development, "good" caring practices, and "competent" parenting, thereby enacting a pervasive yet unnoticed educational and moral work. Inaugurating a promising new line of inquiry within Italian pedagogical research, this study illuminates how a) pediatricians work as a "social antenna", bridging families' private "small cultures" and broader socio-cultural models of children's well-being and caregiving practices, and b) parents act as agentive, knowledgeable, (communicatively) competent, and caring parents, while also being sensitive to the pediatrician's ultimate epistemic and deontic authority. I argue that a video-based micro-analysis of interactions represents a heuristically powerful instrument for raising pediatricians' and parents' awareness of the educational and moral density of well-child visits. Insights from this study can constitute a valuable empirical resource for underpinning medical and parental training programs aimed at fostering pediatricians' and parents' reflexivity.
Abstract:
This study explores Italian students' perspectives on the use of English in English-medium instruction (EMI) programs in light of the practices of internationalization at home (IaH) at the University of Bologna (UNIBO) in Italy, and further investigates whether these attitudes affect their language identity as English as a lingua franca (ELF) users. To serve this aim, a mixed-methods approach was adopted to collect quantitative and in-depth qualitative data in two phases, through an online survey and semi-structured interviews. A total of 78 Italian students participated in the survey, 14 of whom were interviewed. The findings of the online survey indicated that most participants (92%) held a positive perspective on the use of English in EMI programs, and the findings from the interviews were in line with the results of the survey. The interviews, in addition, explored the participants' views on their language identity as ELF users. Thematic analysis of the interviews revealed that students experience emotional, cognitive, and social transitions in EMI programs in response to their shift from a non-EMI to an EMI academic setting. Overall, all the above-mentioned transitions were positive and could lead to personal development. However, it can be concluded that, in this study, the EMI context provided few opportunities for the emergence of significant new subject positions mediated by English. The focus on students' perspectives on the use of English in EMI programs can contribute to improvements in language policy planning and internationalized curriculum design by policymakers, and can alleviate tensions over the controversial issue of the Englishization of higher education by considering how EMI students perceive their use of English as ELF users rather than as users of a superior standard English.
Abstract:
The pervasive availability of connected devices in every industrial and societal sector is pushing for an evolution of the well-established cloud computing model. The emerging paradigm of the cloud continuum embraces this decentralization trend and envisions virtualized computing resources physically located between traditional datacenters and data sources. By executing totally or partially closer to the network edge, applications can react more quickly to events, thus enabling advanced forms of automation and intelligence. However, these applications also induce new data-intensive workloads with low-latency constraints that require the adoption of specialized resources, such as high-performance communication options (e.g., RDMA, DPDK, XDP, etc.). Unfortunately, cloud providers still struggle to integrate these options into their infrastructures. That risks undermining the principle of generality that underlies the economy of scale of cloud computing, by forcing developers to tailor their code to low-level APIs, non-standard programming models, and static execution environments. This thesis proposes a novel system architecture to empower cloud platforms across the whole cloud continuum with Network Acceleration as a Service (NAaaS). To provide commodity yet efficient access to acceleration, this architecture defines a layer of agnostic high-performance I/O APIs, exposed to applications and clearly separated from the heterogeneous protocols, interfaces, and hardware devices that implement it. A novel system component embodies this decoupling by offering a set of agnostic OS features to applications: memory management for zero-copy transfers, asynchronous I/O processing, and efficient packet scheduling. This thesis also explores the design space of possible implementations of this architecture by proposing two reference middleware systems and by adopting them to support interactive use cases in the cloud continuum: a serverless platform and an Industry 4.0 scenario. A detailed discussion and a thorough performance evaluation demonstrate that the proposed architecture is suitable for enabling the easy-to-use, flexible integration of modern network acceleration into next-generation cloud platforms.
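A minimal sketch of what an acceleration-agnostic I/O API could look like from the application's point of view: the interface below and its loopback backend are invented for illustration and do not correspond to the middleware systems proposed in the thesis; a real backend would sit on top of a fast-path driver such as RDMA or DPDK.

```python
# Illustrative acceleration-agnostic I/O API: the application calls send()/recv()
# while a backend (a loopback queue here, a fast-path driver in principle) moves data.
import asyncio
from abc import ABC, abstractmethod

class AcceleratedChannel(ABC):
    @abstractmethod
    async def send(self, data: bytes) -> None: ...
    @abstractmethod
    async def recv(self) -> bytes: ...

class LoopbackChannel(AcceleratedChannel):
    """Fallback backend built on an asyncio queue, standing in for a fast-path driver."""
    def __init__(self):
        self._queue = asyncio.Queue()
    async def send(self, data: bytes) -> None:
        await self._queue.put(data)
    async def recv(self) -> bytes:
        return await self._queue.get()

async def main():
    channel: AcceleratedChannel = LoopbackChannel()
    await channel.send(b"sensor-frame")
    print(await channel.recv())

asyncio.run(main())
```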
Abstract:
The Internet of Vehicles (IoV) paradigm has emerged in recent times: with the support of technologies like the Internet of Things and V2X, Vehicular Users (VUs) can access different services through internet connectivity. With the support of 6G technology, the IoV paradigm will evolve further and converge into a fully connected and intelligent vehicular system. However, this brings new challenges to dynamic and resource-constrained vehicular systems, and advanced solutions are demanded. This dissertation analyzes the demands of future 6G-enabled IoV systems and the corresponding challenges, and provides various solutions to address them. Vehicular services and application requests demand proper data processing solutions with the support of distributed computing environments such as Vehicular Edge Computing (VEC). While analyzing the performance of VEC systems, it is important to take the limited resources, coverage, and vehicular mobility into account. Recently, Non-Terrestrial Networks (NTN) have gained huge popularity for boosting the coverage and capacity of terrestrial wireless networks. Integrating such NTN facilities into the terrestrial VEC system can address the above-mentioned challenges. Additionally, such integrated Terrestrial and Non-Terrestrial Networks (T-NTN) can also be considered to provide advanced intelligent solutions with the support of the edge intelligence paradigm. In this dissertation, we propose an edge-computing-enabled joint T-NTN-based vehicular system architecture to serve VUs. Next, we analyze the performance of terrestrial VEC systems for VU data processing problems and propose solutions to improve performance in terms of latency and energy costs. We then extend the scenario toward the joint T-NTN system and address the problem of distributed data processing through ML-based solutions, and we also propose advanced distributed learning frameworks supported by the joint T-NTN framework with edge computing facilities. Finally, concluding remarks and several future directions are provided for the proposed solutions.
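To illustrate the latency/energy trade-off behind such offloading decisions, the toy sketch below picks the cheapest processing site under a weighted cost; the candidate sites, cost figures, and weights are invented for illustration and are not the dissertation's optimization model.

```python
# Toy offloading decision: pick the processing site with the lowest weighted
# latency + energy cost. All numbers below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    latency_ms: float   # expected end-to-end latency for the task
    energy_mj: float    # expected energy drawn from the vehicle's on-board unit

def best_site(sites, latency_weight=0.7, energy_weight=0.3):
    cost = lambda s: latency_weight * s.latency_ms + energy_weight * s.energy_mj
    return min(sites, key=cost)

sites = [
    Site("local", latency_ms=120.0, energy_mj=80.0),
    Site("terrestrial-edge", latency_ms=35.0, energy_mj=25.0),
    Site("satellite-edge", latency_ms=90.0, energy_mj=20.0),
]
print(best_site(sites).name)  # "terrestrial-edge" with these toy numbers
```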
Abstract:
With the advent of high-performance computing devices, deep neural networks have gained much popularity in solving many Natural Language Processing tasks. However, they are also vulnerable to adversarial attacks, which modify the input text in order to mislead the target model. Adversarial attacks are a serious threat to the security of deep neural networks, and they can be used to craft adversarial examples that steer the model towards a wrong decision. In this dissertation, we propose SynBA, a novel contextualized synonym-based adversarial attack for text classification. SynBA is based on the idea of replacing words in the input text with synonyms selected according to the context of the sentence. We show that SynBA generates adversarial examples that fool the target model with a high success rate. We demonstrate three advantages of the proposed approach: (1) effectiveness - it outperforms state-of-the-art attacks in terms of semantic similarity and perturbation rate; (2) utility preservation - it preserves semantic content and grammaticality, and the examples remain correctly classified by humans; and (3) efficiency - it performs attacks faster than other methods.
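For a sense of the perturbation step underlying synonym-based attacks, the sketch below substitutes WordNet synonyms via NLTK; this is deliberately simpler than SynBA, which ranks contextualized candidates, and the example sentence and selection rule are invented.

```python
# Bare-bones synonym substitution step (WordNet-based, not SynBA's contextual ranking).
# Requires: nltk.download("wordnet")
from nltk.corpus import wordnet

def synonym_candidates(word):
    """Collect distinct single-word synonyms of `word` from WordNet."""
    candidates = set()
    for synset in wordnet.synsets(word):
        for lemma in synset.lemmas():
            name = lemma.name().replace("_", " ")
            if name.lower() != word.lower() and " " not in name:
                candidates.add(name)
    return sorted(candidates)

sentence = "the movie was a terrible waste of time".split()
perturbed = [
    (synonym_candidates(tok)[0] if synonym_candidates(tok) else tok)
    for tok in sentence
]
print(" ".join(perturbed))  # crude perturbation; a real attack would score candidates
```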
Abstract:
The idea of Grid Computing originated in the nineties and found concrete application in contexts like the SETI@home project, where many volunteer-offered computers cooperated within the Grid environment, performing distributed computations to analyze radio signals in search of extraterrestrial life. The Grid was composed of traditional personal computers, but with the emergence of the first mobile devices, such as Personal Digital Assistants (PDAs), researchers started theorizing the inclusion of mobile devices into Grid Computing; although impressive theoretical work was done, the idea was discarded due to the (mainly technological) limitations of the mobile devices available at the time. Decades have passed, and mobile devices are now far more powerful and numerous than before, leaving a great amount of resources available on smartphones and tablets untapped. Here we propose a solution for performing distributed computations over a Grid Computing environment that utilizes both desktop and mobile devices, exploiting resources from everyday mobile users that would otherwise end up unused. The work starts with an introduction to what Grid Computing is, the evolution of mobile devices, the idea of integrating such devices into the Grid, and how to convince device owners to participate in the Grid. The discussion then becomes more technical, starting with an explanation of how Grid Computing actually works, followed by the technical challenges of integrating mobile devices into the Grid. Next, the model that constitutes the solution offered by this study is explained, followed by a chapter on the realization of a prototype that proves the feasibility of distributed computations over a Grid composed of both mobile and desktop devices. To conclude, future developments and ideas to improve this project are presented.
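As a minimal illustration of the coordinator/worker model behind such a Grid, the sketch below distributes toy tasks to threads that stand in for desktop and mobile participants; in a real deployment the workers would be remote devices communicating over the network, and the task itself is a placeholder.

```python
# Toy coordinator/worker sketch of distributed computation over heterogeneous devices:
# a shared task queue feeds workers that stand in for desktop and mobile participants.
import queue
import threading

tasks = queue.Queue()
results = queue.Queue()

def worker(device_name):
    """Pull tasks (here: numbers to square) until a shutdown sentinel arrives."""
    while True:
        item = tasks.get()
        if item is None:
            break
        results.put((device_name, item, item * item))

for n in range(10):
    tasks.put(n)

devices = ["desktop-1", "phone-1", "tablet-1"]
threads = [threading.Thread(target=worker, args=(d,)) for d in devices]
for t in threads:
    t.start()
for _ in devices:
    tasks.put(None)   # one sentinel per worker
for t in threads:
    t.join()

output = []
while not results.empty():
    output.append(results.get())
print(sorted(output, key=lambda r: r[1]))
```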