954 results for Context-aware applications


Relevance: 30.00%

Publisher:

Abstract:

A comprehensive user model, built by monitoring a user's current use of applications, can be an excellent starting point for building adaptive user-centred applications. The BaranC framework monitors all user interaction with a digital device (e.g. a smartphone) and also collects all available context data (such as from sensors in the device itself, in a smart watch, or in smart appliances) in order to build a full model of user application behaviour. The model built from the collected data, called the UDI (User Digital Imprint), is further augmented by analysis services, for example a service that produces activity profiles from smartphone sensor data. The enhanced UDI model can then be the basis for building an appropriate adaptive application that is user-centred because it is based on an individual user model. As BaranC supports continuous user monitoring, an application can dynamically adapt in real time to the current context (e.g. time, location or activity). Furthermore, since BaranC continuously augments the user model with more monitored data, the user model changes over time, and the adaptive application can adapt gradually to changing user behaviour patterns. BaranC has been implemented as a service-oriented framework in which the collection of UDI data and all sharing of UDI data are kept strictly under the user's control. In addition, being service-oriented allows (with the user's permission) its monitoring and analysis services to be easily used by third parties in order to provide third-party adaptive assistant services. An example third-party service demonstrator, built on top of BaranC, proactively assists a user by dynamically predicting, based on the current context, which apps and contacts the user is likely to need. BaranC introduces an innovative user-controlled unified service model of monitoring and use of personal digital activity data in order to provide adaptive user-centred applications.
This aims to improve on the current situation where the diversity of adaptive applications results in a proliferation of applications monitoring and using personal data, resulting in a lack of clarity, a dispersal of data, and a diminution of user control.
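The demonstrator's context-based prediction can be sketched as a minimal frequency model: rank apps by how often they were previously launched in a similar context. This is an illustrative sketch only; the class and method names are hypothetical and not part of the BaranC API.

```python
from collections import Counter, defaultdict

class ContextualAppPredictor:
    """Toy context-conditioned app predictor (hypothetical, not BaranC code).

    Counts app launches per discrete context (time-of-day bucket,
    location label) and ranks apps by launch frequency in that context.
    """

    def __init__(self):
        # context -> Counter of app launch frequencies
        self.counts = defaultdict(Counter)

    def observe(self, context, app):
        self.counts[context][app] += 1

    def predict(self, context, k=3):
        # Most frequently used apps in this context, most likely first.
        return [app for app, _ in self.counts[context].most_common(k)]

predictor = ContextualAppPredictor()
for app in ["maps", "music", "maps", "podcast", "maps"]:
    predictor.observe(("morning", "car"), app)
predictor.observe(("evening", "home"), "tv_remote")

print(predictor.predict(("morning", "car"), k=2))
```

A UDI-based predictor would of course weigh many more context signals and decay old observations; this sketch only illustrates the context-conditioning idea.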

Relevance: 30.00%

Publisher:

Abstract:

Public policies to support entrepreneurship and innovation play a vital role when firms have difficulties in accessing external finance. However, some authors have found evidence of long-term inefficiency in subsidized firms (Bernini and Pellegrini, 2011; Cerqua and Pellegrini, 2014) and of the ineffectiveness of public funds (Jorge and Suárez, 2011). The aim of the paper is to assess the effectiveness of the selection process for applications for public financial support to stimulate innovation. Using a binary choice model, we investigate which factors influence the probability of obtaining public support for an innovative investment. The explanatory variables relate to the firm's profile, the characteristics of the project, and the macroeconomic environment. The analysis is based on a case study of the Portuguese Innovation Incentive System (PIIS) and on the applications managed by the Alentejo Regional Operational Program in the period 2007–2013. The results show that the selection process focuses more on the expected impact of the project than on the firm's past performance. Factors that influence credit risk and the decision to grant a bank loan do not seem to influence the government evaluator's funding decisions. Past R&D activity does not significantly affect the probability of having an application approved under the PIIS, whereas increases in the number of patents and the number of skilled jobs are both relevant factors. Nevertheless, some evidence of firms' short-term inefficiency was found, in that receiving public financial support is linked to a smaller increase in productivity compared with non-approved applicant firms. At the macroeconomic level, periods with a higher cost of capital in financial markets are linked to a greater probability of having an application for public support approved, which could be associated with the effectiveness of public support in correcting market failures.
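The binary choice model described above has the standard logit form, in which the approval probability is a logistic function of firm, project, and macroeconomic covariates. A minimal sketch, with made-up coefficients rather than the paper's estimates:

```python
import math

def logit_probability(intercept, coeffs, x):
    """P(application approved | x) in a logit binary choice model:
    p = 1 / (1 + exp(-(b0 + b . x)))."""
    z = intercept + sum(b * xi for b, xi in zip(coeffs, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients for: number of patents, skilled jobs
# created, and past R&D intensity (the paper finds the last factor
# non-significant, hence the small weight here).
coeffs = [0.40, 0.25, 0.05]
intercept = -1.2

p = logit_probability(intercept, coeffs, [2, 3, 1.0])
print(round(p, 3))
```

In practice the coefficients would be estimated by maximum likelihood on the application data; the sketch only shows how covariates map to an approval probability.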

Relevance: 30.00%

Publisher:

Abstract:

Monolithic materials cannot always satisfy the demands of today's advanced requirements. Only by combining several materials at different length scales, as nature does, can the required performance be achieved. Polymer nanocomposites are intended to overcome the common drawbacks of pristine polymers through a multidisciplinary collaboration of materials science with chemistry, engineering, and nanotechnology. These materials are an active combination of polymers and nanomaterials in which at least one phase lies in the nanometer range. By mimicking nature's materials, it is possible to develop new nanocomposites for structural applications demanding combinations of strength and toughness. In this perspective, nanofibers obtained by electrospinning have been increasingly adopted in the last decade to improve the fracture toughness of Fiber Reinforced Plastic (FRP) laminates. Although nanofibers have already found applications in various fields, their widespread introduction in the industrial context is still a long way off. This thesis aims to develop methodologies and models able to predict the behaviour of nanofibrous-reinforced polymers, paving the way for their practical engineering applications. It consists of two main parts. The first investigates the mechanisms that act at the nanoscale, systematically evaluating the mechanical properties of both the nanofibrous reinforcement phase (Chapter 1) and the hosting polymeric matrix (Chapter 2). The second part deals with the implementation of different types of nanofibers for novel pioneering applications, seeking to combine the well-known fracture toughness enhancement in composite laminates with improvements in other mechanical properties or with novel functionalities. Chapter 3 reports the development of novel adhesive carriers made of nylon 6,6 nanofibrous mats to increase the fracture toughness of epoxy-bonded joints.
In Chapter 4, recently developed rubbery nanofibers are used to enhance the damping properties of unidirectional carbon fiber laminates. Lastly, in Chapter 5, a novel self-sensing composite laminate capable of detecting impacts on its surface using PVDF-TrFE piezoelectric nanofibers is presented.

Relevance: 30.00%

Publisher:

Abstract:

Today, the contribution of the transportation sector to greenhouse gas emissions is evident. The fast consumption of fossil fuels and its impact on the environment have given a strong impetus to the development of vehicles with better fuel economy. Hybrid electric vehicles fit into this context with different targets, starting from the reduction of emissions and fuel consumption, but also aiming at performance and comfort enhancement. Vehicles exist with various missions; super sports cars usually aim to reach peak performance and to guarantee a great driving experience, but great attention must also be paid to fuel consumption. Depending on the vehicle mission, hybrid vehicles can differ in powertrain configuration and in the choice of the energy storage system. Lamborghini has recently invested in the development of hybrid super sports cars for performance and comfort reasons, with the possibility of reducing fuel consumption. This research activity has been conducted as a joint collaboration between the University of Bologna and the sports car manufacturer to analyze the impact of innovative energy storage solutions on hybrid vehicle performance. Capacitors have been studied and modeled to analyze the pros and cons of such a solution with respect to batteries. To this aim, a full simulation environment has been developed and validated to provide a concept design tool capable of precise results, able to predict longitudinal performance on regulated emission cycles and in real driving conditions, with a focus on fuel consumption. A further target of the research activity is to deepen the study of hybrid electric super sports cars in the concept development phase, focusing on defining the control strategies and the energy storage technology that best suit the needs of the vehicles. This dissertation covers the key steps that have been carried out in the research project.
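As a flavour of the concept-phase sizing such a simulation tool supports, the usable energy of a capacitor bank discharged between two voltage limits follows directly from E = ½CV². The numbers below are purely illustrative, not Lamborghini specifications:

```python
def supercap_usable_energy_wh(capacitance_f, v_max, v_min):
    """Usable energy [Wh] of a capacitor discharged from v_max to v_min:
    E = 0.5 * C * (v_max^2 - v_min^2), converted from joules to Wh."""
    return 0.5 * capacitance_f * (v_max**2 - v_min**2) / 3600.0

# Illustrative pack: 63 F at 125 V, discharged to half voltage
# (which already extracts 75% of the stored energy).
e_wh = supercap_usable_energy_wh(capacitance_f=63.0, v_max=125.0, v_min=62.5)
print(round(e_wh, 1), "Wh")
```

The low energy content of capacitors compared with a battery of similar mass, set against their much higher power capability, is exactly the trade-off a concept design tool of this kind is meant to quantify.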

Relevance: 30.00%

Publisher:

Abstract:

The dissertation starts by describing the phenomena behind the increasing importance recently acquired by satellite applications. The spread of this technology comes with implications, such as an increase in maintenance cost, from which derives the interest in developing advanced techniques that favour greater autonomy of spacecraft in health monitoring. Machine learning techniques are widely employed to lay the foundation for effective fault detection systems that examine telemetry data. Telemetry consists of a considerable amount of information; therefore, the adopted algorithms must be able to handle multivariate data while facing the limitations imposed by on-board hardware. In the framework of outlier detection, the dissertation addresses unsupervised machine learning methods, in which no prior knowledge of the data behaviour is assumed. Specifically, two models are considered: Local Outlier Factor and One-Class Support Vector Machines. Their performances are compared in terms of both prediction accuracy and computational cost. Both models are trained and tested on the same sets of time series data in a variety of settings, aimed at gaining insight into the effect of increasing dimensionality. The results show that both models, combined with proper tuning of their characteristic parameters, successfully fulfil the role of outlier detectors in multivariate time series data. Nevertheless, in this specific context, Local Outlier Factor outperforms One-Class SVM, in that it proves more stable over a wider range of input parameter values. This property is especially valuable in unsupervised learning, since it suggests that the model is well suited to adapting to unforeseen patterns.
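The comparison described above can be reproduced in miniature with scikit-learn, which implements both models. Here a few out-of-distribution samples are injected into a synthetic five-channel "telemetry" set; the data and parameter values are an illustrative setup, not the thesis experiments:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Nominal multivariate "telemetry": 200 samples, 5 channels.
nominal = rng.normal(0.0, 1.0, size=(200, 5))
# A few anomalous samples far from the nominal cloud.
anomalies = rng.normal(6.0, 1.0, size=(5, 5))
data = np.vstack([nominal, anomalies])

lof = LocalOutlierFactor(n_neighbors=20, contamination=0.025)
lof_labels = lof.fit_predict(data)        # -1 = outlier, +1 = inlier

ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.025)
ocsvm_labels = ocsvm.fit_predict(data)    # -1 = outlier, +1 = inlier

# Both methods should flag the 5 injected anomalies (last rows).
print((lof_labels[-5:] == -1).sum(), (ocsvm_labels[-5:] == -1).sum())
```

The thesis's observation about stability can be probed the same way, by sweeping `n_neighbors` and `nu` and checking how quickly detection quality degrades.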

Relevance: 30.00%

Publisher:

Abstract:

Increasingly strict global environmental regulations have directed scientific research towards more sustainable materials, even in the field of composite materials for additive manufacturing. In this context, the presented research is devoted to the development of thermoplastic composites for FDM applications with a low environmental impact, focusing on the possibility of using waste from different industrial processes as filler for the production of composite filaments for FDM 3D printing. In particular, carbon fibers recycled by pyro-gasification of CFRP scraps were used as a reinforcing agent for PLA, a biobased polymeric matrix. Given the high value of CFs, the ability to re-use recycled CFs in place of virgin ones is a promising option in terms of sustainability and the circular economy. Moreover, waste from different agricultural industries, i.e. wheat and rice production processes, was valorised and used as biofiller for the production of PLA biocomposites. The integration of these agricultural wastes into PLA bioplastic yielded biocomposites with improved eco-sustainability, biodegradability, light weight, and lower cost. Finally, the study of novel composites for FDM was extended to elastomeric nanocomposite materials, in particular TPU reinforced with graphene. The research procedure of all projects involves the optimization of the production methods for composite filaments, with particular attention to the possible degradation of the polymeric matrices. The main thermal properties of the 3D printed objects are then evaluated by TGA and DSC characterization. Additionally, specific heat capacity (CP) and Coefficient of Linear Thermal Expansion (CLTE) measurements are useful to estimate the suitability of the composites for preventing typical FDM issues, i.e. shrinkage and warping.
Finally, the mechanical properties of 3D printed composites and their anisotropy are investigated by tensile test using distinct kinds of specimens with different printing angles with respect to the testing direction.

Relevance: 30.00%

Publisher:

Abstract:

Analog In-memory Computing (AIMC) has been proposed in the context of Beyond Von Neumann architectures as a valid strategy to reduce the energy consumption and latency of internal data transfers and to improve compute efficiency. The aim of AIMC is to perform computations within the memory unit, typically leveraging the physical features of memory devices. Among resistive Non-volatile Memories (NVMs), Phase-change Memory (PCM) has become a promising technology due to its intrinsic capability to store multilevel data. Hence, PCM technology is currently being investigated to extend the possibilities and applications of AIMC. This thesis explores the potential of new PCM-based architectures as in-memory computational accelerators. In a first step, a preliminary experimental characterization of PCM devices was carried out from an AIMC perspective. PCM cell non-idealities, such as time drift, noise, and non-linearity, were studied to develop a dedicated multilevel programming algorithm. Measurement-based simulations were then employed to evaluate the feasibility of PCM-based operations in the fields of Deep Neural Networks (DNNs) and Structural Health Monitoring (SHM). Moreover, a first testchip was designed and tested to evaluate the hardware implementation of Multiply-and-Accumulate (MAC) operations employing PCM cells. This prototype experimentally demonstrates the possibility of reaching 95% MAC accuracy with circuit-level compensation of cell time drift and non-linearity. Finally, empirical circuit behaviour models were included in simulations to assess the use of this technology in specific DNN applications and to enhance the potential of this innovative computation approach.
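The principle of a PCM-based MAC can be sketched numerically: weights stored as conductances are perturbed by programming noise, so the analog dot product deviates from the ideal one. The noise model and its magnitude below are assumptions for illustration, not measured device figures:

```python
import numpy as np

rng = np.random.default_rng(42)

def pcm_mac(weights, inputs, g_noise_sigma=0.02):
    """Sketch of an analog MAC: weights are stored as PCM conductances
    affected by multiplicative programming noise; the MAC result is the
    dot product of the noisy conductances with the input voltages.
    The 2% Gaussian noise level is an illustrative assumption."""
    noisy_w = weights * (1.0 + rng.normal(0.0, g_noise_sigma, size=weights.shape))
    return float(noisy_w @ inputs)

w = np.array([0.2, 0.5, 0.8, 0.1])   # normalized target conductances
x = np.array([1.0, 0.5, 0.25, 1.0])  # normalized input voltages

ideal = float(w @ x)
analog = pcm_mac(w, x)
rel_error = abs(analog - ideal) / ideal
print(ideal, round(rel_error * 100, 2), "% error")
```

Effects like time drift would add a deterministic, time-dependent conductance decay on top of this noise; compensating such terms at circuit level is what the testchip described above demonstrates.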

Relevance: 30.00%

Publisher:

Abstract:

The recent trend of moving Cloud Computing capabilities to the Edge of the network is reshaping how applications and their supporting middleware are designed, deployed, and operated. This new model envisions a continuum of virtual resources between the traditional cloud and the network edge, which is potentially better suited to meet the heterogeneous Quality of Service (QoS) requirements of diverse application domains and next-generation applications. Several classes of advanced Internet of Things (IoT) applications, e.g., in the industrial manufacturing domain, have heterogeneous QoS requirements and call for QoS management systems to guarantee and control performance indicators, even in the presence of real-world factors such as limited bandwidth and concurrent virtual resource utilization. The present dissertation proposes a comprehensive QoS-aware architecture that addresses the challenges of integrating cloud infrastructure with edge nodes in IoT applications. The architecture provides end-to-end QoS support by incorporating several components for managing physical and virtual resources. The proposed architecture features: i) a multilevel middleware for resolving the convergence between Operational Technology (OT) and Information Technology (IT); ii) an end-to-end QoS management approach compliant with the Time-Sensitive Networking (TSN) standard; iii) new approaches for virtualized network environments, such as running TSN-based applications under Ultra-low Latency (ULL) constraints in virtual and 5G environments; and iv) an accelerated and deterministic container overlay network architecture.
Additionally, the QoS-aware architecture includes two novel middlewares: i) a middleware that transparently integrates multiple acceleration technologies in heterogeneous Edge contexts, and ii) a QoS-aware middleware for Serverless platforms that coordinates various QoS mechanisms and the virtualized Function-as-a-Service (FaaS) invocation stack to manage end-to-end QoS metrics. Finally, all architecture components were tested and evaluated on realistic testbeds, demonstrating the efficacy of the proposed solutions.

Relevance: 30.00%

Publisher:

Abstract:

Quantitative Susceptibility Mapping (QSM) is an advanced magnetic resonance technique that can quantify in vivo biomarkers of pathology, such as alterations in iron and myelin concentration. It allows for the comparison of magnetic susceptibility properties within and between different subject groups. In this thesis, the QSM acquisition and processing pipeline is discussed, together with clinical and methodological applications of QSM to neurodegeneration. In designing the studies, significant emphasis was placed on the reproducibility and interpretability of results. The first project focuses on the investigation of cortical regions in amyotrophic lateral sclerosis. By examining various histogram susceptibility properties, a pattern of increased iron content was revealed in patients with amyotrophic lateral sclerosis compared to controls and to other neurodegenerative disorders. Moreover, there was a correlation between susceptibility and upper motor neuron impairment, particularly in patients experiencing rapid disease progression. Similarly, in the second application, QSM was used to examine cortical and sub-cortical areas in individuals with myotonic dystrophy type 1. The thalamus and brainstem were identified as structures of interest, with relevant correlations with clinical and laboratory data such as neurological evaluations and sleep records. In the third project, a robust pipeline for assessing the reliability of radiomic susceptibility-based features was implemented within a cohort of patients with multiple sclerosis and healthy controls. Lastly, a deep learning super-resolution model was applied to QSM images of healthy controls. The model demonstrated excellent generalization ability and outperformed traditional up-sampling methods without requiring customized re-training.
Across the three disorders investigated, it was evident that QSM is capable of distinguishing between patient groups and healthy controls while establishing correlations between imaging measurements and clinical data. These studies lay the foundation for future research, with the ultimate goal of achieving earlier and less invasive diagnoses of neurodegenerative disorders within the context of personalized medicine.
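The histogram-based analysis of ROI susceptibility values can be sketched with NumPy; the particular feature set below is illustrative of this kind of analysis, not the exact pipeline of the thesis:

```python
import numpy as np

def susceptibility_histogram_features(chi_map, mask):
    """Histogram descriptors of susceptibility values inside an ROI.
    Returns a small, illustrative feature set (mean, median, spread,
    upper tail), computed only over voxels selected by the mask."""
    vals = chi_map[mask]
    return {
        "mean": float(vals.mean()),
        "median": float(np.median(vals)),
        "std": float(vals.std()),
        "p90": float(np.percentile(vals, 90)),
    }

# Synthetic susceptibility map [ppm] and a square cortical ROI.
rng = np.random.default_rng(1)
chi = rng.normal(0.02, 0.01, size=(64, 64))
roi = np.zeros((64, 64), dtype=bool)
roi[20:40, 20:40] = True

features = susceptibility_histogram_features(chi, roi)
print({k: round(v, 4) for k, v in features.items()})
```

Group comparisons such as the ALS-versus-controls analysis then reduce to comparing these per-subject ROI features across cohorts.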

Relevance: 30.00%

Publisher:

Abstract:

Ill-conditioned inverse problems frequently arise in life sciences, particularly in the context of image deblurring and medical image reconstruction. These problems have been addressed through iterative variational algorithms, which regularize the reconstruction by adding prior knowledge about the problem's solution. Despite the theoretical reliability of these methods, their practical utility is constrained by the time required to converge. Recently, the advent of neural networks allowed the development of reconstruction algorithms that can compute highly accurate solutions with minimal time demands. Regrettably, it is well-known that neural networks are sensitive to unexpected noise, and the quality of their reconstructions quickly deteriorates when the input is slightly perturbed. Modern efforts to address this challenge have led to the creation of massive neural network architectures, but this approach is unsustainable from both ecological and economic standpoints. The recently introduced GreenAI paradigm argues that developing sustainable neural network models is essential for practical applications. In this thesis, we aim to bridge the gap between theory and practice by introducing a novel framework that combines the reliability of model-based iterative algorithms with the speed and accuracy of end-to-end neural networks. Additionally, we demonstrate that our framework yields results comparable to state-of-the-art methods while using relatively small, sustainable models. In the first part of this thesis, we discuss the proposed framework from a theoretical perspective. We provide an extension of classical regularization theory, applicable in scenarios where neural networks are employed to solve inverse problems, and we show there exists a trade-off between accuracy and stability. Furthermore, we demonstrate the effectiveness of our methods in common life science-related scenarios. 
In the second part of the thesis, we begin extending the proposed method into the probabilistic domain. We analyze some properties of deep generative models, revealing their potential applicability to ill-posed inverse problems.
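The model-based half of the framework corresponds to a classical variational iteration. Below is a minimal sketch of Tikhonov-regularized gradient descent on a 1D deblurring problem; the learned component that the thesis combines with such iterations is omitted, and all problem sizes are illustrative:

```python
import numpy as np

def landweber_tikhonov(A, y, lam=0.01, step=0.1, iters=500):
    """Model-based iterative reconstruction: gradient descent on
    ||Ax - y||^2 + lam * ||x||^2 (Tikhonov regularization).
    The regularization term keeps the ill-conditioned inversion stable."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y) + lam * x
        x -= step * grad
    return x

# Ill-conditioned blurring operator: 1D three-point moving average.
n = 50
A = sum(np.eye(n, k=k) for k in (-1, 0, 1)) / 3.0

x_true = np.zeros(n)
x_true[20:30] = 1.0                        # square pulse to recover
noise = 0.01 * np.random.default_rng(0).normal(size=n)
y = A @ x_true + noise                     # blurred, noisy observation

x_rec = landweber_tikhonov(A, y)
rel_err = float(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
print(round(rel_err, 3))
```

The slow convergence of such iterations on ill-conditioned operators is precisely the practical limitation that motivates replacing or augmenting part of the scheme with a small neural network.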

Relevance: 30.00%

Publisher:

Abstract:

In modern society, the security of IT systems is intertwined with interdisciplinary aspects, from social life to sustainability, and threats endanger many aspects of everyone's daily life. To address the problem, it is important that the systems we use guarantee a certain degree of security; to achieve this, it must be possible to measure the amount of security. Measuring security is not an easy task, but many initiatives, including European regulations, aim to make it possible. One method of measuring security is based on security metrics: ways of assessing, from various aspects, vulnerabilities, methods of defense, risks and impacts of successful attacks, and the efficacy of reactions, giving precise results using mathematical and statistical techniques. I have conducted a literature review to provide an overview of the meaning, the effects, the problems, the applications, and the overall current situation of security metrics, with particular emphasis on practical examples. This thesis starts with a summary of the state of the art in the field of security metrics and application examples, to outline the gaps in the current literature and the difficulties found in changing the application context, and then advances research questions aimed at fostering the discussion towards a more complete and applicable view of the subject. Finally, it stresses the lack of security metrics that consider interdisciplinary aspects, giving some potential starting points for developing security metrics that cover all the aspects involved, taking the field to a new level of formal soundness and practical usability.

Relevance: 30.00%

Publisher:

Abstract:

Wireless power transfer (WPT) is becoming a crucial and demanding task in the IoT world. Despite the already known solutions exploiting a near-field powering approach, far-field WPT is definitely more challenging, and commercial applications are not yet available. This thesis proposes the recent frequency-diverse array (FDA) technology as a potential candidate for realizing smart and reconfigurable far-field WPT solutions. In the first section of this work, an analysis of several FDA systems is performed, identifying the planar array with circular geometry as the most promising layout in terms of radiation properties. Then, a novel energy-aware solution to handle the critical time variability of the FDA beam pattern is proposed. It consists of a time-control strategy based on a triangular pulse, and it allows ad-hoc, real-time WPT. Moreover, an essential frequency-domain analysis of the radiating behaviour of a pulsed FDA system is presented. This study highlights the benefits of exploiting the intrinsic pulse harmonics for powering purposes, thus minimising the power loss. Next, the electromagnetic design of a radial FDA architecture is addressed. In this context, an exhaustive investigation of miniaturization techniques is carried out; the use of multiple shorting pins together with a meandered feeding network was selected as a powerful solution to halve the original prototype dimensions. Finally, accurate simulations of the designed radial FDA system are performed, and the obtained results are given.
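The time variability that motivates the pulsed control strategy can be seen directly from the FDA array factor: the linear frequency offset across elements couples the pattern to both range and time, unlike a conventional phased array. A minimal numerical sketch for a linear FDA with illustrative parameters (8 elements, 5.8 GHz carrier, 3 kHz offset, half-wavelength spacing):

```python
import numpy as np

def fda_array_factor(t, r, theta, n_elems=8, f0=5.8e9, df=3e3, d=0.0259):
    """Normalized array factor magnitude of a linear frequency-diverse
    array: element n radiates at f0 + n*df, which makes the far-field
    pattern depend on range r and time t as well as on angle theta.
    (Small cross terms in n*df are neglected; parameters illustrative.)"""
    c = 3e8
    n = np.arange(n_elems)
    phase = 2 * np.pi * ((f0 + n * df) * (t - r / c)
                         + f0 * n * d * np.sin(theta) / c)
    return abs(np.exp(1j * phase).sum()) / n_elems

# At a fixed point (1 km, broadside), the pattern changes with time
# alone -- the FDA's characteristic (and, for WPT, critical) behaviour:
print(round(fda_array_factor(0.0, 1e3, 0.0), 3),
      round(fda_array_factor(1e-4, 1e3, 0.0), 3))
```

The peak sweeps away from the target within a fraction of 1/Δf, which is why a time-control strategy (such as the triangular pulse proposed above) is needed to keep energy focused on the receiver.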

Relevance: 30.00%

Publisher:

Abstract:

Following the latest environmental concerns, minimising the detrimental effects of terrestrial vehicle emissions has become a major goal for the whole automotive field. The key to achieving an emission-free long-term future is the electrification of vehicle fleets; this huge step cannot be taken without intermediate technologies. In this context, hybrid vehicles are fundamental to reaching this goal. Specifically, mild hybrid vehicles represent a trade-off between cost and emissions that could act now as a bridge towards electrification. Like the industry, student engineering competitions are likely to take the same route: combustion vehicles may well turn into hybrid vehicles. For this reason, a preliminary design overview is necessary to pinpoint the key performance indicators for the prototypes of the future.

Relevance: 20.00%

Publisher:

Abstract:

Protocols for the generation of dendritic cells (DCs) using serum as a supplement to culture media lead to reactions due to animal proteins and to disease transmission. Several types of serum-free media (SFM), based on good manufacturing practices (GMP), have recently been used and seem to be a viable option. The aim of this study was to evaluate the differentiation, maturation, and function of DCs from Acute Myeloid Leukemia (AML) patients, generated in SFM and in medium supplemented with autologous serum (AS). DCs were analyzed for phenotype, viability, and functionality. The results showed that viable DCs could be generated under all the conditions tested. In patients, the X-VIVO 15 medium was more efficient than the other media tested in the generation of DCs producing IL-12p70 (p=0.05). Moreover, the presence of AS led to a significant increase of IL-10 production by DCs as compared with the CellGro (p=0.05) and X-VIVO 15 (p=0.05) media, both in patients and donors. We conclude that SFM was efficient in the production of DCs for immunotherapy in AML patients. However, the use of AS appears to interfere with the functional capacity of the generated DCs.

Relevance: 20.00%

Publisher:

Abstract:

Frailty and anemia in the elderly appear to share a common pathophysiology associated with chronic inflammatory processes. This study uses an analytical, cross-sectional, population-based methodology to investigate the probable relationships between frailty, red blood cell parameters and inflammatory markers in 255 community-dwelling elders aged 65 years or older. The frailty phenotype was assessed by unintentional weight loss, fatigue, low grip strength, low energy expenditure and reduced gait speed. Blood sample analyses were performed to determine hemoglobin (Hb) level, hematocrit and reticulocyte count, as well as the inflammatory variables IL-6, IL-1ra and hsCRP. In the first multivariate analysis (model I), considering only the erythroid parameters, Hb concentration was a significant variable for both general frailty status and weight loss: a 1.0 g/dL drop in serum Hb concentration represented a 2.02-fold increase (CI 1.12-3.63) in an individual's odds of being frail. In the second analysis (model II), which also included inflammatory cytokine levels, hsCRP was independently selected as a significant variable. Each additional year of age represented a 1.21-fold increase in the odds of being frail, and each 1-unit increase in serum hsCRP represented a 3.64-fold increase in the odds of having the frailty phenotype. In model II, reticulocyte counts were associated with the weight loss and reduced metabolic expenditure criteria. Our findings suggest that reduced Hb concentration, reduced RetAbs count and elevated serum hsCRP levels should be considered components of frailty, which in turn is correlated with sarcopenia, as evidenced by weight loss.
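The reported fold increases are odds ratios from the multivariate logistic models; the relation between a model coefficient and its odds ratio is OR = exp(β·Δx). A quick numerical check using the reported values:

```python
import math

def odds_ratio(beta, delta=1.0):
    """Odds ratio for a delta-unit change of a predictor in a
    logistic regression: OR = exp(beta * delta)."""
    return math.exp(beta * delta)

# Coefficients implied by the reported odds ratios:
beta_hb = math.log(2.02)    # per 1.0 g/dL drop in hemoglobin
beta_crp = math.log(3.64)   # per 1-unit rise in serum hsCRP

print(round(odds_ratio(beta_hb), 2), round(odds_ratio(beta_crp), 2))
```

Because odds ratios multiply, a 2.0 g/dL drop in Hb would correspond to odds_ratio(beta_hb, delta=2.0) ≈ 2.02², i.e. roughly a four-fold increase in the odds of frailty under this model.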