867 results for Graph-based method
Abstract:
As one of the newest members of Artificial Immune Systems (AIS), the Dendritic Cell Algorithm (DCA) has been applied to a range of problems, mainly in the field of anomaly detection. However, real-time detection, a new challenge in anomaly detection, requires improvement of the real-time capability of the DCA. To assess such capability, formal methods from the research on real-time systems can be employed. The findings of the assessment can provide guidelines for the future development of the algorithm. Therefore, in this paper we use an interval-logic-based method, named the Duration Calculus (DC), to specify a simplified single-cell model of the DCA. Based on the DC specifications with further induction, we find that each individual cell in the DCA can perform its function as a detector in real time. Since the DCA can be seen as many such cells operating in parallel, it is potentially capable of performing real-time detection. However, the analysis process of the standard DCA constricts its real-time capability. As a result, we conclude that the analysis process of the standard DCA should be replaced by a real-time analysis component, which can perform periodic analysis for the purpose of real-time detection.
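The single-cell behaviour and the proposed periodic analysis can be illustrated with a small sketch. This is a hypothetical simplification, not the authors' specification: the cell class, thresholds and signal weights below are all invented for illustration.

```python
# Hypothetical sketch (not the paper's formal DC model): a single DCA-style
# cell accumulates "danger" and "safe" signal weights per sampled antigen,
# and is polled periodically instead of being analysed in one final batch.

class DendriticCell:
    def __init__(self, migration_threshold=10.0):
        self.migration_threshold = migration_threshold
        self.csm = 0.0   # cumulative co-stimulatory signal
        self.k = 0.0     # danger-minus-safe balance
        self.antigens = []

    def sample(self, antigen, danger, safe):
        """Accumulate one sampling step of signals plus the antigen seen."""
        self.antigens.append(antigen)
        self.csm += danger + safe
        self.k += danger - 2.0 * safe  # safe signals weighted more strongly

    def mature(self):
        """The cell migrates once enough signal has been accumulated."""
        return self.csm >= self.migration_threshold

    def context(self):
        """'danger' context marks its antigens as anomalous, else normal."""
        return "danger" if self.k > 0 else "safe"

def periodic_analysis(cell):
    """Poll the cell at a fixed period: report only if it has matured."""
    if cell.mature():
        return cell.context(), list(cell.antigens)
    return None

cell = DendriticCell(migration_threshold=5.0)
for t in range(4):
    cell.sample(antigen=t, danger=2.0, safe=0.5)
result = periodic_analysis(cell)
```

Polling `periodic_analysis` on a timer, for many cells in parallel, is one way to read the paper's conclusion that batch analysis should give way to a real-time analysis component.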
Abstract:
Very high resolution remotely sensed images are an important tool for monitoring fragmented agricultural landscapes, allowing farmers and policy makers to make better decisions regarding management practices. An object-based methodology is proposed for the automatic generation of thematic maps of the available classes in the scene, which combines edge-based and superpixel processing for small agricultural parcels. The methodology employs superpixels instead of pixels as minimal processing units, and provides a link between them and meaningful objects (obtained by the edge-based method) in order to facilitate the analysis of parcels. Performance analysis on a scene dominated by small agricultural parcels indicates that the combination of the superpixel and edge-based methods achieves a classification accuracy slightly better than when those methods are applied separately, and comparable to the accuracy of traditional object-based analysis, while being fully automatic.
Abstract:
This paper proposes a method for scheduling tariff time periods for electricity consumers. Europe will see broader use of modern smart meters for electricity among residential consumers, which must be used to enable demand response. A heuristic-based method for tariff time period scheduling and pricing is proposed, which considers different consumer groups with parameters studied a priori, taking advantage of the demand response potential of each group and the fairness of electricity pricing for all consumers. This tool was applied to the case of Portugal, considering the actual network and generation costs, specific consumption profiles and the overall low-voltage electricity demand diagram. The proposed method achieves valid results. Its use will provide justification for the setting of tariff time periods by energy regulators, network operators and suppliers. It is also useful to estimate the consumer and electric sector benefits from changes in tariff time periods.
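The general idea of a demand-driven tariff period heuristic can be sketched as follows. This is not the paper's algorithm: the ranking rule, the number of peak/off-peak hours and the demand figures are invented for illustration.

```python
# Illustrative heuristic (details are assumptions, not the paper's method):
# rank the hours of an aggregate demand diagram and assign the highest-demand
# hours to the peak tariff period and the lowest-demand hours to off-peak.

def schedule_tariff_periods(hourly_demand, peak_hours=4, offpeak_hours=8):
    """Return a tariff label per hour: 'peak', 'mid' or 'off-peak'."""
    order = sorted(range(len(hourly_demand)),
                   key=lambda h: hourly_demand[h], reverse=True)
    labels = ["mid"] * len(hourly_demand)
    for h in order[:peak_hours]:        # highest-demand hours
        labels[h] = "peak"
    for h in order[-offpeak_hours:]:    # lowest-demand hours
        labels[h] = "off-peak"
    return labels

# Toy 24-hour low-voltage demand diagram (MW), peaking in the evening
demand = [310, 290, 280, 275, 280, 300, 380, 450, 470, 460, 440, 430,
          435, 425, 420, 430, 460, 520, 560, 540, 500, 430, 380, 330]
labels = schedule_tariff_periods(demand)
```

With this toy diagram the peak period lands on the evening hours (17-20 h) and the off-peak period on the night hours, which is the qualitative behaviour one would expect a regulator-facing tool to reproduce.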
Abstract:
Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. In every embedded system, there are one or more processor cores to run the software and interact with the other hardware components of the system. The power consumption of the processor core(s) has an important impact on the total power dissipated in the system. Hence, processor power optimization is crucial in satisfying the power consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is “power estimation”. Having a fast and accurate method for processor power estimation at design time helps the designer to explore a large space of design possibilities and to make the optimal choices for developing a power-efficient processor. Likewise, understanding the processor power dissipation behaviour of a specific software application is the key to choosing appropriate algorithms in order to write power-efficient software. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process, and are often quite slow. Therefore, the need has arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software. The aim of this thesis is to present fast and high-level power models for the prediction of processor power consumption. Power predictability in this work is achieved in two ways: first, using a design method to develop power-predictable circuits; second, analysing the power of the functions in the code which repeat during execution, then building the power model based on the average number of repetitions.
In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) for the 8051 microcontroller. ACSL circuits are power predictable due to the independence of their power consumption from the input data. Based on this property, a fast prediction method is presented to estimate the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and more than 100 times speedup in comparison to conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the insertion sort algorithm based on the number of comparisons that take place in the execution of the algorithm. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments, and offers high accuracy and orders-of-magnitude speedup over the simulation-based method.
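Both modelling ideas above can be combined in a short sketch. The per-instruction energy coefficients below are invented, and the comparison count uses the well-known ~n²/4 average-case approximation for insertion sort rather than the exact MOQA-derived value; nothing here reproduces the thesis's measured LEON3 parameters.

```python
# Hedged sketch of an instruction-counting power model. The energy-per-
# instruction coefficients (picojoules) are assumed values for illustration,
# not measurements from the ACSL/LEON3 experiments described in the thesis.
ENERGY_PJ = {"alu": 12.0, "load": 30.0, "store": 28.0}

def estimate_energy(instruction_counts):
    """Static estimate: total energy = sum of count * per-class energy."""
    return sum(ENERGY_PJ[cls] * n for cls, n in instruction_counts.items())

def insertion_sort_avg_comparisons(n):
    """Average-case comparison count for insertion sort, approximately
    n*(n-1)/4 (the thesis derives the exact average with MOQA)."""
    return n * (n - 1) / 4.0

# Model the cost of sorting 100 random keys: one ALU (compare) instruction
# per comparison plus an assumed fixed per-element overhead.
n = 100
counts = {"alu": int(insertion_sort_avg_comparisons(n)) + n,
          "load": 2 * n, "store": n}
energy_pj = estimate_energy(counts)
```

The point of such a model is that `estimate_energy` needs only static or average-case instruction counts, so it runs in microseconds where a cycle-accurate power simulation would take minutes.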
Abstract:
Due to design and process-related factors, there are local variations in the microstructure and mechanical behaviour of cast components. This work establishes a Digital Image Correlation (DIC) based method for characterisation and investigation of the effects of such local variations on the behaviour of a high-pressure die-cast (HPDC) aluminium alloy. Plastic behaviour is studied using gradient-solidified samples, and characterisation models for the parameters of the Hollomon equation are developed based on microstructural refinement. Samples with controlled microstructural variations are produced, and the observed DIC strain field is compared with Finite Element Method (FEM) simulation results. The results show that the DIC-based method can be applied to characterise local mechanical behaviour with high accuracy. The microstructural variations are observed to cause a redistribution of strain during tensile loading. This redistribution of strain can be predicted in the FEM simulation by incorporating local mechanical behaviour using the developed characterisation model; a homogeneous FEM simulation is unable to predict the observed behaviour. The results motivate the application of a previously proposed simulation strategy, which is able to predict and incorporate local variations in mechanical behaviour into FEM simulations already in the design process for cast components.
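The Hollomon equation referenced above is the power law σ = K·εⁿ for true stress versus plastic strain. The sketch below shows the shape of a characterisation model in which K and n depend on microstructural fineness; the linear form and every coefficient are hypothetical, not the models developed in the work.

```python
# The Hollomon power law: true stress sigma = K * eps**n, with strength
# coefficient K (MPa) and strain-hardening exponent n.

def hollomon_stress(eps_plastic, K, n):
    """True stress (MPa) at a given plastic strain."""
    return K * eps_plastic ** n

def local_parameters(sdas_um, K0=550.0, kK=-5.0, n0=0.30, kn=-0.004):
    """Hypothetical linear characterisation model (illustration only):
    a coarser microstructure, i.e. larger secondary dendrite arm spacing
    (SDAS, micrometres), lowers both K and n."""
    return K0 + kK * sdas_um, n0 + kn * sdas_um

K, n = local_parameters(sdas_um=10.0)   # -> K = 500 MPa, n = 0.26
stress = hollomon_stress(0.05, K, n)    # stress at 5 % plastic strain
```

Feeding such locally varying (K, n) pairs into element-wise material definitions is, in spirit, how a FEM simulation can incorporate local mechanical behaviour instead of one homogeneous material card.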
Abstract:
The homozygous variant c.320-2A>G in TGM1 is described in two sisters with autosomal recessive congenital ichthyosis. Cloning of the transcripts generated by this variant allowed the identification of three alternative splicing molecular mechanisms.
Abstract:
Vitis vinifera L. cv. Crimson Seedless is a late-season red table grape developed in 1989, with a high market value and increasingly cultivated under protected environments to extend the availability of seedless table grapes into the late fall. The purpose of this work was to evaluate leaf water potential and sap flow as indicators of water stress in Crimson Seedless vines under standard and reduced irrigation strategies, the latter consisting of 70 % of the standard irrigation depth. Additionally, two sub-treatments were applied, consisting of normal irrigation throughout the growing season and a short irrigation-induced stress period between veraison and harvest. Leaf water potential measurements coherently signaled crop-available water variations caused by the different irrigation treatments, suggesting that this plant-based method can be reliably used to identify water-stress conditions. The use of sap flow density data to establish a ratio based on a reference ‘well irrigated vine’ and less irrigated vines can potentially be used to signal differences in transpiration rates, which may be suitable for improving irrigation management strategies while preventing undesirable levels of water stress. Although all four irrigation strategies resulted in the production of quality table grapes, significant differences (p ≤ 0.05) were found in both berry weight and sugar content between the standard irrigation and reduced irrigation treatments. Reduced irrigation slightly increased the average berry size as well as the sugar content and technical maturity index. The 2-week irrigation stress period had a negative effect on these parameters.
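The sap-flow ratio mentioned above can be sketched in a few lines. The flow values and the stress threshold are invented for illustration; the study itself does not specify a decision rule here.

```python
# Illustrative only: a transpiration ratio between a deficit-irrigated vine
# and a 'well irrigated' reference vine, built from sap flow density data.
# A low ratio suggests reduced transpiration, hence possible water stress.

def transpiration_ratio(sap_flow_deficit, sap_flow_reference):
    """Ratio < 1: the deficit vine transpires less than the reference."""
    return sap_flow_deficit / sap_flow_reference

def water_stressed(ratio, threshold=0.7):
    """Hypothetical decision rule: flag stress below a fixed ratio."""
    return ratio < threshold

# Daily mean sap flow densities (arbitrary units) during a stress period
ratio = transpiration_ratio(32.0, 50.0)
stressed = water_stressed(ratio)
```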
Abstract:
This dissertation, comprising three separate studies, focuses on the relationship between remote work adoption and employee job performance, analyzing employee social isolation and job concentration as the main mediators of this relationship. It also examines the impact of concern about COVID-19 and emotional stability as moderators of these relationships. Using a survey-based method in an emergency homeworking context, the first study found that social isolation had a negative effect on remote work productivity and satisfaction, and that COVID-19 concerns affected this relationship differently for individuals with high and low levels of concern. The second study, a diary study analyzing hybrid workers, found a positive correlation between work from home (WFH) adoption and job performance through social isolation and job concentration, with emotional stability serving respectively as a buffer and a booster in the relationships between WFH and the mediators. The third study, also a diary study of hybrid workers, confirmed the benefits of work from home on job performance and the importance of job concentration as a mediator, while suggesting that social isolation may not be significant when studying employee job performance, though it is relevant for employee well-being. Although each study independently provides a discussion along with research and practical implications, this dissertation also presents a general discussion of remote work and its psychological implications, highlighting areas for future research.
Abstract:
INTRODUCTION Endograft deployment is a well-known cause of increased arterial stiffness, and increased arterial stiffness is a recognized cardiovascular risk factor. A possible harmful effect on cardiac function induced by endograft deployment should therefore be investigated. The aim of this study was to evaluate the impact of endograft deployment on the arterial stiffness and cardiac geometry of patients treated for aortic aneurysm, in order to detect modifications that could justify increased cardiac mortality at follow-up. MATERIALS AND METHODS Over a period of 3 years, patients undergoing elective EVAR for infrarenal aortic pathologies in two university centers in Emilia-Romagna were examined. All patients underwent pre-operative and six-month post-operative Pulse Wave Velocity (PWV) examination, using an ultrasound-based method performed by vascular surgeons, together with trans-thoracic echocardiography in order to evaluate cardiac chamber geometry before and after treatment. RESULTS 69 patients were enrolled.
After 36 months, 36 patients (52%) had completed the 6-month follow-up examination. The ultrasound-based carotid-femoral PWV measurements performed preoperatively and 6 months after the procedure revealed a significant postoperative increase of cf-PWV (11.6±3.6 m/s vs 12.3±8 m/s; p = 0.037). Postoperative LVtdV (90±28.3 ml/m2 vs 99.1±29.7 ml/m2; p = 0.031), LVtdVi (47.4±15.9 ml/m2 vs 51.9±14.9 ml/m2; p = 0.050) and IVStd (12±1.5 mm vs 12.1±1.3 mm; p = 0.027) were significantly increased compared with preoperative measures. Postoperative E/A (0.76±0.26 vs 0.6±0.67; p = 0.011), E' lateral (9.5±2.6 vs 7.9±2.6; p = 0.024) and A' septal (10.8±1.5 vs 8.9±2; p = 0.005) were significantly reduced compared with preoperative measurements. CONCLUSION The endovascular treatment of the abdominal aorta causes an immediate and significant increase of aortic stiffness. This increase reflects negatively on patients' cardiac geometry, inducing left ventricular hypertrophy and mild diastolic dysfunction just 6 months after endograft implantation. Further investigations and long-term results are necessary to assess whether this negative remodeling could affect the cardiac outcome of patients treated with the endovascular approach.
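As background, carotid-femoral PWV is simply path length divided by pulse transit time. The sketch below illustrates the formula with assumed example numbers; it is not the study's measurement software.

```python
# Background sketch: pulse wave velocity is distance over transit time,
# cf-PWV = D / dt. The distance and transit time below are example values.

def pulse_wave_velocity(distance_m, transit_time_s):
    """cf-PWV in m/s from arterial path length (m) and transit time (s)."""
    return distance_m / transit_time_s

# Example: 0.58 m carotid-to-femoral path, 50 ms pulse transit time
pwv = pulse_wave_velocity(0.58, 0.050)
```

With these example inputs the formula returns 11.6 m/s, i.e. a value on the order of the preoperative mean reported above; higher stiffness shortens the transit time and raises the PWV.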
Abstract:
Earthquake prediction is a complex task for scientists due to the rare occurrence of high-intensity earthquakes and their inaccessible depths. Despite this challenge, it is a priority to protect infrastructure and populations living in areas of high seismic risk. Reliable forecasting requires comprehensive knowledge of seismic phenomena. In this thesis, the development, application, and comparison of both deterministic and probabilistic forecasting methods is shown. Regarding the deterministic approach, the implementation of an alarm-based method using the occurrence of strong (fore)shocks, widely felt by the population, as a precursor signal is described. This model is then applied to the retrospective prediction of Italian earthquakes of magnitude M ≥ 5.0, 5.5 and 6.0 that occurred in Italy from 1960 to 2020. Retrospective performance testing is carried out using tests and statistics specific to deterministic alarm-based models. Regarding probabilistic models, this thesis focuses mainly on the EEPAS and ETAS models. Although the EEPAS model has been previously applied and tested in some regions of the world, it had never been used for forecasting Italian earthquakes. In the thesis, the EEPAS model is used to retrospectively forecast Italian shallow earthquakes with a magnitude of M ≥ 5.0 using new MATLAB software. The forecasting performance of the probabilistic models was compared to other models using CSEP binary tests. The EEPAS and ETAS models showed different characteristics for forecasting Italian earthquakes, with EEPAS performing better in the long term and ETAS performing better in the short term. The FORE model based on strong precursor quakes is compared to EEPAS and ETAS using an alarm-based deterministic approach. All models perform better than a random forecasting model, with the ETAS and FORE models showing better performance. However, to fully evaluate forecasting performance, prospective tests should be conducted.
The lack of objective tests for evaluating deterministic models and comparing them with probabilistic ones was a challenge faced during the study.
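The standard way to score an alarm-based deterministic forecast is a Molchan-style trade-off between time in alarm and missed events. The sketch below computes one such point for invented alarm windows and event times; it is not the thesis's testing code.

```python
# Hedged sketch of alarm-based evaluation in the Molchan sense: given alarm
# windows, compute tau (fraction of total time occupied by alarms) and nu
# (fraction of target earthquakes missed by every alarm). Data are invented.

def molchan_point(alarms, quake_times, t_total):
    """alarms: list of (start, end) windows, assumed non-overlapping.
    Returns (tau, nu)."""
    alarm_time = sum(end - start for start, end in alarms)
    missed = sum(1 for t in quake_times
                 if not any(s <= t <= e for s, e in alarms))
    return alarm_time / t_total, missed / len(quake_times)

alarms = [(10.0, 20.0), (50.0, 60.0)]   # alarm windows (years)
quakes = [12.0, 55.0, 80.0]             # target event times (years)
tau, nu = molchan_point(alarms, quakes, t_total=100.0)
```

A skill-less random strategy satisfies nu = 1 - tau on average, so a model whose (tau, nu) points fall clearly below that diagonal, as reported above for ETAS and FORE, outperforms random forecasting.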
Abstract:
Long-term monitoring of acoustical environments is gaining popularity thanks to the relevant amount of scientific and engineering insights that it provides. The increasing interest is due to the constant growth of storage capacity and computational power to process large amounts of data. In this perspective, machine learning (ML) provides a broad family of data-driven statistical techniques to deal with large databases. Nowadays, the conventional praxis of sound level meter measurements limits the global description of a sound scene to an energetic point of view: the equivalent continuous level Leq is indeed the main metric used to define an acoustic environment. Finer analyses involve the use of statistical levels. However, acoustic percentiles are based on temporal assumptions, which are not always reliable. A statistical approach, based on the study of the occurrences of sound pressure levels, would bring a different perspective to the analysis of long-term monitoring. Depicting a sound scene through the most probable sound pressure level, rather than through portions of energy, brings more specific information about the activity carried out during the measurements. The statistical mode of the occurrences can capture typical behaviors of specific kinds of sound sources. The present work aims to propose an ML-based method to identify, separate and measure coexisting sound sources in real-world scenarios. It is based on long-term monitoring and is addressed to acousticians focused on the analysis of environmental noise in manifold contexts. The presented method is based on clustering analysis. Two algorithms, Gaussian Mixture Model and K-means clustering, represent the main core of a process to investigate different active spaces monitored through sound level meters. The procedure has been applied in two different contexts: university lecture halls and offices.
The proposed method shows robust and reliable results in describing the acoustic scenario and it could represent an important analytical tool for acousticians.
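The clustering idea can be illustrated with a minimal 1-D K-means on sound pressure levels. This is a pure-Python sketch with invented SPL samples, not the thesis's pipeline (which also uses Gaussian Mixture Models).

```python
# Minimal sketch (not the thesis code): 1-D K-means on sound pressure
# levels. Each resulting centroid can be read as the typical level of one
# coexisting source in the occurrence distribution of SPL values.

def kmeans_1d(levels, k=2, iters=50):
    """Cluster scalar SPL samples (dB) into k groups; return centroids."""
    # Spread the initial centroids across the sorted samples
    centroids = sorted(levels)[:: max(1, len(levels) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in levels:
            j = min(range(k), key=lambda i: abs(x - centroids[i]))
            groups[j].append(x)
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return sorted(centroids)

# Invented samples: ventilation noise near 38 dB, speech activity near 65 dB
spl = [37.0, 38.5, 39.0, 38.0, 64.0, 66.5, 65.0, 63.5, 37.5, 65.5]
centroids = kmeans_1d(spl, k=2)
```

On these toy data the two centroids separate the background level from the activity level, which is exactly the kind of source separation the occurrence-based analysis aims at.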
Abstract:
In this work, a prospective study conducted at the IRCCS Istituto delle Scienze Neurologiche di Bologna is presented. The aim was to investigate the brain functional connectivity of a cohort of patients (N=23) suffering from persistent olfactory dysfunction after SARS-CoV-2 infection (Post-COVID-19 syndrome), as compared to a matching group of healthy controls (N=26). In particular, starting from individual resting-state functional-MRI data, different analytical approaches were adopted in order to find potential alterations in the connectivity patterns of patients’ brains. Analyses were conducted both at a whole-brain level and with a special focus on brain regions involved in the processing of olfactory stimuli (the Olfactory Network). Statistical correlations between functional connectivity alterations and the results of olfactory and neuropsychological tests were investigated, to explore the associations with cognitive processes. The three approaches implemented for the analysis were seed-based correlation analysis, group-level Independent Component Analysis and a graph-theoretical analysis of brain connectivity. Due to the relative novelty of such approaches, many implementation details and methodologies are not standardized yet and represent active research fields. The seed-based and group-ICA analyses showed no statistically significant differences between the groups, while relevant alterations emerged from the graph-based analysis. In particular, patients’ olfactory sub-graph appeared to have a less pronounced modular structure compared to the control group; locally, a hyper-connectivity of the right thalamus was observed in patients, with significant involvement of the right insula and hippocampus. Results of an exploratory correlation analysis showed a positive correlation between the graphs’ global modularity and the scores obtained in olfactory tests, and negative correlations between the thalamus hyper-connectivity and memory test scores.
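The modularity measure underlying the graph-based result can be illustrated on a toy graph. This is Newman's standard modularity Q on an invented six-node graph, not the study's connectivity pipeline.

```python
# Toy sketch of Newman's modularity for a partitioned undirected graph:
# Q = sum over modules c of [ l_c/m - (d_c / 2m)^2 ], where m is the total
# number of edges, l_c the edges inside module c, d_c its summed degree.

def modularity(edges, community):
    """edges: list of (u, v) pairs; community: dict node -> module label."""
    m = len(edges)
    within = {}   # edges inside each module
    degree = {}   # summed degree per module
    for u, v in edges:
        cu, cv = community[u], community[v]
        degree[cu] = degree.get(cu, 0) + 1
        degree[cv] = degree.get(cv, 0) + 1
        if cu == cv:
            within[cu] = within.get(cu, 0) + 1
    return sum(within.get(c, 0) / m - (d / (2.0 * m)) ** 2
               for c, d in degree.items())

# Two clear triangles joined by a single bridge edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
Q = modularity(edges, {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"})
```

A high Q means the partition has dense within-module and sparse between-module connectivity; the finding above is that patients' olfactory sub-graphs showed lower Q (a less pronounced modular structure) than controls'.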
Abstract:
Our objective in this thesis work was the deployment of a neural-network-based approach for video object detection on board a nano-drone. Furthermore, we have studied some possible extensions that exploit the temporal nature of videos to improve the detection capabilities of our algorithm. For our project, we have utilized MobileNetV2/V3 SSDLite due to their limited computational and memory requirements. We have trained our networks on the ImageNet VID 2015 dataset and, to deploy them onto the nano-drone, we have used the NNtool and Autotiler tools by GreenWaves. To exploit the temporal nature of video data we have tried different approaches: the introduction of an LSTM-based convolutional layer into our architecture, and the introduction of a Kalman-filter-based tracker as a post-processing step to augment the results of our base architecture. We have obtained a total improvement in performance of about 2.5 mAP with the Kalman-filter-based method (BYTE). Our detector runs on a microcontroller-class processor on board the nano-drone at 1.63 fps.
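The filtering step behind a Kalman-based tracker can be sketched in one dimension. This is a deliberately simplified random-walk (position-only) Kalman filter with invented noise parameters; BYTE itself also performs detection-to-track association, which is not shown.

```python
# Hedged sketch of the smoothing idea inside a Kalman-based tracker:
# a scalar Kalman filter with a random-walk motion model that smooths a
# sequence of noisy per-frame position detections. Parameters are assumed.

def kalman_1d(zs, q=0.01, r=0.25):
    """zs: noisy position measurements; q: process noise variance;
    r: measurement noise variance. Returns the filtered positions."""
    x, p = zs[0], 1.0          # initial state estimate and variance
    out = []
    for z in zs:
        p += q                 # predict: random-walk motion model
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # update toward the measurement
        p *= (1.0 - k)
        out.append(x)
    return out

# Noisy per-frame detections of one object's horizontal position (pixels)
detections = [10.0, 10.4, 9.8, 10.2, 10.0]
track = kalman_1d(detections)
```

Each update is a convex combination of prediction and measurement, so the filtered track always stays within the range of the observed detections while damping frame-to-frame jitter.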
Abstract:
In this paper a bond graph methodology is used to model incompressible fluid flows with viscous and thermal effects. The distinctive characteristic of these flows is the role of pressure, which does not behave as a state variable but as a function that must act in such a way that the resulting velocity field has zero divergence. Velocity and entropy per unit volume are used as independent variables for a single-phase, single-component flow. Time-dependent nodal values and interpolation functions are introduced to represent the flow field, from which nodal vectors of velocity and entropy are defined as state variables. The system of momentum and continuity equations coincides with the one obtained by using the Galerkin method for the weak formulation of the problem in finite elements. The integral incompressibility constraint is derived based on the integral conservation of mechanical energy. The weak formulation of the thermal energy equation is modeled with true bond graph elements in terms of nodal vectors of temperature and entropy rates, resulting in a Petrov-Galerkin method. The resulting bond graph shows the coupling between the mechanical and thermal energy domains through the viscous dissipation term. All kinds of boundary conditions are handled consistently and can be represented as generalized effort or flow sources. A procedure for causality assignment is derived for the resulting graph, satisfying the Second Principle of Thermodynamics. (C) 2007 Elsevier B.V. All rights reserved.
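The divergence-zero requirement on the interpolated velocity field can be written in a generic weak form (a standard textbook statement, not necessarily the paper's exact formulation):

```latex
v(\mathbf{x},t) \;=\; \sum_i N_i(\mathbf{x})\, v_i(t),
\qquad
\int_{\Omega} q \,\bigl(\nabla \cdot v\bigr)\, d\Omega \;=\; 0
\quad \text{for every admissible weight function } q ,
```

where the $N_i$ are the interpolation (shape) functions and the $v_i(t)$ the nodal velocity state variables, so that incompressibility holds in the integral (weighted-residual) sense rather than pointwise.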