884 results for Connectivity, Connected Car, Big Data, KPI


Relevance:

30.00%

Publisher:

Abstract:

The Big Manistee River was one of the most well-known Michigan rivers to historically support a population of Arctic grayling (Thymallus arcticus). Overfishing, competition with introduced fish, and habitat loss due to logging are believed to have caused their decline and ultimate extirpation from the Big Manistee River around 1900 and from the State of Michigan by 1936. Grayling are a species of great cultural importance to the Little River Band of Ottawa Indians' tribal heritage, and although past attempts to reintroduce Arctic grayling have been unsuccessful, a continued interest in their return led to the assessment of environmental conditions of tributaries within a 21 kilometer section of the Big Manistee River to determine whether suitable habitat exists. Although data describing historical conditions in the Big Manistee River are limited, we reviewed the literature to determine abiotic conditions prior to Arctic grayling disappearance and the habitat conditions in rivers in western and northwestern North America where they currently exist. We assessed abiotic habitat metrics from 23 sites distributed across 8 tributaries within the Manistee River watershed. Data collected included basic water parameters, streambed substrate composition, channel profile and areal measurements of channel geomorphic units, and stream velocity and discharge measurements. These environmental condition values were compared to literature values, habitat suitability thresholds, and current conditions of rivers with Arctic grayling populations to assess the suitability of the abiotic habitat in Big Manistee River tributaries for supporting Arctic grayling. Although the historic grayling habitat in the region was disturbed during the era of major logging around the turn of the 20th century, our results indicate that some important abiotic conditions within Big Manistee River tributaries are within the range of conditions that support current and past populations of Arctic grayling. Seven tributaries contained between 20-30% pools by area, which are used by grayling for refuge. All but two tributaries were composed primarily of pebbles, with the remaining two dominated by fine substrates (sand, silt, clay). Basic water parameters and channel depth were within the ranges found for populations of Arctic grayling persisting in Montana, Alaska, and Canada for all tributaries. Based on the metrics analyzed in this study, suitable abiotic grayling habitat does exist in Big Manistee River tributaries.
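The threshold comparison described in this abstract amounts to a range check of each measured metric against literature-derived bounds. A minimal sketch follows; the metric names and suitability ranges are illustrative placeholders, not the values used in the study:

```python
# Hypothetical sketch: comparing measured tributary metrics against literature-derived
# suitability ranges for Arctic grayling; metric names and ranges are placeholders only.
SUITABILITY_RANGES = {
    "pool_area_pct": (20.0, 30.0),   # % of channel area in pools
    "water_temp_c":  (4.0, 17.0),    # summer water temperature, degrees C
    "mean_depth_m":  (0.2, 1.5),     # mean channel depth, metres
}

def assess_site(measurements: dict) -> dict:
    """Return, for each metric, whether the measured value falls inside the suitable range."""
    return {
        metric: lo <= measurements.get(metric, float("nan")) <= hi
        for metric, (lo, hi) in SUITABILITY_RANGES.items()
    }

print(assess_site({"pool_area_pct": 24.0, "water_temp_c": 12.5, "mean_depth_m": 0.6}))
```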

Relevance:

30.00%

Publisher:

Abstract:

With the exponential growth of the usage of web-based map services, the web GIS application has become more and more popular. Spatial data indexing, search, analysis, visualization and the resource management of such services are becoming increasingly important to deliver user-desired Quality of Service. First, spatial indexing is typically time-consuming and is not available to end-users. To address this, we introduce TerraFly sksOpen, an open-sourced Online Indexing and Querying System for Big Geospatial Data. Integrated with the TerraFly Geospatial database [1-9], sksOpen is an efficient indexing and query engine for processing Top-k Spatial Boolean Queries. Further, we provide ergonomic visualization of query results on interactive maps to facilitate the user's data analysis. Second, due to the highly complex and dynamic nature of GIS systems, it is quite challenging for end users to quickly understand and analyze the spatial data, and to efficiently share their own data and analysis results with others. Built on the TerraFly Geospatial database, TerraFly GeoCloud is an extra layer running upon the TerraFly map that can efficiently support many different visualization functions and spatial data analysis models. Furthermore, users can create unique URLs to visualize and share the analysis results. TerraFly GeoCloud also enables the MapQL technology to customize map visualization using SQL-like statements [10]. Third, map systems often serve dynamic web workloads and involve multiple CPU- and I/O-intensive tiers, which makes it challenging to meet the response time targets of map requests while using resources efficiently. Virtualization facilitates the deployment of web map services and improves their resource utilization through encapsulation and consolidation. Autonomic resource management allows resources to be automatically provisioned to a map service and its internal tiers on demand. v-TerraFly is a set of techniques to predict the demand of map workloads online and optimize resource allocations, considering both response time and data freshness as the QoS target. The proposed v-TerraFly system is prototyped on TerraFly, a production web map service, and evaluated using real TerraFly workloads. The results show that v-TerraFly predicts workload demands 18.91% more accurately and allocates resources to meet the QoS target more efficiently, improving QoS by 26.19% and saving resource usage by 20.83% compared to traditional peak-load-based resource allocation.
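As an illustration of the Top-k Spatial Boolean Query idea named above, the following sketch keeps only objects whose keyword set satisfies a boolean predicate and ranks them by distance to a query point. The data layout and field names are assumptions for illustration, not the sksOpen API:

```python
# Hypothetical sketch of a Top-k Spatial Boolean Query: filter objects by a boolean
# keyword predicate, then rank the survivors by distance to the query point.
import heapq
import math

def top_k_spatial_boolean(objects, query_point, required_keywords, k=5):
    """objects: iterable of dicts with 'lat', 'lon', 'keywords'; returns k nearest matches."""
    qlat, qlon = query_point
    candidates = []
    for obj in objects:
        if required_keywords <= set(obj["keywords"]):            # boolean AND predicate
            dist = math.hypot(obj["lat"] - qlat, obj["lon"] - qlon)
            candidates.append((dist, obj))
    return heapq.nsmallest(k, candidates, key=lambda pair: pair[0])

places = [{"lat": 25.76, "lon": -80.19, "keywords": ["cafe", "wifi"]},
          {"lat": 25.79, "lon": -80.13, "keywords": ["cafe"]}]
print(top_k_spatial_boolean(places, (25.77, -80.18), {"cafe", "wifi"}, k=1))
```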

Relevance:

30.00%

Publisher:

Abstract:

Effective decision making uses various databases, including both micro- and macro-level datasets. In many cases it is a big challenge to ensure the consistency of the two levels. Different types of problems can occur and several methods can be used to solve them. The paper concentrates on the input alignment of households' income for microsimulation, which refers to improving the elements of a micro data survey (EU-SILC) by using macro data from administrative sources. We use a combined micro-macro model called ECONS-TAX for this improvement. We also produced model projections until 2015, which is important because the official EU-SILC micro database will only be available in Hungary in the summer of 2017. The paper presents our estimations about the dynamics of income elements and the changes in income inequalities. Results show that the aligned data provide a different level of income inequality, but do not affect the direction of change from year to year. However, when we analyzed policy change, the use of aligned data caused larger differences both in income levels and in their dynamics.
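One common, simple form of input alignment is proportional scaling of a weighted survey income component so that it matches an administrative macro total. The sketch below illustrates that idea only; the component names, weights and scaling rule are assumptions, not the ECONS-TAX procedure:

```python
# Hypothetical sketch: align a weighted survey income component to a macro total
# by proportional scaling; the actual micro-macro model is more elaborate.
def align_income(records, macro_total, component="wage_income"):
    """records: list of dicts with a survey 'weight' and an income component.
    Scales the component so that the weighted sum matches the macro total."""
    survey_total = sum(r["weight"] * r[component] for r in records)
    factor = macro_total / survey_total
    for r in records:
        r[component] *= factor
    return factor

sample = [{"weight": 1200.0, "wage_income": 9500.0},
          {"weight": 800.0, "wage_income": 14200.0}]
print(align_income(sample, macro_total=2.6e7))   # scaling factor applied to every record
```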

Relevance:

30.00%

Publisher:

Abstract:

A novel route to prepare highly active and stable N2O decomposition catalysts is presented, based on Fe-exchanged beta zeolite. The procedure consists of liquid-phase Fe(III) exchange at low pH. By varying the pH systematically from 3.5 to 0, using nitric acid during each Fe(III)-exchange procedure, the degree of dealumination was controlled, as verified by ICP and NMR. Dealumination alters the presence of octahedral Al sites neighbouring the Fe sites, improving the performance for this reaction. The so-obtained catalysts exhibit a remarkable enhancement in activity, with an optimal pH of 1. Further optimization by increasing the Fe content is possible. The optimal formulation showed good conversion levels, comparable to a benchmark Fe-ferrierite catalyst. The catalyst stability under tail-gas conditions containing NO, O2 and H2O was excellent, without any appreciable activity decay during 70 h time on stream. Based on characterisation and data analysis from ICP, single pulse excitation NMR, MQ MAS NMR, N2 physisorption, TPR(H2) analysis and apparent activation energies, the improved catalytic performance is attributed to an increased concentration of active sites. Temperature-programmed reduction experiments reveal significant changes in the Fe(III) reducibility pattern, with two reduction peaks. One is tentatively attributed to the interaction of the Fe-oxo species with electron-withdrawing extraframework AlO6 species, causing a delayed reduction. A low-temperature peak is attributed to Fe species exchanged on zeolitic AlO4 sites, which are partially charged by the presence of the neighbouring extraframework AlO6 sites. Improved mass transport due to acid leaching is ruled out. The increased activity is rationalized by an active-site model in which the concentration of active sites increases by selectively washing out the distorted extraframework AlO6 species under the (optimal) acidic conditions, liberating active Fe species.

Relevance:

30.00%

Publisher:

Abstract:

In recent years, the 380V DC and 48V DC distribution systems have been extensively studied for the latest data centers. It is widely believed that the 380V DC system is a very promising candidate because of its lower cable cost compared to the 48V DC system. However, previous studies have not adequately addressed the low reliability issue of 380V DC systems caused by the large number of series-connected batteries. In this thesis, a quantitative comparison of the two systems is presented in terms of efficiency, reliability and cost. A new multi-port DC UPS with both a high-voltage output and a low-voltage output is proposed. When utility AC is available, it delivers power to the load through its high-voltage output and charges the battery through its low-voltage output. When utility AC is off, it boosts the low battery voltage and delivers power to the load from the battery. Thus, the advantages of both systems are combined and their disadvantages are avoided. High efficiency is also achieved, as only one converter is working in either situation. Details about the design and analysis of the new UPS are presented. For the main AC-DC part of the new UPS, a novel bridgeless three-level single-stage AC-DC converter is proposed. It eliminates the auxiliary circuit for balancing the capacitor voltages and the two bridge rectifier diodes of the previous topology. Zero-voltage switching, high power factor, and low component stresses are achieved with this topology. Compared to previous topologies, the proposed converter has a lower cost, higher reliability, and higher efficiency. The steady-state operation of the converter is analyzed and a decoupled model is proposed for the converter. For the battery-side converter, as part of the new UPS, a ZVS bidirectional DC-DC converter based on self-sustained oscillation control is proposed. Frequency control is used to ensure ZVS operation of all four switches and phase-shift control is employed to regulate the converter output power. Detailed analysis of the steady-state operation and design of the converter are presented. Theoretical, simulation, and experimental results are presented to verify the effectiveness of the proposed concepts.
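The cable-cost argument for 380V over 48V follows from the conduction current: for the same delivered power, the current (and therefore the I²R loss in a given conductor) is much lower at 380V, so thinner cables suffice. A back-of-the-envelope sketch with assumed load and cable values:

```python
# Back-of-the-envelope comparison (assumed numbers): the same load power delivered over
# the same cable resistance at 380 V DC versus 48 V DC.
def cable_loss(power_w: float, voltage_v: float, cable_resistance_ohm: float) -> float:
    current = power_w / voltage_v                  # I = P / V
    return current ** 2 * cable_resistance_ohm     # P_loss = I^2 * R

P_LOAD = 10_000.0     # 10 kW rack load (assumed)
R_CABLE = 0.05        # total loop resistance in ohms (assumed)

for v in (380.0, 48.0):
    print(f"{v:>5.0f} V bus: current = {P_LOAD / v:6.1f} A, "
          f"cable loss = {cable_loss(P_LOAD, v, R_CABLE):8.1f} W")
```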

Relevance:

30.00%

Publisher:

Abstract:

Short time-to-market is a key success factor in today's dynamic business environment, and many companies are trying to improve their product development processes. A challenge is to develop products according to the time plan while at the same time keeping the cost low and the quality high. This study focuses on project management within the product development process in the automotive industry. The background of this study started as a request from the research and development department at the automotive company, which led to the following questions: 1) What are the most crucial factors for project success? 2) How can these factors contribute to a more successful outcome? 3) How can project management decrease product development lead time by sharing knowledge? The research approach is a case study and the data collection consists of interviews and questionnaires at two companies, both connected to project management in product development projects. Spider charts covering eleven dimensions are created from the collected data to show similarities and differences between the project managers working within the research and development department as well as between the two companies. The main conclusions are that there is a need to allow a certain level of flexibility when managing projects, in order to more easily handle late changes. Being involved in a project from the concept phase can facilitate the product development activities later on, due to a deeper understanding of previous decisions. Further, knowledge-sharing methods, such as databases, have to be designed to suit a specific organization and to be user friendly, enabling users to more easily search for specific types of knowledge. Lastly, a low level of focus on details is shown to be another success factor; however, in some cases this detailed focus is still needed to solve specific problems, although the details should never take precedence over the holistic view.
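A spider (radar) chart over eleven dimensions, as described above, can be produced in a few lines. The dimension labels and scores below are placeholders, not the study's data:

```python
# Hypothetical sketch: radar (spider) chart of one project manager's scores on eleven
# dimensions; labels and values are placeholders, not the study's data.
import numpy as np
import matplotlib.pyplot as plt

labels = [f"Dimension {i}" for i in range(1, 12)]     # 11 assessed dimensions
scores = [3, 4, 2, 5, 4, 3, 4, 2, 5, 3, 4]            # e.g. 1-5 Likert scores

angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
scores_closed = scores + scores[:1]                   # close the polygon
angles_closed = angles + angles[:1]

ax = plt.subplot(polar=True)
ax.plot(angles_closed, scores_closed)
ax.fill(angles_closed, scores_closed, alpha=0.2)
ax.set_xticks(angles)
ax.set_xticklabels(labels, fontsize=7)
plt.show()
```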

Relevance:

30.00%

Publisher:

Abstract:

Neuroimaging research involves analyses of huge amounts of biological data that may or may not be related to cognition. This relationship is usually approached using univariate methods, and, therefore, correction methods are mandatory for reducing false positives. Nevertheless, the probability of false negatives is also increased. Multivariate frameworks have been proposed to help alleviate this trade-off. Here we apply multivariate distance matrix regression for the simultaneous analysis of biological and cognitive data, namely, structural connections among 82 brain regions and several latent factors estimating cognitive performance. We tested whether cognitive differences predict distances among individuals regarding their connectivity pattern. Beginning with 3,321 connections among regions, the 36 edges best predicted by the individuals' cognitive scores were selected. Cognitive scores were related to connectivity distances in both the full (3,321) and reduced (36) connectivity patterns. The selected edges connect regions distributed across the entire brain, and the network defined by these edges supports high-order cognitive processes such as (a) (fluid) executive control, (b) (crystallized) recognition, learning, and language processing, and (c) visuospatial processing. This multivariate study suggests that a widespread but limited number of regions in the human brain supports high-level cognitive ability differences. Hum Brain Mapp, 2016. © 2016 Wiley Periodicals, Inc.
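The core of multivariate distance matrix regression can be sketched as follows: pairwise distances between individuals' connectivity vectors are computed, Gower-centred, and related to the cognitive predictors through a pseudo-F statistic. This is a simplified illustration with random data, not the study's pipeline:

```python
# Simplified sketch of multivariate distance matrix regression (MDMR): relate pairwise
# distances between subjects' connectivity vectors to their cognitive scores.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_edges = 40, 3321                      # 82 regions -> 82*81/2 = 3321 edges
connectivity = rng.normal(size=(n_subjects, n_edges))
cognition = rng.normal(size=(n_subjects, 3))        # latent cognitive factors

# Euclidean distance matrix between subjects and its Gower-centred form
D = np.linalg.norm(connectivity[:, None, :] - connectivity[None, :, :], axis=-1)
A = -0.5 * D ** 2
J = np.eye(n_subjects) - np.ones((n_subjects, n_subjects)) / n_subjects
G = J @ A @ J

# Hat matrix of the cognitive predictors; pseudo-F compares explained vs residual distance
X = np.column_stack([np.ones(n_subjects), cognition])
H = X @ np.linalg.pinv(X.T @ X) @ X.T
R = np.eye(n_subjects) - H
pseudo_F = np.trace(H @ G @ H) / np.trace(R @ G @ R)
print(f"pseudo-F = {pseudo_F:.3f}")
```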

Relevance:

30.00%

Publisher:

Abstract:

The research activities involved the application of Geomatic techniques in the Cultural Heritage field, following the development of two themes. First, the application of high-precision surveying techniques for the restoration and interpretation of relevant monuments and archaeological finds. The main case regards the activities for the generation of a high-fidelity 3D model of the Fountain of Neptune in Bologna. In this work, aimed at the restoration of the artifact, both the geometrical and radiometric aspects were crucial. The final product was the basis of a 3D information system, a shared tool where the different professionals involved in the restoration activities contributed in a multidisciplinary approach. Second, the arrangement of 3D databases for a Building Information Modeling (BIM) approach, in a process which involves the generation and management of digital representations of the physical and functional characteristics of historical buildings, towards a so-called Historical Building Information Model (HBIM). A first application was conducted for the San Michele in Acerboli's church in Santarcangelo di Romagna. The survey was performed by integrating classical and modern Geomatic techniques, and the point cloud representing the church was used for the development of an HBIM model, where the relevant information connected to the building could be stored and georeferenced. A second application regards the domus of Obellio Firmo in Pompeii, also surveyed by integrating classical and modern Geomatic techniques. A historical analysis permitted the definition of phases and the organization of a database of materials and constructive elements. The goal is to obtain a federated model able to manage the different aspects: documental, analytic and reconstructive.

Relevance:

30.00%

Publisher:

Abstract:

The language connectome was investigated in vivo using multimodal non-invasive quantitative MRI. In PPA patients (n=18) recruited by the IRCCS ISNB, Bologna, cortical thickness measures showed a predominant reduction in the left hemisphere (p<0.005) with respect to matched healthy controls (HC) (n=18), and an accuracy of 86.1% in discrimination from Alzheimer's disease patients (n=18). The left temporal and para-hippocampal gyri significantly correlated (p<0.01) with language fluency. In PPA patients (n=31) recruited by Northwestern University, Chicago, DTI measures were longitudinally evaluated (2-year follow-up) under the supervision of Prof. M. Catani, King's College London. Significant differences with matched HC (n=27) were found, tract-localized at baseline and widespread at follow-up. Language assessment scores correlated with DTI measures of the arcuate (AF) and uncinate (UF) fasciculi. In left-ischemic stroke patients (n=16) recruited by the NatBrainLab, King's College London, language recovery was longitudinally evaluated (6-month follow-up). Using arterial spin labelling imaging, a significant correlation (p<0.01) was found between language recovery and a rightward asymmetry of cerebral blood flow in the middle cerebral artery perfusion territory. In HC (n=29) recruited by the DIBINEM Functional MR Unit, University of Bologna, an along-tract algorithm suitable for different tractography methods was developed, using the Laplacian operator. Higher AF connectivity of the left superior temporal gyrus and precentral operculum was found (Talozzi L et al., 2018), as well as lateralized UF projections towards the left dorsal orbital cortex. In HC (n=50) recruited in the Human Connectome Project, a new tractography-driven approach was developed for left association fibres, using a principal component analysis. The first component discriminated cortical areas typically connected by the AF, suggesting a good discrimination of cortical areas sharing a similar connectivity pattern. The evaluation of morphological, microstructural and metabolic measures could be used as in-vivo biomarkers to monitor language impairment related to neurodegeneration or as surrogates of cognitive rehabilitation/interventional treatment efficacy.
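The tractography-driven principal component idea mentioned above can be sketched as a PCA over connectivity profiles (streamline endpoint counts per cortical area), where the first component's weights indicate which cortical areas share a similar connectivity pattern. Matrix shapes and data below are illustrative only:

```python
# Hypothetical sketch: PCA on tractography-driven connectivity profiles to find cortical
# areas sharing a similar connectivity pattern (shapes and data are illustrative).
import numpy as np

rng = np.random.default_rng(42)
n_subjects, n_areas = 50, 180
profiles = rng.poisson(5.0, size=(n_subjects, n_areas)).astype(float)   # endpoint counts

centred = profiles - profiles.mean(axis=0)
# SVD of the centred matrix gives the principal components over cortical areas
_, singular_values, components = np.linalg.svd(centred, full_matrices=False)

first_pc = components[0]                       # weight of each cortical area on PC1
top_areas = np.argsort(np.abs(first_pc))[::-1][:10]
explained = singular_values[0] ** 2 / np.sum(singular_values ** 2)
print(f"PC1 explains {explained:.1%} of variance; top-weighted areas: {top_areas}")
```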

Relevance:

30.00%

Publisher:

Abstract:

Early definitions of Smart Building focused almost entirely on the technology aspect and did not suggest user interaction at all; indeed, today we would attribute them more to the concept of the automated building. In this sense, control of comfort conditions inside buildings is a problem that is being well investigated, since it has a direct effect on users' productivity and an indirect effect on energy saving. Therefore, from the users' perspective, a typical environment can be considered comfortable if it is capable of providing adequate thermal comfort, visual comfort, indoor air quality and acoustic comfort. In recent years, the scientific community has dealt with many challenges, especially from a technological point of view. For instance, smart sensing devices, the internet, and communication technologies have enabled a new paradigm called Edge computing, which brings computation and data storage closer to the location where they are needed, to improve response times and save bandwidth. This has allowed us to improve services, sustainability and decision making. Many solutions have been implemented, such as smart classrooms, control of the building's thermal conditions, monitoring of HVAC data for campus energy efficiency, and so forth. Though these projects contribute to the realization of a smart campus, a framework for the smart campus is yet to be defined. These new technologies have also introduced new research challenges: within this thesis work, some of the principal open challenges are faced, proposing a new conceptual framework, technologies and tools to move forward the actual implementation of smart campuses. With this in mind, several problems known in the literature have been investigated: occupancy detection, noise monitoring for acoustic comfort, context awareness inside the building, indoor wayfinding, strategic deployment for air quality, and book preservation.
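One of the listed problems, noise monitoring for acoustic comfort, illustrates the edge-computing idea well: the level is computed on the device and only threshold violations leave it, saving bandwidth. The threshold and samples below are assumed values, not the thesis' deployment:

```python
# Hypothetical edge-side sketch: compute an equivalent sound level from microphone samples
# and report only threshold violations upstream (threshold and data are assumed values).
import math

REFERENCE_PRESSURE = 2e-5        # 20 micropascals, standard reference for dB SPL
COMFORT_THRESHOLD_DB = 55.0      # assumed classroom comfort limit

def equivalent_level_db(pressure_samples):
    """Leq in dB SPL from calibrated pressure samples (pascals)."""
    mean_square = sum(p * p for p in pressure_samples) / len(pressure_samples)
    return 10.0 * math.log10(mean_square / REFERENCE_PRESSURE ** 2)

window = [0.02 * math.sin(2 * math.pi * 440 * n / 8000) for n in range(8000)]  # 1 s of tone
level = equivalent_level_db(window)
if level > COMFORT_THRESHOLD_DB:
    print(f"noise event: {level:.1f} dB SPL")   # only the event is sent to the backend
else:
    print(f"within comfort range: {level:.1f} dB SPL")
```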

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, the spread of the air pollution crisis, compounded by greenhouse gas emissions, is worsening global warming. In this context, the transportation sector plays a vital role, since it is responsible for a large part of carbon dioxide production. In order to address these issues, the present thesis deals with the development of advanced control strategies for the energy efficiency optimization of plug-in hybrid electric vehicles (PHEVs), supported by the prediction of future working conditions of the powertrain. In particular, a Dynamic Programming algorithm has been developed for the combined optimization of vehicle energy and battery thermal management. To this end, the battery temperature and the battery cooling circuit control signal have been considered as an additional state variable and control variable, respectively. Moreover, an adaptive equivalent consumption minimization strategy (A-ECMS) has been modified to handle zero-emission zones, where engine propulsion is not allowed. Navigation data represent an essential element in the achievement of these tasks. With this aim, a novel simulation and testing environment has been developed during the PhD research activity as an effective tool to retrieve routing information from map service providers via vehicle-to-everything connectivity. Comparisons between the developed and reference strategies are also made in order to assess their impact on vehicle energy consumption. All the activities presented in this doctoral dissertation have been carried out at the Green Mobility Research Lab (GMRL), a research center resulting from the partnership between the University of Bologna and FEV Italia s.r.l., the industrial partner of the research project.
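The ECMS idea referenced above can be reduced to its core cost: at each step the strategy chooses the engine/battery power split minimizing fuel power plus an equivalence factor times battery power. The candidate splits, efficiency value and the zero-emission-zone handling below are simplified assumptions, not the thesis' A-ECMS implementation:

```python
# Simplified ECMS sketch: choose the engine/battery power split that minimizes
# fuel power + equivalence_factor * battery power (all numbers are illustrative).
def ecms_split(p_demand_kw, equivalence_factor, in_zero_emission_zone=False,
               engine_efficiency=0.35, candidates=21):
    best = None
    for i in range(candidates):
        p_batt = p_demand_kw * i / (candidates - 1)       # battery share of the demand
        p_engine = p_demand_kw - p_batt
        if in_zero_emission_zone and p_engine > 0.0:
            continue                                      # engine propulsion not allowed
        fuel_power = p_engine / engine_efficiency         # chemical power drawn as fuel
        cost = fuel_power + equivalence_factor * p_batt   # ECMS equivalent consumption
        if best is None or cost < best[0]:
            best = (cost, p_engine, p_batt)
    return best                                           # (cost, engine kW, battery kW)

print(ecms_split(30.0, equivalence_factor=3.0))                               # normal driving
print(ecms_split(30.0, equivalence_factor=3.0, in_zero_emission_zone=True))   # forced electric
```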

Relevance:

30.00%

Publisher:

Abstract:

Reinforcement learning is a particular paradigm of machine learning that, recently, has proved time and time again to be a very effective and powerful approach. Cryptography, on the other hand, usually takes the opposite direction: while machine learning aims at analyzing data, cryptography aims at maintaining its privacy by hiding such data. However, the two techniques can be jointly used to create privacy-preserving models, able to make inferences on the data without leaking sensitive information. Despite the numerous studies on machine learning and cryptography, reinforcement learning in particular has never been applied to such cases before. Being able to successfully make use of reinforcement learning in an encrypted scenario would allow us to create an agent that efficiently controls a system without providing it with full knowledge of the environment it is operating in, leading the way to many possible use cases. Therefore, we have decided to apply the reinforcement learning paradigm to encrypted data. In this project we have applied one of the most well-known reinforcement learning algorithms, Deep Q-Learning, to simple simulated environments and studied how the encryption affects the training performance of the agent, in order to see if it is still able to learn how to behave even when the input data is no longer readable by humans. The results of this work highlight that the agent is still able to learn with no issues whatsoever in small state spaces with non-secure encryptions, like AES in ECB mode. For fixed environments, it is also able to reach a suboptimal solution even in the presence of secure modes, like AES in CBC mode, showing a significant improvement with respect to a random agent; however, its ability to generalize in stochastic environments or big state spaces suffers greatly.
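The setup described above, a DQN agent trained on observations it cannot read, can be sketched as an observation-encryption wrapper placed between the environment and the agent. The sketch assumes the pycryptodome package for AES-ECB; the key, encoding and function names are illustrative, not the project's code:

```python
# Hypothetical sketch: wrap environment observations with AES-ECB encryption before they
# reach a DQN agent (assumes the pycryptodome package; names and key are illustrative).
import numpy as np
from Crypto.Cipher import AES

KEY = b"0123456789abcdef"              # 16-byte AES key (toy value)
cipher = AES.new(KEY, AES.MODE_ECB)

def encrypt_state(state: np.ndarray) -> np.ndarray:
    """Serialize the state, pad to the 16-byte AES block size, encrypt with ECB,
    and return the ciphertext as a float vector the Q-network can consume."""
    raw = state.astype(np.float32).tobytes()
    pad = (-len(raw)) % 16
    ciphertext = cipher.encrypt(raw + b"\x00" * pad)
    return np.frombuffer(ciphertext, dtype=np.uint8).astype(np.float32) / 255.0

# Usage: the agent only ever sees encrypt_state(obs), never obs itself, so learning has
# to succeed on data that is no longer readable by humans.
print(encrypt_state(np.array([0.1, -0.4, 0.02, 0.3])))
```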

Relevance:

30.00%

Publisher:

Abstract:

The availability of a huge amount of source code from code archives and open-source projects opens up the possibility to merge the machine learning, programming languages, and software engineering research fields. This area is often referred to as Big Code, where programming languages are treated in place of natural languages and different features and patterns of code can be exploited to perform many useful tasks and build supportive tools. Among all the possible applications which can be developed within the area of Big Code, the work presented in this research thesis mainly focuses on two particular tasks: Programming Language Identification (PLI) and Software Defect Prediction (SDP) for source code. Programming language identification is commonly needed in program comprehension and is usually performed directly by developers. However, at large scales, such as in widely used archives (GitHub, Software Heritage), automation of this task is desirable. To accomplish this aim, the problem is analyzed from different points of view (text- and image-based learning approaches) and different models are created, paying particular attention to their scalability. Software defect prediction is a fundamental step in software development for improving quality and assuring the reliability of software products. In the past, defects were searched for by manual inspection or using automatic static and dynamic analyzers. Now, the automation of this task can be tackled using learning approaches that can speed up and improve the related procedures. Here, two models have been built and analyzed to detect some of the most common bugs and errors at different code granularity levels (file and method levels). The data used and the models' architectures are analyzed and described in detail. Quantitative and qualitative results are reported for both PLI and SDP tasks, while differences and similarities with respect to related work are discussed.
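A minimal text-based illustration of programming language identification is a character n-gram classifier over raw source text. The snippets, labels and model choice below are illustrative, not the thesis' models:

```python
# Hypothetical sketch: programming language identification from source text using
# character n-gram features and a linear classifier (data and model are illustrative).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    "def main():\n    print('hello')",          # Python
    "#include <stdio.h>\nint main(void) { }",   # C
    "public static void main(String[] args)",   # Java
    "fn main() { println!(\"hello\"); }",       # Rust
]
labels = ["Python", "C", "Java", "Rust"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),   # character n-gram features
    LogisticRegression(max_iter=1000),
)
model.fit(snippets, labels)
print(model.predict(["import numpy as np\nprint(np.zeros(3))"]))
```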

Relevance:

30.00%

Publisher:

Abstract:

The Internet of Things (IoT) has grown rapidly in recent years, leading to an increased need for efficient and secure communication between connected devices. Wireless Sensor Networks (WSNs) are composed of small, low-power devices that are capable of sensing and exchanging data, and are often used in IoT applications. In addition, Mesh WSNs involve intermediate nodes forwarding data to ensure more robust communication. The integration of Unmanned Aerial Vehicles (UAVs) into Mesh WSNs has emerged as a promising solution for increasing the effectiveness of data collection, as UAVs can act as mobile relays, providing extended communication range and reducing energy consumption. However, the integration of UAVs and Mesh WSNs still poses new challenges, such as the design of efficient control and communication strategies. This thesis explores the networking capabilities of WSNs and investigates how the integration of UAVs can enhance their performance. The research focuses on three main objectives: (1) Ground Wireless Mesh Sensor Networks, (2) Aerial Wireless Mesh Sensor Networks, and (3) Ground/Aerial WMSN integration. For the first objective, we investigate the use of the Bluetooth Mesh standard for IoT monitoring in different environments. The second objective focuses on deploying aerial nodes to maximize data collection effectiveness and the QoS of UAV-to-UAV links while maintaining aerial mesh connectivity. The third objective investigates hybrid WMSN scenarios with air-to-ground communication links. One of the main contributions of the thesis is the design and implementation of a software framework called "Uhura", which enables the creation of Hybrid Wireless Mesh Sensor Networks and abstracts and handles multiple M2M communication stacks on both ground and aerial links. The operations of Uhura have been validated through simulations and small-scale testbeds involving ground and aerial devices.
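The stack-abstraction idea attributed to Uhura can be illustrated with a small adapter layer: a common send interface with one adapter per underlying M2M stack, so application code is unaware of whether a message travels over a ground or an aerial link. Class and method names here are hypothetical, not Uhura's actual API:

```python
# Hypothetical sketch of an abstraction layer over multiple M2M communication stacks,
# in the spirit of what is described for Uhura (names and methods are invented here).
from abc import ABC, abstractmethod

class MeshLink(ABC):
    """Common interface every underlying stack adapter must implement."""
    @abstractmethod
    def send(self, destination: str, payload: bytes) -> None: ...

class BluetoothMeshLink(MeshLink):
    def send(self, destination: str, payload: bytes) -> None:
        print(f"[BT Mesh]  -> {destination}: {payload!r}")   # would call the BLE mesh stack

class AirToGroundLink(MeshLink):
    def send(self, destination: str, payload: bytes) -> None:
        print(f"[UAV link] -> {destination}: {payload!r}")   # would call the aerial radio

def broadcast(links, destination, payload):
    """Send the same payload over every available ground/aerial link."""
    for link in links:
        link.send(destination, payload)

broadcast([BluetoothMeshLink(), AirToGroundLink()], "node-07", b"temp=21.5")
```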

Relevance:

30.00%

Publisher:

Abstract:

In recent years, vehicle acoustics have gained significant importance in new car development: increasingly advanced infotainment systems for spatial audio and sound enhancement algorithms have become the norm in modern vehicles. In the past, car manufacturers had to build numerous prototypes to study the sound behaviour inside the car cabin or the effect of new algorithms under development. Nowadays, advanced simulation techniques can reduce development costs and time. In this work, after selecting the reference test vehicle, a modern luxury sedan equipped with a high-end sound system, two independent tools were developed: a simulation tool created in the Comsol Multiphysics environment and an auralization tool developed in the Cycling '74 Max environment. The simulation tool can calculate the impulse response and acoustic spectrum at a specific position inside the cockpit. Its input data are the vehicle's geometry, the acoustic absorption parameters of materials, the acoustic characteristics and positions of loudspeakers, and the type and positions of virtual microphones (or microphone arrays). The simulation tool can also provide binaural impulse responses thanks to Head Related Transfer Functions (HRTFs) and an innovative algorithm able to compute the HRTF at any distance and angle from the head. Impulse responses from simulations or acoustic measurements inside the car cabin are processed and fed into the auralization tool, enabling real-time interaction by applying filters, changing the channel gains or displaying the acoustic spectrum. Since the acoustic simulation of a vehicle involves multiple topics, the focus of this work has been not only the development of the two tools but also the study and application of new techniques for the acoustic characterization of the materials that compose the cockpit and for loudspeaker simulation. Specifically, three different methods have been applied for material characterization, through the use of a pressure-velocity probe, a Laser Doppler Vibrometer (LDV), and a microphone array.
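At its core, the auralization step described above, rendering a dry signal through binaural impulse responses, reduces to one convolution per ear. The sketch below uses synthetic impulse responses and a synthetic source signal purely for illustration; in practice the impulse responses would come from the simulation tool or from in-cabin measurements:

```python
# Minimal auralization sketch: convolve a dry (anechoic) signal with left/right binaural
# impulse responses to obtain an in-cabin binaural signal (all data here is synthetic).
import numpy as np
from scipy.signal import fftconvolve

fs = 48_000                                            # sample rate in Hz
source = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # 1 s dry test tone

# Placeholder binaural impulse responses (in practice: simulated or measured in the cabin)
rng = np.random.default_rng(1)
ir_left = rng.normal(0, 0.01, 2048) * np.exp(-np.arange(2048) / 300)
ir_right = np.roll(ir_left, 12) * 0.9                  # crude interaural delay/level cue

binaural = np.stack([fftconvolve(source, ir_left), fftconvolve(source, ir_right)], axis=1)
binaural /= np.max(np.abs(binaural))                   # normalize before playback/export
print(binaural.shape)                                  # (samples, 2) -> stereo buffer
```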