27 results for Integration And Modeling

Relevance: 100.00%

Abstract:

This thesis deals with optimization techniques and modeling of vehicular networks. Models based on integer linear programming (ILP), together with heuristics, made it possible to study the performance of 5G networks for vehicular applications. The Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) paradigms made it possible to study the performance of different classes of service, such as the Ultra-Reliable Low-Latency Communications (URLLC) class and the enhanced Mobile BroadBand (eMBB) class, and how the functional split can have positive effects on network resource management. Two different protection techniques have been studied, Shared Path Protection (SPP) and Dedicated Path Protection (DPP), which make it possible to achieve different network reliability requirements according to the needs of the end user. A simulator developed in Python was then used to study the dynamic allocation of resources in a 5G metro network. Through different provisioning algorithms and different dynamic resource management techniques, useful results have been obtained for understanding the needs of the vehicular networks that will exploit 5G. Finally, two models are shown for reconfiguring backup resources when shared path protection is used.
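The provisioning loop with the two protection schemes can be sketched in Python; the `Link` class, the capacity numbers, and the way SPP shares a single backup pool are simplified assumptions for illustration, not the thesis's actual simulator:

```python
# Hypothetical sketch of a first-fit provisioning loop with dedicated (DPP)
# vs shared (SPP) backup capacity. All names and numbers are illustrative.

class Link:
    def __init__(self, capacity):
        self.capacity = capacity      # total capacity units
        self.primary = 0              # units used by primary paths
        self.backup = 0               # units reserved for backup paths

    def free(self):
        return self.capacity - self.primary - self.backup

def provision(link, demand, protection="DPP"):
    """Try to place one demand with its protection; return True on success."""
    if protection == "DPP":
        needed_backup = demand        # dedicated: full duplicate reservation
    else:                             # SPP: grow the shared pool only if needed
        needed_backup = max(0, demand - link.backup)
    if link.free() >= demand + needed_backup:
        link.primary += demand
        link.backup += needed_backup
        return True
    return False

link = Link(capacity=100)
accepted = sum(provision(link, d, "SPP") for d in [30, 30, 20])
```

Under SPP the second demand reuses the 30 units of backup reserved by the first, which is exactly why shared protection admits more traffic than DPP at the price of weaker reliability guarantees.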

Relevance: 100.00%

Abstract:

In recent years, IoT technology has radically transformed many crucial industrial and service sectors, such as healthcare. The multi-faceted heterogeneity of the devices and of the collected information provides important opportunities to develop innovative systems and services. However, the ubiquitous presence of data silos and the poor semantic interoperability in the IoT landscape constitute a significant obstacle in the pursuit of this goal. Moreover, deriving actionable knowledge from the collected data requires IoT information sources to be analysed using appropriate artificial intelligence techniques such as automated reasoning. In this thesis work, Semantic Web technologies have been investigated as an approach to address both the data integration and the reasoning aspects of modern IoT systems. In particular, the contributions presented in this thesis are the following: (1) the IoT Fitness Ontology, an OWL ontology developed in order to overcome the issue of data silos and enable semantic interoperability in the IoT fitness domain; (2) a Linked Open Data web portal for collecting and sharing IoT health datasets with the research community; (3) a novel methodology for embedding knowledge in rule-defined IoT smart home scenarios; and (4) a knowledge-based IoT home automation system that supports a seamless integration of heterogeneous devices and data sources.
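The kind of automated reasoning such an ontology enables can be illustrated with a stdlib-only sketch of forward chaining over RDF-style triples; the `fit:`/`sosa:` terms below are invented placeholders, not the actual IoT Fitness Ontology vocabulary:

```python
# Minimal forward-chaining RDFS-style reasoner over a set of triples,
# illustrating subclass transitivity and type propagation. The ontology
# terms are hypothetical stand-ins for an IoT fitness vocabulary.

TYPE, SUBCLASS = "rdf:type", "rdfs:subClassOf"

triples = {
    ("fit:HeartRateSensor", SUBCLASS, "fit:WearableSensor"),
    ("fit:WearableSensor", SUBCLASS, "sosa:Sensor"),
    ("ex:polarH10", TYPE, "fit:HeartRateSensor"),
}

def infer(kb):
    """Apply subclass transitivity and type propagation until fixpoint."""
    kb = set(kb)
    while True:
        new = set()
        for s, p, o in kb:
            for s2, p2, o2 in kb:
                if p == SUBCLASS and p2 == SUBCLASS and o == s2:
                    new.add((s, SUBCLASS, o2))   # subClassOf is transitive
                if p == TYPE and p2 == SUBCLASS and o == s2:
                    new.add((s, TYPE, o2))       # instances inherit supertypes
        if new <= kb:
            return kb
        kb |= new

closed = infer(triples)
```

After closure, the individual device is recognized as a generic sensor even though that fact was never stated explicitly, which is the interoperability payoff of shared semantics.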

Relevance: 100.00%

Abstract:

Spectral sensors are a wide class of devices that are extremely useful for detecting essential information about the environment and materials with a high degree of selectivity. Recently, they have reached high degrees of integration and low implementation costs, making them suited for fast, small, and non-invasive monitoring systems. However, the useful information is hidden in the spectra and is difficult to decode, so mathematical algorithms are needed to infer the value of the variables of interest from the acquired data. Among the different families of predictive modeling, Principal Component Analysis and the techniques stemming from it can provide very good performance with small computational and memory requirements, which allows the prediction to be implemented even in embedded and autonomous devices. In this thesis, I present four practical applications of these algorithms to the prediction of different variables: moisture of soil, moisture of concrete, freshness of anchovies/sardines, and concentration of gases. In all of these cases, the workflow is the same. Initially, an acquisition campaign was performed to acquire both spectra and the variables of interest from samples. These data were then used as input for the creation of the prediction models, to solve both classification and regression problems. From these models, an array of calibration coefficients was derived and used to implement the prediction in an embedded system. The presented results show that this workflow was successfully applied to very different scientific fields, obtaining autonomous and non-invasive devices able to predict the value of physical parameters of choice from new spectral acquisitions.
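The PCA-plus-regression workflow described above can be sketched with NumPy on synthetic data; the spectra, the number of components, and the collapse into a single calibration coefficient array are illustrative assumptions, not the thesis's calibration:

```python
import numpy as np

# Sketch: PCA compression of spectra followed by linear regression in PC
# space, collapsed into one coefficient vector suitable for an embedded
# device. Data are synthetic stand-ins for real spectral acquisitions.

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 64))                  # 40 spectra, 64 wavelengths
y = X @ rng.normal(size=64) + 0.1 * rng.normal(size=40)  # variable of interest

# PCA via SVD on mean-centered spectra
mu = X.mean(axis=0)
Xc = X - mu
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 5
scores = Xc @ Vt[:k].T                          # project onto first k PCs

# least-squares regression on the scores (with intercept)
coef, *_ = np.linalg.lstsq(
    np.column_stack([scores, np.ones(len(y))]), y, rcond=None)

# collapse PCA + regression into one array: y_hat = spectrum @ b + b0
b = Vt[:k].T @ coef[:k]
b0 = coef[-1] - mu @ b

pred = X @ b + b0
```

The final `(b, b0)` pair is the "array of calibration coefficients" idea: on the embedded target, prediction reduces to a single dot product per new spectrum.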

Relevance: 90.00%

Abstract:

Many combinatorial problems coming from the real world may not have a clear and well-defined structure, typically being dirtied by side constraints or being composed of two or more sub-problems, usually not disjoint. Such problems are not suitable to be solved with pure approaches based on a single programming paradigm, because a paradigm that can effectively face one problem characteristic may behave inefficiently when facing the others. In these cases, modelling the problem using different programming techniques, trying to "take the best" from each technique, can produce solvers that largely dominate pure approaches. We demonstrate the effectiveness of hybridization and discuss different hybridization techniques by analyzing two classes of problems with particular structures, exploiting Constraint Programming and Integer Linear Programming solving tools, and Algorithm Portfolios and Logic-Based Benders Decomposition as integration and hybridization frameworks.
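The algorithm-portfolio idea (complementary solvers on the same instance, keeping the best incumbent under a shared deadline) can be illustrated with a toy sketch; the knapsack-like instance and the two solvers are deliberately simplistic stand-ins, not the CP/ILP tools used in the thesis:

```python
import itertools, time

# Toy algorithm portfolio: an exhaustive solver and a greedy heuristic run
# on the same subset-sum-style instance; the portfolio keeps the best result.

def brute_force(weights, capacity, deadline):
    best = (0, ())
    for r in range(len(weights) + 1):
        for combo in itertools.combinations(weights, r):
            if time.monotonic() > deadline:
                return best                    # anytime: return incumbent
            if sum(combo) <= capacity and sum(combo) > best[0]:
                best = (sum(combo), combo)
    return best

def greedy(weights, capacity, deadline):
    total, picked = 0, []
    for w in sorted(weights, reverse=True):
        if total + w <= capacity:
            total += w
            picked.append(w)
    return (total, tuple(picked))

def portfolio(solvers, weights, capacity, budget=0.5):
    deadline = time.monotonic() + budget
    results = [s(weights, capacity, deadline) for s in solvers]
    return max(results, key=lambda r: r[0])    # keep the best incumbent

value, items = portfolio([greedy, brute_force], [5, 4, 3, 2], capacity=9)
```

The complementarity is the point: the heuristic is fast but may miss the optimum, the exact solver is complete but may hit the deadline; together they dominate either one alone.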

Relevance: 90.00%

Abstract:

Recently, a rising interest in political and economic integration/disintegration issues has been developed in the political economy field. This growing strand of literature partly draws on traditional issues of fiscal federalism and optimum public good provision and focuses on a trade-off between the benefits of centralization, arising from economies of scale or externalities, and the costs of harmonizing policies as a consequence of the increased heterogeneity of individual preferences in an international union or in a country composed of at least two regions. This thesis stems from this strand of literature and aims to shed some light on two highly relevant aspects of the political economy of European integration. The first concerns the role of public opinion in the integration process; more precisely, how economic benefits and costs of integration shape citizens' support for European Union (EU) membership. The second is the allocation of policy competences among different levels of government: European, national and regional. Chapter 1 introduces the topics developed in this thesis by reviewing the main recent theoretical developments in the political economy analysis of integration processes. It is structured as follows. First, it briefly surveys a few relevant articles on economic theories of integration and disintegration processes (Alesina and Spolaore 1997, Bolton and Roland 1997, Alesina et al. 2000, Casella and Feinstein 2002) and discusses their relevance for the study of the impact of economic benefits and costs on public opinion attitude towards the EU. Subsequently, it explores the links existing between such political economy literature and theories of fiscal federalism, especially with regard to normative considerations concerning the optimal allocation of competences in a union. 
Chapter 2 first proposes a model of citizens' support for membership of international unions, with explicit reference to the EU; subsequently it tests the model on a panel of EU countries. What are the factors that influence public opinion support for the European Union (EU)? In international relations theory, the idea that citizens' support for the EU depends on material benefits deriving from integration, i.e. whether European integration makes individuals economically better off (utilitarian support), has been common since the 1970s, but has never been the subject of a formal treatment (Hix 2005). A small number of studies in the 1990s investigated econometrically the link between national economic performance and mass support for European integration (Eichenberg and Dalton 1993; Anderson and Kalthenthaler 1996), but only making informal assumptions. The main aim of Chapter 2 is thus to propose and test our model with a view to providing a more complete and theoretically grounded picture of public support for the EU. Following theories of utilitarian support, we assume that citizens are in favour of membership if they receive economic benefits from it. To develop this idea, we propose a simple political economy model drawing on the recent economic literature on integration and disintegration processes. The basic element is the existence of a trade-off between the benefits of centralisation and the costs of harmonising policies in the presence of heterogeneous preferences among countries. The approach we follow is that of the recent literature on the political economy of international unions and the unification or break-up of nations (Bolton and Roland 1997, Alesina and Wacziarg 1999, Alesina et al. 2001, 2005a, to mention only the most relevant). The general perspective is that unification provides returns to scale in the provision of public goods, but reduces each member state's ability to determine its most favoured bundle of public goods.
In the simple model presented in Chapter 2, support for membership of the union is increasing in the union’s average income and in the loss of efficiency stemming from being outside the union, and decreasing in a country’s average income, while increasing heterogeneity of preferences among countries points to a reduced scope of the union. Afterwards we empirically test the model with data on the EU; more precisely, we perform an econometric analysis employing a panel of member countries over time. The second part of Chapter 2 thus tries to answer the following question: does public opinion support for the EU really depend on economic factors? The findings are broadly consistent with our theoretical expectations: the conditions of the national economy, differences in income among member states and heterogeneity of preferences shape citizens’ attitude towards their country’s membership of the EU. Consequently, this analysis offers some interesting policy implications for the present debate about ratification of the European Constitution and, more generally, about how the EU could act in order to gain more support from the European public. Citizens in many member states are called to express their opinion in national referenda, which may well end up in rejection of the Constitution, as recently happened in France and the Netherlands, triggering a European-wide political crisis. These events show that nowadays understanding public attitude towards the EU is not only of academic interest, but has a strong relevance for policy-making too. Chapter 3 empirically investigates the link between European integration and regional autonomy in Italy. 
Over the last few decades, the double tendency towards supranationalism and regional autonomy which has characterised some European states has taken a very interesting form in this country, because Italy, besides being one of the founding members of the EU, also implemented a process of decentralisation during the 1970s, further strengthened by a constitutional reform in 2001. Moreover, the issue of the allocation of competences among the EU, the Member States and the regions is now especially topical. The process leading to the drafting of the European Constitution (even if it has not come into force) has attracted much attention from a constitutional political economy perspective, from both a normative and a positive point of view (Breuss and Eller 2004, Mueller 2005). The Italian parliament has recently passed a new thorough constitutional reform, still to be approved by citizens in a referendum, which includes, among other things, the so-called "devolution", i.e. granting the regions exclusive competence in public health care, education and local police. Following and extending the methodology proposed in a recent influential article by Alesina et al. (2005b), which concentrated only on EU activity (treaties, legislation, and European Court of Justice rulings), we develop a set of quantitative indicators measuring the intensity of the legislative activity of the Italian State, the EU and the Italian regions from 1973 to 2005 in a large number of policy categories. By doing so, we seek to answer the following broad questions. Are European and regional legislations substitutes for state laws? To what extent are the competences attributed by the European treaties or the Italian Constitution actually exerted in the various policy areas? Is their exertion consistent with the normative recommendations from the economic literature about their optimum allocation among different levels of government?
The main results show that, first, there seems to be a certain substitutability between EU and national legislations (even if not a very strong one), but not between regional and national ones. Second, the EU concentrates its legislative activity mainly in international trade and agriculture, whilst social policy is where the regions and the State (which is also the main actor in foreign policy) are more active. Third, at least two levels of government (in some cases all of them) are significantly involved in the legislative activity in many sectors, even where the rationale for that is, at best, very questionable, indicating that they actually share a larger number of policy tasks than suggested by economic theory. It appears therefore that an excessive number of competences are actually shared among different levels of government. From an economic perspective, it may well be recommended that some competences be shared, but only when the balance between scale or spillover effects and heterogeneity of preferences suggests so. When, on the contrary, too many levels of government are involved in a certain policy area, the distinction between their different responsibilities easily becomes unnecessarily blurred. This may not only lead to a slower and less efficient policy-making process, but also risks making it too complicated for citizens to understand, whereas citizens should be able to know who is really responsible for a certain policy when they vote in national, local or European elections or in referenda on national or European constitutional issues.
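The comparative statics of the support model described in Chapter 2 (support rising with the union's average income and with the efficiency loss of staying outside, falling with own income and preference heterogeneity) can be summarized in a stylized linear form; this is a sketch in our own notation, not necessarily the chapter's actual functional form:

```latex
% Stylized support function consistent with the comparative statics above:
% citizen support in country i rises with union average income \bar{y}_U and
% with the efficiency loss \ell_i of staying outside, and falls with own
% income y_i and preference heterogeneity h_i.
S_i \;=\; \alpha\,\bar{y}_U \;+\; \beta\,\ell_i \;-\; \gamma\,y_i \;-\; \delta\,h_i,
\qquad \alpha,\beta,\gamma,\delta > 0,
\qquad \text{support membership iff } S_i > 0 .
```

A linear specification of this kind is also the natural bridge to the panel regression: the signs of the estimated coefficients are the testable predictions.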

Relevance: 90.00%

Abstract:

Neuronal networks exhibit diverse types of plasticity, including the activity-dependent regulation of synaptic functions and refinement of synaptic connections. In addition, continuous generation of new neurons in the "adult" brain (adult neurogenesis) represents a powerful form of structural plasticity establishing new connections and possibly implementing pre-existing neuronal circuits (Kempermann et al, 2000; Ming and Song, 2005). Neurotrophins, a family of neuronal growth factors, are crucially involved in the modulation of activity-dependent neuronal plasticity. The first evidence for the physiological importance of this role evolved from the observations that the local administration of neurotrophins has dramatic effects on the activity-dependent refinement of synaptic connections in the visual cortex (McAllister et al, 1999; Berardi et al, 2000; Thoenen, 1995). Moreover, the local availability of critical amounts of neurotrophins appears to be relevant for the ability of hippocampal neurons to undergo long-term potentiation (LTP) of the synaptic transmission (Lu, 2004; Aicardi et al, 2004). To achieve a comprehensive understanding of the modulatory role of neurotrophins in integrated neuronal systems, information on the mechanisms of local neurotrophin synthesis and secretion, as well as on the distribution of their cognate receptors, is of crucial importance. In the first part of this doctoral thesis I used electrophysiological approaches and real-time imaging techniques to investigate additional features of the regulation of neurotrophin secretion, namely the capability of the neurotrophin brain-derived neurotrophic factor (BDNF) to undergo synaptic recycling.
In cortical and hippocampal slices as well as in dissociated cell cultures, neuronal activity rapidly enhances the neuronal expression and secretion of BDNF, which is subsequently taken up by neurons themselves but also by perineuronal astrocytes, through the selective activation of BDNF receptors. Moreover, internalized BDNF becomes part of the releasable source of the neurotrophin, which is promptly recruited for activity-dependent recycling. Thus, we described for the first time that neurons and astrocytes contain an endocytic compartment competent for BDNF recycling, suggesting a specialized form of bidirectional communication between neurons and glia. The mechanism of BDNF recycling is reminiscent of that for neurotransmitters and identifies BDNF as a new modulator implicated in neuro- and glio-transmission. In the second part of this doctoral thesis I addressed the role of BDNF signaling in adult hippocampal neurogenesis. I generated a transgenic mouse model to specifically investigate the influence of BDNF signaling on the generation, differentiation, survival and connectivity of newborn neurons in the adult hippocampal network. I demonstrated that the survival of newborn neurons critically depends on the activation of the BDNF receptor TrkB. The TrkB-dependent decision regarding life or death in these newborn neurons takes place right at the transition point of their morphological and functional maturation. Before newborn neurons start to die, they exhibit a drastic reduction in dendritic complexity and spine density compared to wild-type newborn neurons, indicating that this receptor is required for the connectivity of newborn neurons. Both the failure to become integrated and the subsequent dying lead to impaired LTP. Finally, mice lacking a functional TrkB in the restricted population of newborn neurons show behavioral deficits, namely increased anxiety-like behavior.
These data suggest that the integration and establishment of proper connections by newly generated neurons into the pre-existing network are relevant features for regulating the emotional state of the animal.

Relevance: 90.00%

Abstract:

Several MCAO systems are under study to improve the angular resolution of the current and future generations of large ground-based telescopes (diameters in the 8-40 m range). The subject of this PhD Thesis is embedded in this context. Two MCAO systems, in different realization phases, are addressed in this Thesis: NIRVANA, the 'double' MCAO system designed for one of the interferometric instruments of LBT, is in the integration and testing phase; MAORY, the future E-ELT MCAO module, is under preliminary study. These two systems tackle the sky coverage problem in two different ways. The layer-oriented approach of NIRVANA, coupled with multi-pyramid wavefront sensors, takes advantage of the optical co-addition of the signal coming from up to 12 NGS in an annular 2' to 6' technical FoV and up to 8 in the central 2' FoV. Summing the light coming from many natural sources makes it possible to increase the limiting magnitude of the single NGS and to improve the sky coverage considerably. One of the two Wavefront Sensors for the mid- and high-altitude atmosphere analysis has been integrated and tested as a stand-alone unit in the laboratory at INAF-Osservatorio Astronomico di Bologna and afterwards delivered to the MPIA laboratories in Heidelberg, where it was integrated and aligned to the post-focal optical relay of one LINC-NIRVANA arm. A number of tests were performed in order to characterize and optimize the system functionalities and performance. A report about this work is presented in Chapter 2. In the MAORY case, to ensure correction uniformity and sky coverage, the LGS-based approach is the current baseline. However, since the Sodium layer is approximately 10 km thick, the artificial reference source looks elongated, especially when observed from the edge of a large aperture.
On a 30-40 m class telescope, for instance, the maximum elongation varies between a few arcsec and 10 arcsec, depending on the actual telescope diameter, on the Sodium layer properties and on the laser launcher position. The centroiding error in a Shack-Hartmann WFS increases proportionally to the elongation (in a photon-noise-dominated regime), strongly limiting the performance. A straightforward solution to compensate for this effect is to increase the laser power, i.e. to increase the number of detected photons per subaperture. The scope of Chapter 3 is twofold: an analysis of the performance of three different algorithms (Weighted Center of Gravity, Correlation and Quad-cell) for the instantaneous measurement of the LGS image position in the presence of elongated spots, and the determination of the number of photons required to achieve a certain average wavefront error over the telescope aperture. An alternative optical solution to the spot elongation problem is proposed in Section 3.4. Starting from the considerations presented in Chapter 3, a first-order analysis of the LGS WFS for MAORY (number of subapertures, number of detected photons per subaperture, RON, focal plane sampling, subaperture FoV) is the subject of Chapter 4. An LGS WFS laboratory prototype was designed to reproduce the relevant aspects of an LGS SH WFS for the E-ELT and to evaluate the performance of different centroid algorithms in the presence of elongated spots, as investigated numerically and analytically in Chapter 3. This prototype makes it possible to simulate realistic Sodium profiles. A full testing plan for the prototype is set out in Chapter 4.
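A Weighted Center of Gravity estimate, one of the three centroid algorithms mentioned above, can be sketched on a synthetic elongated spot; the spot parameters and the Gaussian weighting map are illustrative assumptions, not the prototype's actual configuration:

```python
import numpy as np

# WCoG sketch: centroid of an elongated Gaussian spot, weighted by a
# Gaussian map centred on the expected position. Parameters are invented.

y, x = np.mgrid[0:32, 0:32]
cx, cy = 17.3, 15.2                                # true spot position
spot = np.exp(-((x - cx) ** 2 / (2 * 2.0 ** 2)     # narrow axis
                + (y - cy) ** 2 / (2 * 6.0 ** 2))) # elongated axis

def wcog(img, weights):
    """Weighted centre of gravity: first moments of the weighted image."""
    w = img * weights
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return (w * xx).sum() / w.sum(), (w * yy).sum() / w.sum()

# weighting map centred on the nominal spot position
weights = np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / (2 * 8.0 ** 2))
ex, ey = wcog(spot, weights)
```

The weighting suppresses the faint far tails of the elongated axis, which is precisely how WCoG trades a small, known bias for reduced noise sensitivity compared with a plain centre of gravity.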

Relevance: 90.00%

Abstract:

The present work is devoted to the assessment of the physics of the energy fluxes, in the space of scales and in physical space, of wall-turbulent flows. The generalized Kolmogorov equation is applied to DNS data of a turbulent channel flow in order to describe the paths of the energy fluxes from production to dissipation in the augmented space of wall-turbulent flows. This multidimensional description is shown to be crucial to understand the formation and sustainment of the turbulent fluctuations fed by the energy fluxes coming from the near-wall production region. An unexpected behavior of the energy fluxes emerges from this analysis, consisting of spiral-like paths in the combined physical/scale space, where the controversial reverse energy cascade plays a central role. The observed behavior conflicts with the classical notion of the Richardson/Kolmogorov energy cascade and may have strong repercussions on both theoretical and modeling approaches to wall-turbulence. To this aim, a new relation stating the leading physical processes governing the energy transfer in wall-turbulence is suggested and shown to be able to capture most of the rich dynamics of the shear-dominated region of the flow. Two dynamical processes are identified as driving mechanisms for the fluxes, one in the near-wall region and a second one further away from the wall. The former, stronger one is related to the dynamics involved in the near-wall turbulence regeneration cycle. The second suggests an outer self-sustaining mechanism which is asymptotically expected to take place in the log-layer and could explain the debated mixed inner/outer scaling of the near-wall statistics. The same approach is applied for the first time to a filtered velocity field. A generalized Kolmogorov equation specialized for the filtered velocity field is derived and discussed.
The results show what effects the subgrid scales have on the resolved motion in both physical and scale space, singling out the prominent role of the filter length compared to the cross-over scale between production-dominated scales and the inertial range, lc, and to the reverse energy cascade region, lb. The systematic characterization of the resolved and subgrid physics as a function of the filter scale and of the wall distance is shown to be instrumental for a correct use of LES models in the simulation of wall-turbulent flows. Taking inspiration from the new relation for the energy transfer in wall turbulence, a new class of LES models is also proposed. Finally, the generalized Kolmogorov equation specialized for filtered velocity fields is shown to be a helpful statistical tool for the assessment of LES models and for the development of new ones. As an example, some classical purely dissipative eddy viscosity models are analyzed via an a priori procedure.
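The a priori idea in the last sentence can be sketched in one dimension: filter a velocity signal and compute the exact subgrid stress that an LES model would have to approximate. The synthetic signal and the top-hat filter below are stand-ins for the DNS channel data and the actual filter used in the thesis:

```python
import numpy as np

# Minimal 1D a priori test: top-hat filter a synthetic "velocity" signal
# and form the exact subgrid normal stress tau = bar(uu) - bar(u)bar(u).

rng = np.random.default_rng(1)
n = 256
u = np.cumsum(rng.normal(size=n))        # smooth-ish synthetic signal

def tophat(f, width):
    """Top-hat (box) filter of the given width, zero-padded at the edges."""
    kernel = np.ones(width) / width
    return np.convolve(f, kernel, mode="same")

width = 8                                 # filter length in grid points
u_bar = tophat(u, width)
tau = tophat(u * u, width) - u_bar ** 2   # exact subgrid normal stress
```

For a positive filter kernel this stress is pointwise non-negative (it is the local variance of `u` inside the stencil), a sanity property any a priori pipeline should reproduce before model comparisons begin.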

Relevance: 90.00%

Abstract:

Reliable electronic systems, namely sets of reliable electronic devices connected to each other and working correctly together for the same functionality, represent an essential ingredient for the large-scale commercial implementation of any technological advancement. Microelectronics technologies and new powerful integrated circuits provide noticeable improvements in performance and cost-effectiveness, and allow introducing electronic systems in increasingly diversified contexts. On the other hand, the opening of new fields of application leads to new, unexplored reliability issues. The development of semiconductor device and electrical models (such as the well-known SPICE models) able to describe the electrical behavior of devices and circuits is a useful means to simulate and analyze the functionality of new electronic architectures and new technologies. Moreover, it represents an effective way to point out the reliability issues due to the employment of advanced electronic systems in new application contexts. In this thesis, the modeling and design of both advanced reliable circuits for general-purpose applications and devices for energy efficiency are considered. In more detail, the following activities have been carried out. First, reliability issues in terms of security of standard communication protocols in wireless sensor networks are discussed, and a new communication protocol that increases the network security is introduced. Second, a novel scheme for the on-die measurement of either clock jitter or process parameter variations is proposed; the developed scheme can be used for an evaluation of both jitter and process parameter variations at low cost. Then, reliability issues in the field of "energy scavenging systems" are analyzed, and an accurate analysis and modeling of the effects of faults affecting circuits for energy harvesting from mechanical vibrations is performed.
Finally, the problem of modeling the electrical and thermal behavior of photovoltaic (PV) cells under hot-spot conditions is addressed with the development of an electrical and thermal model.
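The electrical side of such PV modeling is usually built on the classical single-diode equation; the sketch below solves it by bisection with typical textbook parameter values, which are assumptions rather than the thesis's fitted model:

```python
import math

# Single-diode PV cell model: I = Iph - I0*(exp((V+I*Rs)/(n*Vt)) - 1)
#                                   - (V+I*Rs)/Rsh,
# solved for I by bisection. Parameter values are illustrative.

def cell_current(v, iph=3.0, i0=1e-9, n=1.3, rs=0.02, rsh=20.0, t=298.15):
    """Return the cell current at terminal voltage v (volts)."""
    vt = 1.380649e-23 * t / 1.602176634e-19      # thermal voltage kT/q
    def f(i):
        vd = v + i * rs                          # voltage across the diode
        return iph - i0 * (math.exp(vd / (n * vt)) - 1.0) - vd / rsh - i
    lo, hi = -1.0, iph + 1.0                     # bracket the root
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

i_sc = cell_current(0.0)     # short-circuit current, close to Iph
```

A thermal model then closes the loop by updating the cell temperature `t` (and hence `Vt`, `I0`) from the dissipated power, which is the coupling that makes hot-spot analysis possible.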

Relevance: 90.00%

Abstract:

Several countries have acquired, over the past decades, large amounts of area-covering Airborne Electromagnetic (AEM) data. The contribution of airborne geophysics to both groundwater resource mapping and management has increased dramatically, proving how appropriate those systems are for large-scale and efficient groundwater surveying. We start with the processing and inversion of two AEM datasets from two different systems collected over the Spiritwood Valley Aquifer area, Manitoba, Canada: the AeroTEM III dataset (commissioned by the Geological Survey of Canada in 2010) and the "Full waveform VTEM" dataset, collected and tested over the same survey area during the fall of 2011. We demonstrate that, in the presence of multiple datasets, both AEM and ground data, careful processing, inversion, post-processing, data integration and data calibration constitute the proper approach, capable of providing reliable and consistent resistivity models. Our approach can be of interest to many end users, ranging from Geological Surveys and Universities to private companies, which often own large geophysical databases to be interpreted for geological and/or hydrogeological purposes. In this study we investigate in depth the role of the integration of several complementary types of geophysical data collected over the same survey area. We show that data integration can improve inversions, reduce ambiguity and deliver high-resolution results. We further attempt to use the final, most reliable output resistivity models as a solid basis for building a knowledge-driven 3D geological voxel-based model. A voxel approach allows a quantitative understanding of the hydrogeological setting of the area, and it can be further used to estimate the aquifer volumes (i.e. the potential amount of groundwater resources) as well as for hydrogeological flow model prediction.
In addition, we investigated the impact of an AEM dataset on hydrogeological mapping and 3D hydrogeological modeling, compared to having only a ground-based TEM dataset and/or only borehole data.

Relevance: 90.00%

Abstract:

Contemporary private law, in the last few decades, has been increasingly characterized by the spread of general clauses and standards and by the growing role of interpreters in the framework of the sources of law. This process has also consistently affected those systems that are not typically centered on judge-made law. In contract law in particular, general clauses and standards have assumed a leading role and have become protagonists of processes of integration and harmonization of the law. Within this context, the reasonableness clause has come to the attention of scholars, emerging as a new element of connection between different legal systems, first of all between common law and civil law, and even between different legal traditions. This research aims at reconstructing the patterns of emergence and evolution of the principle of reasonableness in contract law, both within European Union law and in the Chinese legal system, in order to identify evolutionary trends, processes of emergence and circulation of legal models, and the scope of operation of the principle in the two contexts. In view of the increasingly intense economic relations between Europe and China within the framework of the new project called the Belt and Road Initiative, a comparative survey of this type can foster mutual understanding, make communications more effective at the level of legal culture and commercial relations, and support the processes of supranational harmonization of contract law rules.

Relevance: 90.00%

Abstract:

The field of bioelectronics involves the use of electrodes to exchange electrical signals with biological systems for diagnostic and therapeutic purposes in biomedical devices and healthcare applications. However, the mechanical compatibility of implantable devices with the human body has been a challenge, particularly with long-term implantation into target organs. Current rigid bioelectronics can trigger inflammatory responses and cause unstable device functions due to the mechanical mismatch with the surrounding soft tissue. Recent advances in flexible and stretchable electronics have shown promise in making bioelectronic interfaces more biocompatible. To fully achieve this goal, material science and engineering of soft electronic devices must be combined with quantitative characterization and modeling tools to understand the mechanical issues at the interface between electronic technology and biological tissue. Local mechanical characterization is crucial to understanding the activation of failure mechanisms and to optimizing the devices. Experimental techniques for testing mechanical properties at the nanoscale are emerging, and the Atomic Force Microscope (AFM) is a good candidate for in situ local mechanical characterization of soft bioelectronic interfaces. In this work, in situ experimental techniques based solely on AFM, supported by interpretive models, are reported for the characterization of planar and three-dimensional devices suitable for in vivo and in vitro biomedical experimentation. The combination of the proposed models and experimental techniques provides access to the local mechanical properties of soft bioelectronic interfaces. The study investigates the nanomechanics of hard thin gold films on soft polymeric substrates (Poly(dimethylsiloxane), PDMS) and of 3D inkjet-printed micropillars under different deformation states.
The proposed characterization methods provide a rapid and precise determination of mechanical properties, thus making it possible to parametrize the microfabrication steps and investigate their impact on the final device.
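AFM force-indentation data of the kind discussed above are commonly interpreted with the Hertz contact model; the following fit on synthetic, noise-free data is an illustration with assumed tip and material parameters, not a measurement from the devices studied here:

```python
import numpy as np

# Hertz model for a spherical tip: F = (4/3) * E/(1 - nu^2) * sqrt(R) * d^1.5.
# We generate synthetic force-indentation data and recover the modulus by a
# linear least-squares fit in the d^1.5 variable. All parameters are assumed.

R = 50e-9                    # tip radius [m]
nu = 0.5                     # Poisson ratio, a typical value for PDMS
E_true = 2.0e6               # Young's modulus [Pa]

delta = np.linspace(0, 100e-9, 50)                        # indentation [m]
k = (4.0 / 3.0) * (E_true / (1 - nu**2)) * np.sqrt(R)
F = k * delta**1.5                                        # synthetic forces

# linearize: F is proportional to delta^1.5, fit the slope by least squares
x = delta**1.5
k_fit = (x @ F) / (x @ x)
E_fit = k_fit * 3.0 * (1 - nu**2) / (4.0 * np.sqrt(R))
```

On real curves the same fit is applied after contact-point detection and over a limited indentation range, and deviations from the Hertzian slope are themselves diagnostic of substrate or film effects.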