11 results for time-dependent network design
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
In this thesis we study three combinatorial optimization problems belonging to the classes of Network Design and Vehicle Routing problems that are strongly linked in the context of the design and management of transportation networks: the Non-Bifurcated Capacitated Network Design Problem (NBP), the Period Vehicle Routing Problem (PVRP) and the Pickup and Delivery Problem with Time Windows (PDPTW). These problems are NP-hard and contain as special cases some well-known difficult problems such as the Traveling Salesman Problem and the Steiner Tree Problem. Moreover, they model the core structure of many practical problems arising in logistics and telecommunications. The NBP is the problem of designing the optimal network to satisfy a given set of traffic demands. Given a set of nodes, a set of potential links and a set of point-to-point demands called commodities, the objective is to select the links to install and dimension their capacities so that all the demands can be routed between their respective endpoints, and the sum of link fixed costs and commodity routing costs is minimized. The problem is called non-bifurcated because the solution network must route each demand along a single path, i.e., the flow of each demand cannot be split. Although this is the case in many real applications, the NBP has received significantly less attention in the literature than other capacitated network design problems that allow bifurcation. We describe an exact algorithm for the NBP, based on solving, with an integer programming solver, a formulation of the problem strengthened by simple valid inequalities, together with four new heuristic algorithms. One of these heuristics is an adaptive memory metaheuristic, based on partial enumeration, that could be applied to a wider class of structured combinatorial optimization problems. In the PVRP a fleet of vehicles of identical capacity must be used to service a set of customers over a planning period of several days. Each customer specifies a service frequency, a set of allowable day-combinations and a quantity of product to be delivered at each visit. For example, a customer may require two visits during a 5-day period, imposing that these visits take place on Monday-Thursday, Monday-Friday or Tuesday-Friday. The problem consists in simultaneously assigning a day-combination to each customer and designing the vehicle routes for each day so that each customer is visited the required number of times, the number of routes on each day does not exceed the number of vehicles available, and the total cost of the routes over the period is minimized. We also consider a tactical variant of this problem, called the Tactical Planning Vehicle Routing Problem, where customers require to be visited on a specific day of the period but a penalty cost, called service cost, can be paid to postpone the visit to a later day than the one required. To our knowledge, all the algorithms proposed in the literature for the PVRP are heuristics. In this thesis we present, for the first time, an exact algorithm for the PVRP based on different relaxations of a set partitioning-like formulation. The effectiveness of the proposed algorithm is tested on a set of instances from the literature and on a new set of instances. Finally, the PDPTW consists in servicing a set of transportation requests using a fleet of identical vehicles of limited capacity located at a central depot.
Each request specifies a pickup location and a delivery location and requires a given quantity of load to be transported from the pickup location to the delivery location. Moreover, each location can be visited only within an associated time window. Each vehicle can perform at most one route, and the problem is to satisfy all the requests using the available vehicles so that each request is serviced by a single vehicle, the load on each vehicle never exceeds its capacity, and all locations are visited within their time windows. We formulate the PDPTW as a set partitioning-like problem with additional cuts, and we propose an exact algorithm based on different relaxations of the mathematical formulation as well as a branch-and-cut-and-price algorithm. The new algorithm is tested on two classes of problems and compared with a recent branch-and-cut-and-price algorithm from the literature.
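A minimal arc-flow sketch of a non-bifurcated formulation of this kind may help fix ideas; the notation below is introduced here for illustration only (a single capacity module per selected link is assumed, and the thesis's actual model and valid inequalities may differ). With binary $y_{ij}$ selecting link $(i,j)$ at fixed cost $f_{ij}$ with capacity $u_{ij}$, and binary $x^{k}_{ij}$ routing the whole demand $d_k$ of commodity $k$ (source $s_k$, destination $t_k$) over $(i,j)$ at unit routing cost $c^{k}_{ij}$:

\begin{align*}
\min\;& \sum_{(i,j)} f_{ij}\, y_{ij} \;+\; \sum_{k}\sum_{(i,j)} c^{k}_{ij}\, d_k\, x^{k}_{ij} \\
\text{s.t.}\;& \sum_{j} x^{k}_{ij} - \sum_{j} x^{k}_{ji} =
\begin{cases} 1 & \text{if } i = s_k \\ -1 & \text{if } i = t_k \\ 0 & \text{otherwise} \end{cases} \qquad \forall\, i,\ \forall\, k \\
& \sum_{k} d_k\, x^{k}_{ij} \;\le\; u_{ij}\, y_{ij} \qquad \forall\, (i,j) \\
& x^{k}_{ij} \in \{0,1\}, \qquad y_{ij} \in \{0,1\}.
\end{align*}

The binary routing variables enforce the single-path (non-bifurcated) requirement; simple valid inequalities of the kind mentioned above could be, for instance, $x^{k}_{ij} \le y_{ij}$.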
Abstract:
Decomposition-based approaches are recalled from both the primal and the dual point of view. The possibility of building partially disaggregated reduced master problems is investigated. This extends the idea of aggregated-versus-disaggregated formulations to a gradual choice among alternative levels of aggregation. Partial aggregation is applied to the linear multicommodity minimum-cost flow problem. The possibility of having only partially aggregated bundles opens a wide range of alternatives with different trade-offs between the number of iterations and the computational effort required to solve the master problem. This trade-off is explored on several sets of instances and the results are compared with those obtained by directly solving the natural node-arc formulation. An iterative solution process for the route assignment problem is proposed, based on the well-known Frank-Wolfe algorithm. In order to provide a first feasible solution to the Frank-Wolfe algorithm, a linear multicommodity min-cost flow problem is solved to optimality using the decomposition techniques mentioned above. Solutions of this problem are useful for network orientation and design, especially in relation to public transportation systems such as Personal Rapid Transit. A single-commodity robust network design problem is then addressed: an undirected graph with edge costs is given together with a discrete set of balance matrices representing different supply/demand scenarios, and the goal is to determine the minimum-cost installation of capacities on the edges such that the flow exchange is feasible in every scenario. A new set of instances that are computationally hard for the natural flow formulation is solved by means of a new heuristic algorithm. Finally, an efficient decomposition-based heuristic approach for a large-scale stochastic unit commitment problem is presented. The addressed real-world stochastic problem employs at its core a deterministic unit commitment planning model developed by the California Independent System Operator (ISO).
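As an illustration of the Frank-Wolfe route-assignment loop described above, the sketch below shows its standard structure only; it is not the thesis's implementation. The BPR link travel-time function, the all-or-nothing warm start (the thesis instead warm-starts from the optimal linear multicommodity min-cost flow), the use of networkx for shortest paths, and all names are assumptions made here for exposition.

```python
# Illustrative Frank-Wolfe loop for route (traffic) assignment on a directed
# graph. BPR travel times, an all-or-nothing warm start and networkx shortest
# paths are assumptions made here, not taken from the thesis.
import networkx as nx
import numpy as np

def all_or_nothing(G, demands, cost):
    """Assign each origin-destination demand entirely to its shortest path."""
    nx.set_edge_attributes(G, cost, "cost")
    flow = {e: 0.0 for e in G.edges}
    for (o, d), q in demands.items():
        path = nx.shortest_path(G, o, d, weight="cost")
        for e in zip(path[:-1], path[1:]):
            flow[e] += q
    return flow

def frank_wolfe(G, demands, t0, cap, n_iter=50):
    """G: nx.DiGraph; t0, cap: free-flow time and capacity per edge (dicts)."""
    bpr = lambda x: {e: t0[e] * (1 + 0.15 * (x[e] / cap[e]) ** 4) for e in G.edges}
    # Beckmann objective: sum over edges of the integral of the BPR function.
    beckmann = lambda x: sum(t0[e] * (x[e] + 0.03 * x[e] ** 5 / cap[e] ** 4)
                             for e in G.edges)
    x = all_or_nothing(G, demands, t0)              # warm start
    for _ in range(n_iter):
        y = all_or_nothing(G, demands, bpr(x))      # linearized subproblem
        blend = lambda a: {e: x[e] + a * (y[e] - x[e]) for e in G.edges}
        alphas = np.linspace(0.0, 1.0, 101)         # crude line search
        a_best = min(alphas, key=lambda a: beckmann(blend(a)))
        x = blend(a_best)
    return x
```

Each iteration solves the linearized subproblem (an all-or-nothing assignment at the current link costs) and moves toward its solution by a step chosen via line search on the Beckmann objective; the classical 2/(k+2) step rule could be used instead of the line search.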
Abstract:
The Internet of Things (IoT) has grown rapidly in recent years, leading to an increased need for efficient and secure communication between connected devices. Wireless Sensor Networks (WSNs) are composed of small, low-power devices that are capable of sensing and exchanging data, and are often used in IoT applications. In addition, Mesh WSNs involve intermediate nodes forwarding data to ensure more robust communication. The integration of Unmanned Aerial Vehicles (UAVs) in Mesh WSNs has emerged as a promising solution for increasing the effectiveness of data collection, as UAVs can act as mobile relays, providing extended communication range and reducing energy consumption. However, the integration of UAVs and Mesh WSNs still poses new challenges, such as the design of efficient control and communication strategies. This thesis explores the networking capabilities of WSNs and investigates how the integration of UAVs can enhance their performance. The research focuses on three main objectives: (1) Ground Wireless Mesh Sensor Networks, (2) Aerial Wireless Mesh Sensor Networks, and (3) Ground/Aerial WMSN integration. For the first objective, we investigate the use of the Bluetooth Mesh standard for IoT monitoring in different environments. The second objective focuses on deploying aerial nodes to maximize data collection effectiveness and the QoS of UAV-to-UAV links while maintaining the aerial mesh connectivity. The third objective investigates hybrid WMSN scenarios with air-to-ground communication links. One of the main contributions of the thesis is the design and implementation of a software framework called "Uhura", which enables the creation of Hybrid Wireless Mesh Sensor Networks and abstracts and handles multiple M2M communication stacks on both ground and aerial links. The operations of Uhura have been validated through simulations and small-scale testbeds involving ground and aerial devices.
Abstract:
In this thesis we focus on the analysis and interpretation of time-dependent deformations recorded through different geodetic methods. Firstly, we apply a variational Bayesian Independent Component Analysis (vbICA) technique to GPS daily displacement solutions, to separate the postseismic deformation that followed the mainshocks of the 2016-2017 Central Italy seismic sequence from the other, hydrological, deformation sources. By interpreting the signal associated with the postseismic relaxation, we model an afterslip distribution on the faults involved in the mainshocks that is consistent with the co-seismic models available in the literature. We find evidence of aseismic slip on the Paganica fault, responsible for the Mw 6.1 2009 L’Aquila earthquake, highlighting the importance of aseismic slip and static stress transfer for properly modelling the recurrence of earthquakes on nearby fault segments. We infer a possible viscoelastic relaxation of the lower crust as a contributing mechanism to the postseismic displacements. We highlight the importance of a proper separation of the hydrological signals for an accurate assessment of the tectonic processes, especially in cases of mm-scale deformations. Contextually, we provide a physical explanation for the independent components (ICs) associated with the observed hydrological processes. In the second part of the thesis, we focus on strain data from Gladwin Tensor Strainmeters, working on the instruments deployed in Taiwan. We develop a novel, completely data-driven approach to calibrate these strainmeters. We carry out a joint analysis of geodetic (strainmeter, GPS and GRACE products) and hydrological (rain gauge and piezometer) data sets to characterize the hydrological signals in Southern Taiwan. Lastly, we apply the proposed calibration approach to the strainmeters recently installed in Central Italy. We provide, as an example, the detection of a storm that hit the Umbria-Marche regions (Italy), demonstrating the potential of strainmeters in following the dynamics of deformation processes with limited spatio-temporal signature.
Abstract:
This work reports on two different projects that were carried out during the three years of the Doctor of Philosophy course. In the first year, a project regarding a Capacitive Pressure Sensors Array for Aerodynamic Applications was developed within the Applied Aerodynamics research team of the Second Faculty of Engineering, University of Bologna, Forlì, Italy, in collaboration with the ARCES laboratories of the same university. Capacitive pressure sensors were designed and fabricated, investigating theoretically and experimentally the sensors' mechanical and electrical behaviour by means of finite element method simulations and wind tunnel tests. During the design phase, the sensor figures of merit were considered and evaluated for specific aerodynamic applications. The aim of this work is the production of low-cost MEMS-alternative devices suitable for a sensor network to be implemented in an air data system. The last two years were dedicated to a project regarding a Wireless Pressure Sensor Network for Nautical Applications. The aim of the developed sensor network is to sense the weak pressure field acting on the sail plan of a full-batten sail by means of instrumented battens, providing a real-time differential pressure map over the entire sail surface. The wireless sensor network and the sensing unit were designed, fabricated and tested in the faculty laboratories. A static non-linear coupled mechanical-electrostatic simulation has been developed to predict the pressure-versus-capacitance static characteristic suitable for the transduction process and to tune the geometry of the transducer to reach the required resolution, sensitivity and time response over the appropriate full-scale pressure input. A time-dependent viscoelastic error model has been inferred and developed by means of experimental data in order to model, predict and reduce the inaccuracy bound due to the viscoelastic phenomena affecting the Mylar® polyester film used for the sensor diaphragm. The developments of the two above-mentioned subjects are strictly related but are presented separately in this work.
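As a rough illustration of the capacitive transduction principle described above (a simple parallel-plate approximation assumed here for exposition, not the thesis's actual model), the capacitance of a sensor whose diaphragm deflects under pressure can be written as

$$ C(p) \;\approx\; \frac{\varepsilon_0 \varepsilon_r A}{g_0 - \bar{w}(p)}, \qquad \bar{w}(p) \propto \frac{p\,a^4}{E\,t^3}\ \text{(thin circular diaphragm, small deflections),} $$

where $A$ is the electrode area, $g_0$ the rest gap, $\bar{w}(p)$ the mean diaphragm deflection, $a$ and $t$ the diaphragm radius and thickness, and $E$ its Young's modulus; in this picture, the viscoelastic error mentioned above would correspond to a slow drift of $\bar{w}$ under constant pressure.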
Abstract:
The research activity carried out during the PhD course was focused on the development of mathematical models of some cognitive processes and their validation by means of data available in the literature, with a double aim: i) to achieve a better interpretation and explanation of the great amount of data obtained on these processes from different methodologies (electrophysiological recordings on animals, neuropsychological, psychophysical and neuroimaging studies in humans), ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity has been focused on two different projects: 1) the first one concerns the development of networks of neural oscillators, in order to investigate the mechanisms of synchronization of the neural oscillatory activity during cognitive processes, such as object recognition, memory, language, attention; 2) the second one concerns the mathematical modelling of multisensory integration processes (e.g. visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus (SC)), and which are fundamental for orienting motor and attentive responses to external world stimuli. This activity has been carried out in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA). PART 1. The representation of objects in a number of cognitive functions, like perception and recognition, involves distributed processes in different cortical areas. One of the main neurophysiological questions concerns how the correlation between these disparate areas is realized, in order to group together the characteristics of the same object (binding problem) and to keep segregated the properties belonging to different objects simultaneously present (segmentation problem). Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential theories is the so-called "assembly coding" theory, postulated by Singer (2003), according to which 1) an object is well described by a few fundamental properties, processed in different and distributed cortical areas; 2) the recognition of the object would be realized by means of the simultaneous activation of the cortical areas representing its different features; 3) groups of properties belonging to different objects would be kept separated in the time domain. In Chapter 1.1 and in Chapter 1.2 we present two neural network models for object recognition, based on the "assembly coding" hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level "Gestalt Rules" (the similarity and previous knowledge rules), to realize the functional link between elements of different cortical areas representing properties of the same object (binding problem); ii) the synchronization of the neural oscillatory activity in the γ-band (30-100 Hz), to segregate in time the representations of different objects simultaneously present (segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (wrong or missing features, shared features, superimposed noise). In Chapter 1.3 the previous models are extended to realize a semantic memory, in which sensory-motor representations of objects are linked with words.
To this aim, the previously developed network, devoted to the representation of objects as collections of sensory-motor features, is reciprocally linked with a second network devoted to the representation of words (lexical network). Synapses linking the two networks are trained via a time-dependent Hebbian rule, during a training period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories, and to provide a quantitative assessment of existing data (for instance concerning patients with neural deficits). PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas have neurons capable of such a task; one of the best known of these is the superior colliculus (SC). This midbrain structure receives auditory, visual and somatosensory inputs from different subcortical and cortical areas, and is involved in the control of orientation to external events (Wallace et al., 1993). SC neurons respond to each of these sensory inputs separately, but are also capable of integrating them (Stein et al., 1993), so that the response to combined multisensory stimuli is greater than that to the individual component stimuli (enhancement). This enhancement is proportionately greater if the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from the cortex, primarily the anterior ectosylvian sulcus (AES), but also the rostral lateral suprasylvian sulcus (rLS). If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of its individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models. The use of mathematical models and neural networks can place the mass of data that has been accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; this model is able to reproduce a large number of SC behaviours, such as multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model was improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This endeavour was realized in collaboration with Professor B.E. Stein and Doctor B. Rowland during the 6-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA), within the Marco Polo Project. The model includes four distinct unisensory areas that are devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (i.e., FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons in conditions of cortical activation and deactivation. The model, with a single set of parameters, is able to mimic the behaviour of SC multisensory neurons in response to very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with the cortex functional and with the cortex deactivated, and with a particular type of membrane receptor (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and provides a biologically plausible hypothesis about the underlying circuitry.
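For reference, the oscillatory units used in the Part 1 models are Wilson-Cowan excitatory-inhibitory pairs; in a generic formulation (parameter names chosen here for illustration, not taken from the thesis) each unit obeys

\begin{align*}
\tau_E \frac{dE}{dt} &= -E + S\!\left(c_{EE}E - c_{EI}I + P_E\right),\\
\tau_I \frac{dI}{dt} &= -I + S\!\left(c_{IE}E - c_{II}I + P_I\right), \qquad S(x) = \frac{1}{1 + e^{-a(x - \theta)}},
\end{align*}

where $E$ and $I$ are the excitatory and inhibitory activities, $P_E$ and $P_I$ the external inputs, and $S$ a sigmoidal activation. With suitable parameters each pair oscillates in the γ-band, and the inter-unit coupling implementing the "Gestalt rules" synchronizes the oscillators coding features of the same object while keeping different objects temporally segregated.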
Abstract:
In the present work we perform an econometric analysis of the Tribal art market. To this aim, we use a unique and original database that includes information on Tribal art market auctions worldwide from 1998 to 2011. In the literature, art prices are modelled through the hedonic regression model, a classic fixed-effects model. The main drawback of the hedonic approach is the large number of parameters, since, in general, art data include many categorical variables. In this work, we propose a multilevel model for the analysis of Tribal art prices that takes into account the influence of time on artwork prices. In fact, it is natural to assume that time exerts an influence on the price dynamics in various ways. Nevertheless, since the set of objects changes at every auction date, we do not have repeated measurements of the same items over time. Hence, the dataset does not constitute a proper panel; rather, it has a two-level structure in which items (level-1 units) are grouped within time points (level-2 units). The main theoretical contribution is the extension of classical multilevel models to cope with the case described above. In particular, we introduce a model with time-dependent random effects at the second level. We propose a novel specification of the model, derive the maximum likelihood estimators and implement them through the E-M algorithm. We test the finite-sample properties of the estimators and the validity of the purpose-written R code by means of a simulation study. Finally, we show that the new model considerably improves the fit of the Tribal art data with respect to both the hedonic regression model and the classic multilevel model.
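The two-level structure described above can be made concrete with an illustrative specification (the notation and the AR(1) choice are assumptions made here for exposition, not the thesis's exact model). For item $i$ sold at auction date $t$,

$$ y_{it} = \mathbf{x}_{it}^{\top}\boldsymbol\beta + u_t + \varepsilon_{it}, \qquad u_t = \rho\, u_{t-1} + \eta_t, \qquad \varepsilon_{it}\sim N(0,\sigma^2_\varepsilon),\ \ \eta_t \sim N(0,\sigma^2_\eta), $$

where the level-2 random effects $u_t$ are no longer independent across time points, which is what distinguishes a time-dependent second level from the classic multilevel model; $\boldsymbol\beta$, $\rho$ and the variance components would be estimated by maximum likelihood via the E-M algorithm, treating the $u_t$ as missing data.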
Abstract:
The objective of this thesis is the power transient analysis concerning experimental devices placed within the reflector of the Jules Horowitz Reactor (JHR). Since the JHR material testing facility is designed to achieve a core thermal power of 100 MW, a large reflector hosts fissile material samples that are irradiated up to a total power of 3 MW. MADISON devices are expected to attain 130 kW, whereas the ADELINE nominal power is about 60 kW. In addition, MOLFI test samples are expected to reach 360 kW in the LEU configuration and up to 650 kW in the HEU configuration. Safety issues concern shutdown transients and require specific verification of the thermal power decrease of these fissile samples with respect to core kinetics, as far as single-device reactivity determination is concerned. A calculation model is conceived and applied in order to properly account for the different nuclear heating processes and the related time-dependent features of the device transients. An innovative methodology is developed in which the flux shape modification during control rod insertions is investigated with respect to its impact on device power through core-reflector coupling coefficients. Previous methods, which considered only nominal core-reflector parameters, are thus improved. Moreover, the effect of delayed emissions is evaluated with regard to the spatial impact on the devices of a diffuse in-core delayed neutron source. The transport of delayed gammas related to fission product concentrations is taken into account through evolution calculations of different fuel compositions over the equilibrium cycle. Provided accurate device reactivity control, power transients are then computed for every sample according to the envisaged shutdown procedures. The results obtained in this study are aimed at design feedback and reactor management optimization by the JHR project team. Moreover, the Safety Report is intended to utilize the present analysis for improved device characterization.
Abstract:
Fibre Reinforced Concretes are innovative composite materials whose applications are growing considerably nowadays. Being composite materials, their performance depends on the mechanical properties of both components, fibre and matrix, and, above all, on the interface. The variables to account for in the mechanical characterization of the material can be intrinsic to the material itself, i.e. fibre and concrete type, or external factors, i.e. environmental conditions. The first part of the research presented here is focused on the experimental and numerical characterization of the interface properties and of the short-term response of fibre reinforced concretes with macro-synthetic fibres. The experimental database produced represents the starting point for numerical model calibration and validation with two principal purposes: the calibration of a local constitutive law, and the calibration and validation of a model able to predict the whole material response. With a view to designing sustainable mixes, the matrix of cement-based fibre reinforced composites is optimized through partial substitution of the cement content. In the second part of the research, the effect of time-dependent phenomena on the MSFRC response is studied. An extensive experimental campaign of creep tests is performed, analysing the effect of time and temperature variations under different loading conditions. Based on the results achieved, a numerical model able to account for the viscoelastic nature of both concrete and reinforcement, together with the environmental conditions, is calibrated within the LDPM framework. Different types of regression models are also developed, correlating the investigated mechanical properties with the variables studied: bond strength and residual flexural behaviour for the short-term analysis, and the evolution of the creep coefficient over time for the time-dependent behaviour. The experimental studies carried out emphasize the several aspects influencing the material's mechanical performance, also allowing the identification of those properties that the numerical approach should consider in order to be reliable.
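For the time-dependent part, the creep coefficient referred to above is conventionally defined (standard definition, recalled here only to fix terminology) as the ratio between the creep strain accumulated up to time $t$ and the instantaneous strain at loading,

$$ \varphi(t, t_0) \;=\; \frac{\varepsilon_{cc}(t, t_0)}{\varepsilon_{ci}(t_0)}, $$

with $t_0$ the age at loading; the regression models mentioned above describe how this coefficient evolves with time under the investigated temperature and loading conditions.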
Abstract:
The aim of this thesis is to explore the possible influence of the food matrix on food quality attributes. Using nuclear magnetic resonance techniques, the matrix-dependent properties of different foods were studied and some useful indices were defined to classify food products based on the matrix behaviour when responding to processing phenomena. Correlations were found between fish freshness indices, assessed by certain geometric parameters linked to the morphology of the animal, i.e. a macroscopic structure, and the degradation of the product structure. The same foodomics approach was also applied to explore the protective effect of modified atmospheres on the stability of fish fillets, which are typically susceptible to oxidation of the polyunsaturated fatty acids incorporated in the meat matrix. Here, freshness is assessed by evaluating the time-dependent change in the fish metabolome, providing an established freshness index, and its relationship to lipid oxidation. In vitro digestion studies, focusing on food products with different matrices, alone and in combination with other meal components (e.g. seasoning), were conducted to investigate possible interactions between enzymes and food, modulated by the matrix structure, which influence digestibility. The interaction between water and the gelatinous matrix of the food, consisting of a network of protein gels incorporating fat droplets, was also studied by means of nuclear magnetic relaxometry, in order to create a prediction tool for the correct classification of authentic and counterfeit food products protected by a quality label. This is one of the first applications of an NMR method focusing on the supramolecular structure of the matrix, rather than the chemical composition, to assess food authenticity. The effect of innovative processing technologies, such as PEF applied to fruit products, has been assessed by magnetic resonance imaging, exploiting information associated with the rehydration kinetics exhibited by a modified food structure.
Abstract:
The time-dependent CP asymmetries of the $B^0\to\pi^+\pi^-$ and $B^0_s\to K^+K^-$ decays and the time-integrated CP asymmetries of the $B^0\to K^+\pi^-$ and $B^0_s\to\pi^+K^-$ decays are measured, using the $pp$ collision data collected with the LHCb detector and corresponding to the full Run 2. The results are compatible with previous determinations of these quantities from LHCb, except for the CP-violation parameters of the $B^0_s\to K^+K^-$ decays, which show a discrepancy exceeding 3 standard deviations between different data-taking periods. The investigations being conducted to understand this discrepancy are documented. The measurement of the CKM matrix element $|V_{cb}|$ using $B^0_{s}\to D^{(*)-}_s\mu^+ \nu_\mu$ decays is also reported, using the $pp$ collision data collected with the LHCb detector and corresponding to the full Run 1. The measurement yields $|V_{cb}| = (41.4\pm0.6\pm0.9\pm1.2)\times 10^{-3}$, where the first uncertainty is statistical, the second is systematic, and the third is due to external inputs. This result is compatible with the world averages and constitutes the first measurement of $|V_{cb}|$ at a hadron collider and the very first one with decays of the $B^0_s$ meson. The analysis also provides the first measurements of the branching ratios and form-factor parameters of the signal decay modes. The study of the characteristics governing the response of an electromagnetic calorimeter (ECAL) required to operate profitably in the high-luminosity regime foreseen for the LHCb Upgrade 2 is reported in the final part of this thesis. A fast and flexible simulation framework is developed for this purpose. The physics performance of different ECAL configurations is evaluated using samples of fully simulated $B^0\to \pi^+\pi^-\pi^0$ and $B^0\to K^{*0}e^+e^-$ decays. The results are used to guide the development of the future ECAL and are reported in the Framework Technical Design Report of the LHCb Upgrade 2 detector.
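For reference, the time-dependent CP asymmetry measured in decays to a CP eigenstate $f$ (such as $\pi^+\pi^-$ or $K^+K^-$) is usually parameterized as (standard convention, recalled here for orientation; sign conventions vary between analyses)

$$ A_{CP}(f, t) \;=\; \frac{\Gamma_{\bar B \to f}(t) - \Gamma_{B \to f}(t)}{\Gamma_{\bar B \to f}(t) + \Gamma_{B \to f}(t)} \;=\; \frac{-C_f \cos(\Delta m\, t) + S_f \sin(\Delta m\, t)}{\cosh\!\left(\tfrac{\Delta\Gamma}{2} t\right) + A^{\Delta\Gamma}_f \sinh\!\left(\tfrac{\Delta\Gamma}{2} t\right)}, $$

where $\Delta m$ and $\Delta\Gamma$ are the mass and decay-width differences of the $B$-meson mass eigenstates, and $C_f$, $S_f$ and $A^{\Delta\Gamma}_f$ are the CP-violation parameters extracted from the fit; for the $B^0\to\pi^+\pi^-$ case $\Delta\Gamma \approx 0$, so the denominator reduces to unity.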