10 results for Low Autocorrelation Binary Sequence Problem

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

100.00%

Publisher:

Abstract:

An extensive sample (2%) of private vehicles in Italy is equipped with a GPS device that periodically measures their position and dynamical state for insurance purposes. Access to this type of data makes it possible to develop theoretical and practical applications of great interest: the real-time reconstruction of the traffic state in a given region, the development of accurate models of vehicle dynamics, and the study of the cognitive dynamics of drivers. For these applications to be possible, we first need the ability to reconstruct the paths taken by vehicles on the road network from the raw GPS data. These data are affected by positioning errors and the measurements are often widely spaced (~2 km), so the task of path identification is not straightforward. This thesis describes the approach we followed to reliably identify vehicle paths from this kind of low-sampling-rate data. The problem of matching data with roads is solved with a Bayesian maximum-likelihood approach, while the identification of the path taken between two consecutive GPS measurements is performed with a specifically developed optimal routing algorithm based on the A* algorithm. The procedure was applied to an off-line urban data sample and proved to be robust and accurate. Future developments will extend the procedure to real-time execution and nation-wide coverage.
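As a concrete illustration of the routing step, the sketch below runs a textbook A* search over a toy road graph with a Euclidean straight-line heuristic; the graph encoding and all names are illustrative assumptions, not the specific algorithm developed in the thesis.

```python
import heapq

def a_star(graph, coords, start, goal):
    """Minimal A* sketch on a road graph.

    graph:  dict mapping node -> list of (neighbor, edge_cost) pairs
    coords: dict mapping node -> (x, y), used by the straight-line heuristic
    """
    def h(n):  # admissible heuristic: Euclidean distance to the goal
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    frontier = [(h(start), 0.0, start, [start])]  # entries are (f, g, node, path)
    best_g = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if g > best_g.get(node, float("inf")):
            continue  # stale queue entry, a cheaper route to node was found
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h(nbr), g2, nbr, path + [nbr]))
    return None, float("inf")
```

In a map-matching pipeline of this kind the edge costs would typically combine road length with the likelihood of the candidate road segments, but that weighting is specific to the thesis's model.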

Relevance:

30.00%

Publisher:

Abstract:

Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying its compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters is the first step in discriminating whether a given seismic event is natural or not. In case a specific event is deemed suspicious by the majority of the State Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test. In fact, high-quality seismological systems are thought to be capable of detecting and locating very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques. The first, known as the double-difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by high relative accuracy, although the absolute location of the whole cluster remains uncertain; we eliminate this problem by introducing a priori information, namely the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques, we have used cross-correlation among digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests of the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. Initially, the algorithm was applied to the differences among the original arrival times of the P phases, without cross-correlation. We found that the considerable geometrical spreading noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, assumed as our reference) was substantially reduced by the application of our technique.
This is what we expected, since the methodology was applied to a sequence of events for which we can suppose a real closeness among the hypocenters, all belonging to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed or at least reduced. The introduction of cross-correlation did not bring evident improvements to our results: the two sets of locations (without and with the application of the cross-correlation technique) are very similar to each other. This suggests that the use of cross-correlation did not substantially improve the precision of the manual pickings; probably the pickings reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. As a further explanation for the limited benefit of cross-correlation, it should be noted that the events in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller) and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to cross-correlation, we performed a signal interpolation in order to improve the time resolution. The resulting algorithm was applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in bad SNR conditions). Another remarkable point of our procedure is that it does not demand long processing times, so the user can immediately check the results. During a field survey, this feature makes possible a quasi-real-time check, allowing the immediate optimization of the array geometry if so suggested by the results at an early stage.
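As an illustration of the cross-correlation step with interpolation, the sketch below estimates the relative delay between two waveform segments and refines it on a finer grid; the function, its parameters, and the simple linear interpolation are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def cc_lag(x, y, dt, upsample=10):
    """Estimate the time delay between two waveform segments by cross-correlation.

    x, y: equal-length segments of the same seismic phase at two sensors
    dt:   sampling interval in seconds
    upsample: interpolation factor used to refine the lag below one sample
    """
    cc = np.correlate(x - x.mean(), y - y.mean(), mode="full")
    lags = np.arange(-len(x) + 1, len(x))
    # interpolate the correlation function onto a finer lag grid
    fine = np.linspace(lags[0], lags[-1], len(lags) * upsample)
    cc_fine = np.interp(fine, lags, cc)
    # delay in seconds at the correlation maximum (sign follows np.correlate)
    return fine[np.argmax(cc_fine)] * dt
```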

Relevance:

30.00%

Publisher:

Abstract:

Recent progress in microelectronics and wireless communications has enabled the development of low-cost, low-power, multifunctional sensors, which has allowed the birth of a new type of network: the wireless sensor network (WSN). The main features of such networks are: the nodes can be positioned randomly over a given field with high density; each node operates both as a sensor (for the collection of environmental data) and as a transceiver (for the transmission of information towards the data-retrieval point); and the nodes have limited energy resources. The use of wireless communications and the small size of the nodes make this type of network suitable for a large number of applications. For example, sensor nodes can be used to monitor a high-risk region, such as near a volcano; in a hospital they could be used to monitor the physical condition of patients. For each of these application scenarios, it is necessary to guarantee a trade-off between energy consumption and communication reliability. The thesis investigates the use of WSNs in two possible scenarios and, for each of them, suggests a solution to the related problems in light of this trade-off. The first scenario considers a network with a high number of nodes, deployed over a given geographical area without detailed planning, that have to transmit data towards a coordinator node, named the sink, which we assume to be located onboard an unmanned aerial vehicle (UAV). This is a practical example of reachback communication, characterized by a high density of nodes that have to transmit data reliably and efficiently towards a distant receiver. Each node transmits a common shared message directly to the receiver onboard the UAV whenever it receives a broadcast message (triggered, for example, by the vehicle). We assume that the communication channels between the local nodes and the receiver are subject to fading and noise, so the receiver onboard the UAV must be able to fuse the weak and noisy signals coherently to receive the data reliably. A cooperative diversity concept is proposed as an effective solution to the reachback problem. In particular, a spread-spectrum (SS) transmission scheme is considered in conjunction with a fusion center that can exploit cooperative diversity without requiring stringent synchronization between nodes. The idea consists of the simultaneous transmission of the common message by the nodes and Rake reception at the fusion center. The proposed solution is motivated by two main goals: the need for simple nodes (to this aim we move the computational complexity to the receiver onboard the UAV) and the importance of guaranteeing high energy efficiency of the network, thus increasing the network lifetime. The proposed scheme is analyzed in order to better understand the effectiveness of the approach. The performance metrics considered are both the theoretical limit on the maximum amount of data that can be collected by the receiver and the error probability with a given modulation scheme. Since we deal with a WSN, both of these performance metrics are evaluated taking the energy efficiency of the network into consideration. The second scenario considers the use of a chain network for the detection of fires, using nodes that have the double function of sensors and routers. The first function is the monitoring of a temperature parameter, which allows a local binary decision on target (fire) absence/presence to be taken.
The second function is that each node receives the decision made by the previous node of the chain, compares it with the one derived from its own observation of the phenomenon, and transmits the final result to the next node. The chain ends at the sink node, which transmits the received decision to the user. In this network the goals are to limit the throughput on each sensor-to-sensor link and to minimize the probability of error at the last stage of the chain. This is a typical scenario of distributed detection. To obtain good performance it is necessary to define, for each node, fusion rules that summarize the local observations and the decisions of the previous nodes into a final decision that is transmitted to the next node. WSNs have also been studied from a practical point of view, describing both the main characteristics of the IEEE 802.15.4 standard and two commercial WSN platforms. Using a commercial WSN platform, an agricultural application was realized and tested in a six-month on-field experiment.
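As an illustration of the serial decision chain, the sketch below simulates nodes that each fuse the incoming bit with a local observation using a simple OR rule; all names, probabilities, and the fusion rule itself are illustrative assumptions, not the rules derived in the thesis.

```python
import random

def sensor_reading(fire_present, p_detect=0.9, p_false=0.05):
    """Hypothetical local binary decision from one node's temperature sensor."""
    return random.random() < (p_detect if fire_present else p_false)

def chain_decision(n_nodes, fire_present):
    """Serial (chain) fusion sketch: each node combines the one-bit decision
    received from the previous node with its own local observation and
    forwards a single bit, so the per-link throughput stays at one bit."""
    decision = 0
    for _ in range(n_nodes):
        local = sensor_reading(fire_present)
        # OR-type fusion rule (one of many possible rules): declare fire if
        # either the incoming decision or the local observation says so
        decision = 1 if (decision or local) else 0
    return decision  # the sink forwards this final decision to the user
```

An OR rule keeps the missed-detection rate low at the cost of accumulating false alarms along the chain, which is exactly the kind of trade-off the node-level fusion rules must balance.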

Relevance:

30.00%

Publisher:

Abstract:

Low-pressure/high-temperature (LP/HT) metamorphic belts are characterised by rocks that experienced abnormal heat flow at shallow crustal levels (T > 600 °C; P < 4 kbar), resulting in anomalous geothermal gradients (60-150 °C/km). The abnormal amount of heat has been related to crustal underplating of mantle-derived basic magmas or to thermal perturbation linked to the intrusion of large volumes of granitoids into the intermediate crust. In the latter context in particular, magmatic or aqueous fluids are able to transport significant amounts of heat by advection, thus favouring regional LP/HT metamorphism. However, the thermal perturbation consequent to the heat released by cooling magmas is also responsible for contact metamorphic effects. A first problem is that the time and space relationships between regional LP/HT metamorphism and contact metamorphism are usually unclear. A second problem is related to the high temperatures reached at different crustal levels, which in some cases can completely erase the previous metamorphic history. Although this problem is most marked at lower crustal levels, petrologic and geochronologic studies usually concentrate on these attractive portions of the crust. However, only in the intermediate/upper crustal levels of an LP/HT metamorphic belt can the tectono-metamorphic events preceding the temperature peak, usually not preserved in the lower crustal portions, be readily unravelled. The Hercynian Orogen of Western Europe is a well-documented example of a continental collision zone with widespread LP/HT metamorphism, intense crustal anatexis and granite magmatism. Owing to the exposure of a nearly continuous cross-section of the Hercynian continental crust, the Sila massif (northern Calabria) is a favourable area in which to understand the large-scale relationships between granitoids and LP/HT metamorphic rocks, and to discriminate regional LP/HT metamorphic events from contact metamorphic effects. Granulite-facies rocks of the lower crust and greenschist- to amphibolite-facies rocks of the intermediate-upper crust are separated by granitoids emplaced into the intermediate level during the late stages of the Hercynian orogeny. Up to now, advanced petrologic studies have mostly focused on understanding the P-T evolution of the deeper crustal levels and of the magmatic bodies, whereas the metamorphic history of the shallower crustal levels is poorly constrained. The Hercynian upper crust exposed in Sila has been subdivided by previous authors into two different metamorphic complexes: the low- to very low-grade Bocchigliero complex and the greenschist- to amphibolite-facies Mandatoriccio complex. The latter contains mineral assemblages favourable for unravelling the tectono-metamorphic evolution of the Hercynian upper crust. The Mandatoriccio complex consists mainly of metapelites, meta-arenites, acid metavolcanites and metabasites, with rare intercalations of marbles and orthogneisses. The siliciclastic metasediments show a static porphyroblastic growth mainly of biotite, garnet, andalusite, staurolite and muscovite, whereas cordierite and fibrolite are less common. U-Pb ages and the internal features of zircons suggest that the protoliths of the Mandatoriccio complex formed in a sedimentary basin filled by Cambrian to Silurian magmatic products as well as by siliciclastic sediments derived from older igneous and metamorphic rocks. In some localities, the metamorphic rocks are injected by numerous aplite/pegmatite veins.
Small granite bodies are also present and are always associated with spotted schists with large porphyroblasts. They occur along a NW-SE-trending transcurrent cataclastic fault zone, which represents the tectonic contact between the Bocchigliero and Mandatoriccio complexes. This cataclastic fault zone shows evidence of activity at least from the middle Miocene to Recent times, indicating that brittle deformation post-dated the Hercynian orogeny. P-T pseudosections show that the micaschists and paragneisses of the Mandatoriccio complex followed a clockwise P-T path characterised by four main prograde phases: thickening, peak-pressure conditions, decompression and peak-temperature conditions. During the thickening phase, garnet blastesis started, with spessartine-rich syntectonic cores developing within the micaschists and paragneisses. Coevally (340 ± 9.6 Ma), mafic sills and dykes were injected into the upper-crustal volcaniclastic sedimentary sequence of the Mandatoriccio complex. After the peak-pressure conditions (≈4 kbar) were reached, the upper crust experienced a period of deformation quiescence, marked by the static overgrowth of S2 by almandine-rich garnet rims and by porphyroblasts of biotite and staurolite. Probably, this metamorphic phase is related to isotherm relaxation after the thickening episode recorded by the Rb/Sr isotopic system (326 ± 6 Ma isochron age). The post-collisional period was mainly characterised by decompression with increasing temperature. This stage is documented by the andalusite+biotite coronas overgrown on staurolite porphyroblasts and represents a critical point of the metamorphic history, since the metamorphic rocks begin to record a significant thermal perturbation. Peak-temperature conditions (≈620 °C) were reached at the end of this stage; they are well constrained by reaction textures and mineral assemblages observed almost exclusively within the paragneisses. The later appearance of fibrolitic sillimanite documents a small excursion of the P-T path across the And-Sil boundary due to heating. Stephanian U-Pb ages of monazite crystals from the paragneiss can be related to this heating phase. Similar monazite U-Pb ages from the micaschist, combined with the lack of fibrolitic sillimanite, suggest that during the same thermal perturbation the micaschists recorded temperatures slightly lower than those reached by the paragneisses. The metamorphic history ended with the crystallisation of cordierite, mainly at the expense of andalusite. Consequently, the Ms+Bt+St+And+Sill+Crd mineral assemblage observed in the paragneisses is the result of a polyphase evolution and is characterised by the metastable persistence of staurolite in the stability field of cordierite. Geologic, geochronologic and petrographic data suggest that the thermal peak recorded by the intermediate/upper crust could be strictly connected with the emplacement of large volumes of granitoid magmas in the middle crust. Probably, lithospheric extension in the relatively heated crust favoured the ascent and emplacement of the granitoids and the further exhumation of the metamorphic rocks. After a comparison among the tectono-metamorphic evolutions of the different Hercynian crustal levels exposed in Sila, it is concluded that the intermediate/upper crustal level offers the possibility of reconstructing a more detailed tectono-metamorphic history. The P-T paths proposed for the lower crustal levels probably underestimate the amount of decompression.
Apart from these considerations, the comparative analysis indicates that the P-T paths of the various crustal levels in the Sila cross-section are compatible with a single geologic scenario, characterized by post-collisional extensional tectonics and magma ascent.

Relevance:

30.00%

Publisher:

Abstract:

The concept of "sustainability" relates to prolonging human economic systems with as little detrimental impact on ecological systems as possible. Construction that exhibits good environmental stewardship, and practices that conserve resources in a manner that allows growth and development to be sustained for the long term without degrading the environment, are indispensable in a developed society. Past, current and future advancements in asphalt as an environmentally sustainable paving material are especially important because the quantities of asphalt used annually in Europe, as well as in the U.S., are large. The asphalt industry is still developing technological improvements that will reduce the environmental impact without affecting the final mechanical performance. Warm mix asphalt (WMA) is a type of asphalt mix requiring lower production temperatures than hot mix asphalt (HMA), while aiming to maintain the desired post-construction properties of traditional HMA. Lowering the production temperature reduces fuel usage and emissions, improves conditions for workers, and supports sustainable development. The crumb-rubber modifier (CRM), obtained from shredded automobile tires and used in the United States since the mid-1980s, has also proven to be an environmentally friendly alternative to conventional asphalt pavement. Furthermore, the use of waste tires is relevant not only from an environmental standpoint but also for the engineering properties of asphalt [Pennisi E., 1992]. This research project aims to demonstrate the dual value of these asphalt mixes with regard to environmental and mechanical performance, and to suggest a low-environmental-impact design procedure. In fact, the use of eco-friendly materials is the first phase of an eco-compatible design, but it cannot be the only step. The eco-compatible approach should also be extended to the design method and to material characterization, because only through these phases is it possible to exploit the maximum potential of the materials used. Appropriate asphalt concrete characterization is essential for realistic performance prediction of asphalt concrete pavements. Volumetric (mix design) and mechanical (permanent deformation and fatigue) properties are important factors to consider. Moreover, an advanced and efficient design method is necessary in order to use the material correctly. A design method based on a mechanistic-empirical approach, consisting of a structural model capable of predicting the state of stresses and strains within the pavement structure under different traffic and environmental conditions, was the application of choice. In particular, this study focuses on CalME and its incremental-recursive (I-R) procedure, based on damage models for fatigue and permanent shear strain, related to surface cracking and rutting respectively. It works in time increments, recursively using the output of one increment as input to the next, and predicts the pavement conditions in terms of layer moduli, fatigue cracking, rutting and roughness. This software procedure was adopted in order to verify the mechanical properties of the study mixes and the reciprocal relationship between the surface layer and the pavement structure in terms of fatigue and permanent deformation under defined traffic and environmental conditions. The asphalt mixes studied were used in a pavement structure as a surface layer of 60 mm thickness.
The performance of this pavement was compared to that of the same pavement structure with different kinds of asphalt concrete as the surface layer. Three eco-friendly materials, two warm mix asphalts and a rubberized asphalt concrete, were analyzed in comparison to a conventional asphalt concrete. The first two chapters summarize the steps needed to satisfy the sustainable pavement design procedure. In Chapter I the problem of eco-compatible asphalt pavement design is introduced: the low-environmental-impact materials, warm mix asphalt and rubberized asphalt concrete, are described in detail, and the value of a rational asphalt pavement design method is discussed. Chapter II underlines the importance of a thorough laboratory characterization based on appropriate material selection and performance evaluation. In Chapter III, CalME is introduced through an explanation of its design approaches, with a specific focus on the I-R procedure. In Chapter IV, the experimental program is presented, with an explanation of the laboratory test devices adopted. The fatigue and rutting performances of the study mixes are shown in Chapters V and VI, respectively. From these laboratory test data, the CalME I-R model parameters for the master curve, fatigue damage and permanent shear strain were evaluated. Lastly, in Chapter VII, the results of the simulations of asphalt pavement structures with different surface layers are reported. For each pavement structure, the total surface cracking, the total rutting, the fatigue damage and the rut depth in each bound layer were analyzed.
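A minimal sketch of the incremental-recursive idea follows: each time increment updates the per-layer state, and the damaged moduli produced by one increment feed the next. The damage-growth and stiffness-reduction formulas below are crude placeholders, not CalME's actual models, and all names are illustrative.

```python
def simulate(layers, n_increments, traffic_per_increment):
    """Incremental-recursive loop sketch.

    layers: list of dicts, each with an initial modulus "E0" and a
            placeholder damage coefficient "k" (both hypothetical inputs)
    """
    state = {"modulus": [l["E0"] for l in layers],
             "damage": [0.0] * len(layers)}
    history = []
    for _ in range(n_increments):
        for i, layer in enumerate(layers):
            # placeholder response model: strain grows as the layer softens
            strain = traffic_per_increment / state["modulus"][i]
            # placeholder damage accumulation, capped at full damage
            state["damage"][i] = min(1.0, state["damage"][i] + layer["k"] * strain)
            # the damaged modulus of this increment is the input to the next
            state["modulus"][i] = layer["E0"] * (1.0 - 0.5 * state["damage"][i])
        history.append({k: list(v) for k, v in state.items()})
    return history
```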

Relevance:

30.00%

Publisher:

Abstract:

Blue straggler stars (BSSs) are brighter and bluer (hotter) than the main-sequence (MS) turnoff and are known to be more massive than MS stars. Two main scenarios for their formation have been proposed: collision-induced stellar mergers (COL-BSSs), or mass transfer in binary systems (MT-BSSs). Depleted surface abundances of C and O are expected for MT-BSSs, whereas no chemical anomalies are predicted for COL-BSSs. Both MT- and COL-BSSs should rotate fast, but braking mechanisms may intervene with efficiencies and time-scales that are not yet well known, thus preventing a clear prediction of the expected rotational velocities. Within this context, an extensive survey is ongoing with the multi-object spectrograph FLAMES@VLT, with the aim of obtaining abundance patterns and rotational velocities for representative samples of BSSs in several Galactic globular clusters (GCs). A sub-population of CO-depleted BSSs has been identified in 47 Tuc, with only one fast-rotating star detected. For this PhD thesis I analyzed FLAMES spectra of more than 130 BSSs in four GCs: M4, NGC 6397, M30 and ω Centauri. This is the largest sample of BSSs spectroscopically investigated so far. Hints of CO depletion have been observed in only 4-5 cases (in M30 and ω Centauri), suggesting either that the majority of BSSs have a collisional origin, or that CO depletion is a transient phenomenon. Unfortunately, no conclusions about the formation mechanism could be drawn in a large number of cases because of the effects of radiative levitation. Remarkably, however, this is the first time that evidence of radiative levitation has been found in BSSs hotter than 8200 K. Finally, we also discovered the largest fractions of fast-rotating BSSs ever observed in any GC: 40% in M4 and 30% in ω Centauri. While not solving the problem of BSS formation, these results provide invaluable information about the physical properties of BSSs, which is crucial for building realistic models of their evolution.

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents several data processing and compression techniques capable of addressing the strict requirements of wireless sensor networks. After a general overview of sensor networks, the energy problem is introduced, dividing the different energy-reduction approaches according to the subsystem they try to optimize. To manage the complexity brought by these techniques, a quick overview of the most common middlewares for WSNs is given, describing in detail SPINE2, a framework for data processing in the node environment. The focus is then shifted to in-network aggregation techniques, used to reduce the data sent by the network nodes and thus prolong the network lifetime as much as possible. Among the several techniques, the most promising approach is Compressive Sensing (CS). To investigate this technique, a practical implementation of the algorithm is compared against a simpler aggregation scheme, deriving a mixed algorithm able to successfully reduce the power consumption. The analysis then moves from compression implemented on single nodes to CS for signal ensembles, trying to exploit the correlations among sensors and nodes to improve compression and reconstruction quality. The two main techniques for signal ensembles, Distributed CS (DCS) and Kronecker CS (KCS), are introduced and compared on a common set of data gathered by real deployments, and the best trade-off between reconstruction quality and power consumption is investigated. The use of CS is also addressed when the signal of interest is sampled at a sub-Nyquist rate, evaluating the reconstruction performance. Finally, group-sparsity CS (GS-CS) is compared to another well-known technique for the reconstruction of signals from a highly sub-sampled version. These two frameworks are again compared on a real data set, and an insightful analysis of the trade-off between reconstruction quality and lifetime is given.
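To make the CS idea concrete, the sketch below recovers a sparse signal from random projections with Orthogonal Matching Pursuit, a standard CS reconstruction algorithm (one of several; the thesis does not necessarily use this one). The toy dimensions and all names are illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x.

    A: (m, n) sensing matrix with m << n; y: (m,) compressed measurements.
    """
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit of y on the columns selected so far
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

# toy usage: a length-256 signal with 5 non-zeros, observed via 64 projections
rng = np.random.default_rng(0)
n, m, k = 256, 64, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_hat = omp(A, A @ x_true, k)
```

The appeal for WSNs is that the node-side work is only the cheap projection y = A @ x; all reconstruction cost is pushed to the sink.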

Relevance:

30.00%

Publisher:

Abstract:

The new generation of multicore processors opens new perspectives for the design of embedded systems. Multiprocessing, however, poses new challenges to the scheduling of real-time applications, in which ever-increasing computational demands are constantly flanked by the need to meet critical time constraints. Many research works have contributed to this field by introducing new advanced scheduling algorithms. However, although many of these works have solidly demonstrated their effectiveness, the actual support for multiprocessor real-time scheduling offered by current operating systems is still very limited. This dissertation deals with the implementation aspects of real-time schedulers in modern embedded multiprocessor systems. The first contribution is an open-source scheduling framework capable of realizing complex multiprocessor scheduling policies, such as G-EDF, on conventional operating systems, exploiting only their native scheduler from user space. A set of experimental evaluations compares the proposed solution to other research projects that pursue the same goals by means of kernel modifications, highlighting comparable scheduling performance. The principles that underpin the operation of the framework, originally designed for symmetric multiprocessors, have been further extended, first to asymmetric multiprocessors, which are subject to major restrictions such as the lack of support for task migrations, and later to re-programmable hardware architectures (FPGAs). In the latter case, this work introduces a scheduling accelerator, which offloads most of the scheduling operations to the hardware and exhibits extremely low scheduling jitter. The realization of a portable scheduling framework presented many interesting software challenges, one of which was timekeeping. In this regard, a further contribution is a novel data structure, called the addressable binary heap (ABH). The ABH, which is conceptually a pointer-based implementation of a binary heap, shows very interesting average- and worst-case performance when addressing the problem of tick-less timekeeping with high-resolution timers.
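As an illustration of addressable timer storage, the sketch below implements a min-heap whose entries can be removed in O(log n) through a handle, which is what timer cancellation needs. Note that this array-backed variant with a handle-to-index map is only an analogue of the thesis's ABH, which is pointer-based; all names are illustrative.

```python
class AddressableHeap:
    """Min-heap of timer expiries; every push returns a handle usable for
    O(log n) cancellation, so re-arming a timer never needs a linear scan."""

    def __init__(self):
        self._a = []     # heap array of (key, handle) pairs
        self._pos = {}   # handle -> current index in self._a
        self._next = 0   # next handle to hand out

    def _swap(self, i, j):
        self._a[i], self._a[j] = self._a[j], self._a[i]
        self._pos[self._a[i][1]] = i
        self._pos[self._a[j][1]] = j

    def _sift_up(self, i):
        while i > 0 and self._a[i][0] < self._a[(i - 1) // 2][0]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self._a)
        while True:
            l, r, m = 2 * i + 1, 2 * i + 2, i
            if l < n and self._a[l][0] < self._a[m][0]:
                m = l
            if r < n and self._a[r][0] < self._a[m][0]:
                m = r
            if m == i:
                return
            self._swap(i, m)
            i = m

    def push(self, key):
        h = self._next
        self._next += 1
        self._a.append((key, h))
        self._pos[h] = len(self._a) - 1
        self._sift_up(len(self._a) - 1)
        return h

    def remove(self, h):
        """Cancel an arbitrary timer by handle in O(log n)."""
        i = self._pos.pop(h)
        last = self._a.pop()
        if i < len(self._a):          # fill the hole with the last element
            self._a[i] = last
            self._pos[last[1]] = i
            self._sift_up(i)
            self._sift_down(self._pos[last[1]])

    def pop_min(self):
        """Expire the earliest timer."""
        key, h = self._a[0]
        self.remove(h)
        return key, h
```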

Relevance:

30.00%

Publisher:

Abstract:

Bioinformatics, in the last few decades, has played a fundamental role in making sense of the huge amount of data produced. Once the complete sequence of a genome has been obtained, the major problem is to learn as much as possible about its coding regions. Protein sequence annotation is challenging and, given the size of the problem, only computational approaches can provide a feasible solution. As has recently been pointed out by the Critical Assessment of Function Annotations (CAFA), the most accurate methods are those based on the transfer-by-homology approach, and the most incisive contribution is given by cross-genome comparisons. The present thesis describes a non-hierarchical sequence clustering method for automatic large-scale protein annotation, called "The Bologna Annotation Resource Plus" (BAR+). The method is based on an all-against-all alignment of more than 13 million protein sequences, characterized by a very stringent metric. BAR+ can safely transfer functional features (Gene Ontology and Pfam terms) inside clusters by means of a statistical validation, even in the case of multi-domain proteins. Within BAR+ clusters it is also possible to transfer the three-dimensional structure (when a template is available). This is made possible by cluster-specific HMM profiles that can be used to calculate reliable template-to-target alignments even in the case of distantly related proteins (sequence identity < 30%). Other BAR+-based applications developed during my doctorate include the prediction of magnesium-binding sites in human proteins, the classification of the ABC transporter superfamily, and the functional prediction (GO terms) of the CAFA targets. Remarkably, in the CAFA assessment, BAR+ placed among the ten most accurate methods. At present, BAR+ is freely available as a web server for functional and structural protein sequence annotation at http://bar.biocomp.unibo.it/bar2.0.
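As a rough illustration of non-hierarchical clustering over stringent pairwise links, the sketch below joins sequences into clusters by transitive closure with a union-find structure; the identity and coverage thresholds and all names are illustrative placeholders, not the actual BAR+ metric or its statistical validation.

```python
def cluster(pairs, n_seqs, min_id=0.40, min_cov=0.90):
    """Group sequences into clusters by transitive closure over pairwise
    alignments that pass a stringent threshold (placeholder values).

    pairs: iterable of (a, b, identity, coverage) from an all-against-all
           alignment, with a and b being sequence indices in [0, n_seqs).
    """
    parent = list(range(n_seqs))

    def find(x):                        # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b, identity, coverage in pairs:
        if identity >= min_id and coverage >= min_cov:
            parent[find(a)] = find(b)   # union: a and b join one cluster

    clusters = {}
    for s in range(n_seqs):
        clusters.setdefault(find(s), []).append(s)
    return list(clusters.values())
```

Once clusters are formed this way, annotations (e.g., GO or Pfam terms) attached to any member can be considered for transfer to the whole cluster, subject to the statistical validation the thesis describes.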