64 results for exploit


Relevance:

10.00%

Publisher:

Abstract:

The term Ambient Intelligence (AmI) refers to a vision of the future of the information society in which smart electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday activities, tasks and rituals in an easy, natural way, using information and intelligence that is hidden in the network connecting these devices. This promotes the creation of pervasive environments that improve the quality of life of the occupants and enhance the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligence systems are heterogeneous and require close cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile sensors are embedded in and deployed throughout the environment, the Wireless Sensor Network (WSN) is one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes which can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontrollers, FPGAs, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy-scavenging modules). WSNs promise to revolutionize the interaction between the real physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed. Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enables them to reduce interference with the physical phenomena being sensed and allows easy, low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local "understanding" of the observed phenomena. A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before it is transmitted due to the scarce bandwidth of radio interfaces. In particular, in video surveillance, it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, which has the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something 'interesting' is detected). The energy cost of image processing must, however, be carefully minimized. Imaging plays an important role in sensing devices for ambient intelligence.
Computer vision can, for instance, be used for recognising persons and objects and for recognising behaviour such as illness and rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis. More eyes see more than one, and a camera system that can observe a scene from multiple directions is able to overcome occlusion problems and describe objects in their true 3D appearance. In real time, these approaches are a recently opened field of research. In this thesis we pay attention to the realities of hardware/software technologies and to the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics, outlined below. Although the design of a sensor network and its sensor nodes is strictly application dependent, a number of constraints should almost always be considered. Among them:
• Small form factor, to reduce node intrusiveness.
• Low power consumption, to reduce battery size and extend node lifetime.
• Low cost, for widespread diffusion.
These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, with which only simple data-processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through the adoption of ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low-Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance. Low-Power Video Sensor Nodes and Video Processing Algorithms: in comparison to scalar sensors, such as temperature, pressure, humidity, velocity and acceleration sensors, vision sensors generate much higher-bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes. We have designed and developed wireless video sensor nodes focusing on small size and flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first one is based on a low-cost, low-power FPGA+microcontroller system-on-chip; the second one is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate in a continuous fashion with a Li-Polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes are presented which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally.
Featuring such intelligence, these nodes would be able to cope with tasks such as the recognition of unattended bags in airports or of persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data. Multimodal Surveillance: in several setups the use of wired video cameras may not be possible. For this reason, building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community. Pyroelectric Infra-Red (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. Such an approach has been shown to extend node lifetime and can possibly result in continuous operation of the node. Being low-cost, passive (thus low-power) and small in form factor, PIR sensors are well suited to WSN applications. Moreover, aggressive power management policies are essential for achieving long-term operation of standalone distributed cameras. We have used an adaptive controller based on Model Predictive Control (MPC) to improve system performance, outperforming naive power management policies.
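
To make the last point concrete, the following is a minimal Python sketch of a receding-horizon (MPC-style) duty-cycle controller for a solar-powered camera node. The battery model, power figures and harvest forecast are invented for illustration and are not the models or parameters used in the thesis; the point is only to show how a predictive policy re-plans at every slot instead of applying a fixed naive duty cycle.

```python
import itertools

# Hypothetical node model: battery state (mWh), a forecast of harvested solar
# energy per slot, and a discrete set of camera duty cycles to choose from.
DUTY_CYCLES = [0.0, 0.25, 0.5, 1.0]     # fraction of the slot the camera is active
CAM_POWER_MW = 450.0                    # camera + radio draw when active (assumed)
IDLE_POWER_MW = 15.0                    # MCU + PIR baseline draw (assumed)
SLOT_HOURS = 1.0
BATTERY_CAPACITY_MWH = 4000.0

def step(battery_mwh, duty, harvest_mwh):
    """Advance the battery state by one slot under a given duty cycle."""
    load = (duty * CAM_POWER_MW + (1 - duty) * IDLE_POWER_MW) * SLOT_HOURS
    return min(BATTERY_CAPACITY_MWH, max(0.0, battery_mwh - load + harvest_mwh))

def mpc_choose_duty(battery_mwh, harvest_forecast, horizon=4):
    """Receding-horizon choice: enumerate duty-cycle plans over a short horizon,
    keep plans that never empty the battery, and maximise total sensing time."""
    horizon = max(1, min(horizon, len(harvest_forecast)))
    best_first, best_score = min(DUTY_CYCLES), -1.0
    for plan in itertools.product(DUTY_CYCLES, repeat=horizon):
        b, feasible = battery_mwh, True
        for duty, harvest in zip(plan, harvest_forecast):
            b = step(b, duty, harvest)
            if b <= 0.0:
                feasible = False
                break
        score = sum(plan)               # total sensing time over the horizon
        if feasible and score > best_score:
            best_first, best_score = plan[0], score
    # Apply only the first action, then re-plan at the next slot.
    return best_first

if __name__ == "__main__":
    forecast = [0.0, 50.0, 300.0, 600.0, 500.0, 100.0]   # mWh expected per slot
    battery = 1200.0
    for t in range(6):
        duty = mpc_choose_duty(battery, forecast[t:])
        battery = step(battery, duty, forecast[t])
        print(f"slot {t}: duty={duty:.2f}, battery={battery:.0f} mWh")
```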

Relevance:

10.00%

Publisher:

Abstract:

Two of the main features of today's complex software systems, such as pervasive computing systems and Internet-based applications, are distribution and openness. Distribution revolves around three orthogonal dimensions: (i) distribution of control: systems are characterised by several independent computational entities and devices, each representing an autonomous and proactive locus of control; (ii) spatial distribution: entities and devices are physically distributed and connected in a global (such as the Internet) or local network; and (iii) temporal distribution: interacting system components come and go over time, and are not required to be available for interaction at the same time. Openness deals with the heterogeneity and dynamism of system components: complex computational systems are open to the integration of diverse components, heterogeneous in terms of architecture and technology, and are dynamic since they allow components to be updated, added, or removed while the system is running. The engineering of open and distributed computational systems mandates the adoption of a software infrastructure whose underlying model and technology can provide the required level of uncoupling among system components. This is the main motivation behind current research trends in the area of coordination middleware to exploit tuple-based coordination models in the engineering of complex software systems, since they intrinsically provide coordinated components with communication uncoupling. An additional daunting challenge for tuple-based models comes from knowledge-intensive application scenarios, namely, scenarios where most of the activities are based on knowledge in some form, and where knowledge becomes the prominent means by which systems get coordinated. Handling knowledge in tuple-based systems induces problems in terms of syntax (e.g., two tuples containing the same data may not match due to differences in tuple structure) and, mostly, in terms of semantics (e.g., two tuples representing the same information may not match because of the different syntax adopted). Until now, the problem has been faced by exploiting tuple-based coordination within middleware for knowledge-intensive environments: for example, experiments with tuple-based coordination within Semantic Web middleware, and analogous approaches surveyed in the literature. However, these appear to be designed to tackle the design of coordination for specific application contexts such as the Semantic Web and Semantic Web Services, and they result in a rather involved extension of the tuple space model. The main goal of this thesis was to conceive a more general approach to semantic coordination. In particular, the model and technology of semantic tuple centres were developed. The tuple centre model is adopted as the main coordination abstraction to manage system interactions. A tuple centre can be seen as a programmable tuple space, i.e. an extension of a Linda tuple space, where the behaviour of the tuple space can be programmed so as to react to interaction events. By encapsulating coordination laws within coordination media, tuple centres promote coordination uncoupling among coordinated components. Then, the tuple centre model was semantically enriched: a main design choice in this work was not to completely redesign the existing syntactic tuple space model, but rather to provide a smooth extension that, although supporting semantic reasoning, keeps tuples and tuple matching as simple as possible.
By encapsulating the semantic representation of the domain of discourse within coordination media, semantic tuple centres promote semantic uncoupling among coordinated components. The main contributions of the thesis are: (i) the design of the semantic tuple centre model; (ii) the implementation and evaluation of the model on top of an existing coordination infrastructure; (iii) a view of the application scenarios in which semantic tuple centres seem to be suitable as coordination media.
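
For readers unfamiliar with Linda-style coordination, here is a minimal Python sketch of a tuple space with out/rd/in primitives and a programmable reaction hook, which is the essence of the tuple centre idea. It is only an illustration: the class, the wildcard convention and the example reaction are invented here and do not reflect the actual coordination infrastructure or the semantic extension developed in the thesis.

```python
from collections import defaultdict
from typing import Callable, List, Tuple

class TupleSpace:
    """A minimal Linda-style tuple space: out/rd/in with template matching.
    Templates use None as a wildcard field."""
    def __init__(self):
        self.tuples: List[tuple] = []
        self.reactions: List[Tuple[str, tuple, Callable]] = []

    def _matches(self, template, tup):
        return len(template) == len(tup) and all(
            t is None or t == f for t, f in zip(template, tup))

    def out(self, tup):                      # insert a tuple
        self.tuples.append(tup)
        self._fire("out", tup)

    def rd(self, template):                  # read without removing
        for tup in self.tuples:
            if self._matches(template, tup):
                return tup
        return None

    def in_(self, template):                 # read and remove
        for i, tup in enumerate(self.tuples):
            if self._matches(template, tup):
                self._fire("in", tup)
                return self.tuples.pop(i)
        return None

    # --- "tuple centre" flavour: reactions programmed over interaction events ---
    def react_to(self, event, template, body):
        self.reactions.append((event, template, body))

    def _fire(self, event, tup):
        for ev, template, body in self.reactions:
            if ev == event and self._matches(template, tup):
                body(self, tup)

ts = TupleSpace()

# Reaction: whenever a temperature reading is inserted, also keep a running count.
def count_readings(space, tup):
    current = space.in_(("count", None)) or ("count", 0)
    space.tuples.append(("count", current[1] + 1))   # append directly: avoid re-firing

ts.react_to("out", ("temp", None, None), count_readings)
ts.out(("temp", "room1", 21.5))
ts.out(("temp", "room2", 23.0))
print(ts.rd(("count", None)))   # ('count', 2)
```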

Relevance:

10.00%

Publisher:

Abstract:

The "sustainability" concept relates to the prolonging of human economic systems with as little detrimental impact on ecological systems as possible. Construction that exhibits good environmental stewardship and practices that conserve resources in a manner that allow growth and development to be sustained for the long-term without degrading the environment are indispensable in a developed society. Past, current and future advancements in asphalt as an environmentally sustainable paving material are especially important because the quantities of asphalt used annually in Europe as well as in the U.S. are large. The asphalt industry is still developing technological improvements that will reduce the environmental impact without affecting the final mechanical performance. Warm mix asphalt (WMA) is a type of asphalt mix requiring lower production temperatures compared to hot mix asphalt (HMA), while aiming to maintain the desired post construction properties of traditional HMA. Lowering the production temperature reduce the fuel usage and the production of emissions therefore and that improve conditions for workers and supports the sustainable development. Even the crumb-rubber modifier (CRM), with shredded automobile tires and used in the United States since the mid 1980s, has proven to be an environmentally friendly alternative to conventional asphalt pavement. Furthermore, the use of waste tires is not only relevant in an environmental aspect but also for the engineering properties of asphalt [Pennisi E., 1992]. This research project is aimed to demonstrate the dual value of these Asphalt Mixes in regards to the environmental and mechanical performance and to suggest a low environmental impact design procedure. In fact, the use of eco-friendly materials is the first phase towards an eco-compatible design but it cannot be the only step. The eco-compatible approach should be extended also to the design method and material characterization because only with these phases is it possible to exploit the maximum potential properties of the used materials. Appropriate asphalt concrete characterization is essential and vital for realistic performance prediction of asphalt concrete pavements. Volumetric (Mix design) and mechanical (Permanent deformation and Fatigue performance) properties are important factors to consider. Moreover, an advanced and efficient design method is necessary in order to correctly use the material. A design method such as a Mechanistic-Empirical approach, consisting of a structural model capable of predicting the state of stresses and strains within the pavement structure under the different traffic and environmental conditions, was the application of choice. In particular this study focus on the CalME and its Incremental-Recursive (I-R) procedure, based on damage models for fatigue and permanent shear strain related to the surface cracking and to the rutting respectively. It works in increments of time and, using the output from one increment, recursively, as input to the next increment, predicts the pavement conditions in terms of layer moduli, fatigue cracking, rutting and roughness. This software procedure was adopted in order to verify the mechanical properties of the study mixes and the reciprocal relationship between surface layer and pavement structure in terms of fatigue and permanent deformation with defined traffic and environmental conditions. The asphalt mixes studied were used in a pavement structure as surface layer of 60 mm thickness. 
The performance of this pavement was compared to the performance of the same pavement structure where different kinds of asphalt concrete were used as the surface layer. In comparison with a conventional asphalt concrete, three eco-friendly materials (two warm mix asphalts and a rubberized asphalt concrete) were analyzed. The first two chapters summarize the steps needed to satisfy the sustainable pavement design procedure. In Chapter I the problem of eco-compatible asphalt pavement design is introduced. Low-environmental-impact materials such as Warm Mix Asphalt and Rubberized Asphalt Concrete are described in detail. In addition, the value of a rational asphalt pavement design method is discussed. Chapter II underlines the importance of a deep laboratory characterization based on appropriate material selection and performance evaluation. In Chapter III, CalME is introduced through an explanation of the different design approaches it provides, with a specific focus on the I-R procedure. In Chapter IV, the experimental program is presented, with an explanation of the laboratory test devices adopted. The fatigue and rutting performance of the study mixes are shown in Chapters V and VI respectively. From these laboratory test data, the CalME I-R model parameters for the master curve, fatigue damage and permanent shear strain were evaluated. Lastly, in Chapter VII, the results of the simulations of asphalt pavement structures with different surface layers are reported. For each pavement structure, the total surface cracking, the total rutting, the fatigue damage and the rutting depth in each bound layer were analyzed.
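
As a purely schematic illustration of the incremental-recursive idea described above, the Python sketch below steps a single-layer model through time, degrading the layer modulus with a deliberately made-up damage law and feeding the degraded modulus back into the next increment. The coefficients, the response model and the damage law are invented placeholders, not CalME's actual master-curve, fatigue or shear models.

```python
# Schematic incremental-recursive pavement simulation (hypothetical damage law,
# NOT the actual CalME models): each time increment degrades the asphalt-layer
# modulus, and the degraded modulus is fed recursively into the next increment.
E0_MPA = 8000.0               # assumed initial surface-layer modulus
ALPHA, BETA = 1.0e-7, 0.45    # made-up fatigue damage coefficients

def strain_microdef(modulus_mpa, axle_load_kn):
    """Placeholder response model: tensile strain grows as the layer softens."""
    return 200.0 * (axle_load_kn / 80.0) * (E0_MPA / modulus_mpa) ** 0.5

def run_incremental_recursive(months, traffic_per_month, axle_load_kn=80.0):
    modulus, damage = E0_MPA, 0.0
    history = []
    for m in range(months):
        eps = strain_microdef(modulus, axle_load_kn)          # response in this increment
        damage += ALPHA * traffic_per_month * (eps / 100.0) ** BETA
        damage = min(damage, 0.99)
        modulus = E0_MPA * (1.0 - damage)                     # recursive stiffness update
        history.append((m + 1, modulus, damage))
    return history

for month, E, w in run_incremental_recursive(months=120, traffic_per_month=50000):
    if month % 24 == 0:
        print(f"month {month:3d}: modulus = {E:7.0f} MPa, damage = {w:.3f}")
```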

Relevance:

10.00%

Publisher:

Abstract:

Adaptive Optics is the real-time measurement and correction of the wavefront aberration of starlight caused by atmospheric turbulence, which limits the angular resolution of ground-based telescopes and thus their ability to explore faint and crowded astronomical objects in depth. The lack of natural stars bright enough to be used as reference sources for Adaptive Optics over a relevant fraction of the sky led to the introduction of artificial reference stars. The so-called Laser Guide Stars are produced by exciting the sodium atoms in a layer lying at about 90 km of altitude with a powerful laser beam projected toward the sky. The possibility of turning on a reference star close to the scientific targets of interest has the drawback of an increased difficulty in wavefront measurement, mainly due to the temporal instability of the sodium layer density. These issues grow with the telescope diameter. In view of the construction of the 42 m diameter European Extremely Large Telescope, a detailed investigation of the achievable performance of Adaptive Optics becomes mandatory in order to exploit its unique angular resolution. The goal of this thesis was to present a complete description of the development of a laboratory prototype simulating a Shack-Hartmann wavefront sensor that uses Laser Guide Stars as references, under the conditions expected for a 42 m telescope. From the conceptual design, through the opto-mechanical design, to the Assembly, Integration and Test, all the phases of the prototype construction are explained. The tests carried out showed the reliability of the images produced by the prototype, which agreed with the numerical simulations. For this reason, some possible upgrades to the opto-mechanical design are presented, to extend the system's functionality and let the prototype become a more complete test bench to simulate the performance of, and drive the design of, future Adaptive Optics modules.
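
For readers unfamiliar with the sensing principle, a Shack-Hartmann sensor samples the pupil with a lenslet array and estimates the local wavefront slope in each subaperture from the displacement of the focused spot. The numpy sketch below shows that centroid-based slope estimate on a toy frame; the grid size, pixel scale and Gaussian spots are invented, and real sensors add thresholding, calibration and far more elaborate spot handling (especially with elongated Laser Guide Star spots).

```python
import numpy as np

def shack_hartmann_slopes(image, n_sub, pixel_scale=1.0):
    """Split a detector frame into an n_sub x n_sub grid of subapertures and
    return local wavefront slopes estimated from the spot-centroid offsets."""
    h, w = image.shape
    sy, sx = h // n_sub, w // n_sub
    slopes = np.zeros((n_sub, n_sub, 2))
    yy, xx = np.mgrid[0:sy, 0:sx]
    for i in range(n_sub):
        for j in range(n_sub):
            sub = image[i*sy:(i+1)*sy, j*sx:(j+1)*sx].astype(float)
            flux = sub.sum()
            if flux <= 0:
                continue
            cy = (yy * sub).sum() / flux      # spot centroid within the subaperture
            cx = (xx * sub).sum() / flux
            # Offset from the subaperture centre is proportional to the local tilt.
            slopes[i, j, 0] = (cx - (sx - 1) / 2) * pixel_scale
            slopes[i, j, 1] = (cy - (sy - 1) / 2) * pixel_scale
    return slopes

# Toy frame: 8x8 subapertures of 16x16 pixels, each with one Gaussian spot.
rng = np.random.default_rng(0)
frame = np.zeros((128, 128))
y, x = np.mgrid[0:128, 0:128]
for i in range(8):
    for j in range(8):
        y0 = i*16 + 7.5 + rng.normal(0, 1.5)   # a random tilt shifts the spot
        x0 = j*16 + 7.5 + rng.normal(0, 1.5)
        frame += np.exp(-((y - y0)**2 + (x - x0)**2) / (2 * 2.0**2))
print(shack_hartmann_slopes(frame, n_sub=8)[0, 0])
```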

Relevance:

10.00%

Publisher:

Abstract:

This work presents hybrid Constraint Programming (CP) and metaheuristic methods for the solution of Large Scale Optimization Problems; it aims at integrating concepts and mechanisms from metaheuristic methods into a CP-based tree search environment in order to exploit the advantages of both approaches. The modeling and solution of large-scale combinatorial optimization problems is a topic which has aroused the interest of many researchers in the Operations Research field; combinatorial optimization problems are widespread in everyday life and the need to solve difficult problems is more and more urgent. Metaheuristic techniques have been developed over the last decades to effectively handle the approximate solution of combinatorial optimization problems; we examine metaheuristics in detail, focusing on the aspects common to different techniques. Each metaheuristic approach possesses its own peculiarities in designing and guiding the solution process; our work aims at recognizing components which can be extracted from metaheuristic methods and re-used in different contexts. In particular, we focus on the possibility of porting metaheuristic elements to constraint programming based environments, as constraint programming is able to deal with the feasibility issues of optimization problems in a very effective manner. Moreover, CP offers a general paradigm which makes it easy to model any type of problem and solve it with a problem-independent framework, unlike local search and metaheuristic methods, which are highly problem-specific. In this work we describe the implementation of the Local Branching framework, originally developed for Mixed Integer Programming, in a CP-based environment. Constraint-programming-specific features are used to ease the search process, while still maintaining the full generality of the approach. We also propose a search strategy called Sliced Neighborhood Search (SNS), which iteratively explores slices of large neighborhoods of an incumbent solution by performing CP-based tree search, and which incorporates concepts from metaheuristic techniques. SNS can be used as a stand-alone search strategy, but it can alternatively be embedded in existing strategies as an intensification and diversification mechanism. In particular, we show its integration within the CP-based local branching. We provide an extensive experimental evaluation of the proposed approaches on instances of the Asymmetric Traveling Salesman Problem and of the Asymmetric Traveling Salesman Problem with Time Windows. The proposed approaches achieve good results on practical-size problems, thus demonstrating the benefit of integrating metaheuristic concepts in CP-based frameworks.
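
For intuition, local branching restricts the search to solutions within a bounded Hamming distance k of the incumbent by posting the constraint sum_{j: xbar_j = 1} (1 - x_j) + sum_{j: xbar_j = 0} x_j <= k. The Python sketch below enumerates such a neighborhood exhaustively on a tiny invented 0-1 problem, standing in for the tree search a CP solver would perform once the constraint is posted; it is not the thesis's CP implementation nor its SNS strategy.

```python
from itertools import combinations

def local_branching_neighborhood(incumbent, k):
    """Yield all 0-1 vectors whose Hamming distance from the incumbent is <= k,
    i.e. the neighborhood induced by the local branching constraint."""
    n = len(incumbent)
    for d in range(k + 1):
        for flips in combinations(range(n), d):
            x = list(incumbent)
            for j in flips:
                x[j] = 1 - x[j]
            yield tuple(x)

def solve_subproblem(incumbent, k, objective, feasible):
    """Toy 'slice' exploration: scan the k-neighborhood exhaustively and return
    the best feasible improving solution, if any (a CP solver would do this by
    tree search with the local branching constraint posted)."""
    best, best_val = None, objective(incumbent)
    for x in local_branching_neighborhood(incumbent, k):
        if feasible(x) and objective(x) < best_val:
            best, best_val = x, objective(x)
    return best, best_val

# Tiny covering-style example (invented data): minimize cost subject to a
# coverage requirement, starting from a feasible incumbent.
cost = [4, 2, 5, 1, 3]
cover = [2, 1, 3, 1, 2]
objective = lambda x: sum(c * v for c, v in zip(cost, x))
feasible = lambda x: sum(c * v for c, v in zip(cover, x)) >= 4

incumbent = (1, 1, 1, 0, 0)        # cost 11, coverage 6
print(solve_subproblem(incumbent, k=2, objective=objective, feasible=feasible))
```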

Relevance:

10.00%

Publisher:

Abstract:

This thesis examines the literature on local home bias, i.e. investor preference towards geographically nearby stocks, and investigates the role of a firm's visibility, profitability and opacity in explaining such behavior. While a firm's visibility is expected to proxy for the behavioral root of such a preference, its profitability and opacity are expected to capture the informational one. I find that less visible, more profitable and more opaque firms, conditional on demand, benefit from being headquartered in regions characterized by a scarcity of listed firms (the local supply of stocks). Specifically, the estimates suggest that firms headquartered in regions with a poor supply of stocks would be worth i) 11 percent more if non-visible, non-profitable and non-opaque; ii) 16 percent more if profitable; and iii) 28 percent more if both profitable and opaque. Overall, as these features are able to explain most, albeit not all, of the local home bias effect, I argue and then assess that most of the preference for local stocks is determined by a successful attempt to exploit a local information advantage (60 percent), while the rest is determined by a mere (irrational) feeling of familiarity with the local firm (40 percent). Several significant methodological, theoretical and practical implications follow.

Relevance:

10.00%

Publisher:

Abstract:

This PhD thesis reports on car fluff management, recycling and recovery. Car fluff is the residual waste produced by car recycling operations, particularly by hulk shredding. Car fluff is also known as Automotive Shredder Residue (ASR); it is made of plastics, rubbers, textiles, metals and other materials, and it is very heterogeneous in both composition and particle size. In fact, fines may amount to about 50%, making it difficult to sort out recyclable materials or to exploit the heat value of ASR through energy recovery. This three-year study started with a definition of the state of the art of Italian End-of-Life Vehicle (ELV) recycling. A national recycling trial revealed the Italian recycling rate to be around 81% in 2008, while the European Community recycling target is set at 85% by 2015. Consequently, in line with the Industrial Ecology framework, a life cycle assessment (LCA) was conducted, revealing that sorting and recycling the polymers and metals contained in car fluff, followed by recovery of the residual energy, is the route with the best environmental perspective. This result guided the second-year investigation, which involved pyrolysis trials on pretreated ASR fractions aimed at identifying which processes could be suitable for an industrial-scale ASR treatment plant. Sieving followed by flotation gave good results in the thermochemical conversion of polymers, with polyolefins showing excellent conversion rates. This factor triggered ecodesign considerations. Ecodesign, together with LCA, is one of the pillars of Industrial Ecology and consists of design for recycling and design for disassembly, both aimed at improving the dismantling speed of car components and at substituting non-recyclable materials. Finally, during the last year, innovative plants and technologies for metal recovery from car fluff were visited and tested worldwide, in order to design a new car fluff treatment plant aimed at ASR energy and material recovery.

Relevance:

10.00%

Publisher:

Abstract:

The use of the shallow geothermal reservoir for heating and cooling purposes is a well-established technique that makes it possible to exploit, through dedicated "geo-exchangers", an energy source that is available everywhere and inexhaustible, at a low price in terms of climate-altering emissions. Full exploitation of this resource is therefore in line with the objectives of the Kyoto Protocol and is described in European Directive 2009/28/EC (commonly known as the Renewables Directive). Given its considerable potential and its sustainable installation and operating costs, shallow geothermal energy has been exploited since the mid-twentieth century in different contexts (geographical, geological and climatic) and for different applications (residential, commercial, industrial, infrastructural). Nevertheless, only since the 2000s have the scientific community and the market become genuinely interested in the subject, following the emergence of favourable economic and technical conditions. A simple and immediate demonstration of this is the fact that, as of 2012, there is still no clear, internationally shared technical reference for the design, installation or testing of the various applications of shallow geothermal energy, despite the multitude of scientific articles published, plants built and trade associations involved during the first decade of the twenty-first century. The present research work is set within this framework. In particular, it presents the progress of the research carried out within the Dipartimento di Ingegneria Civile, Ambientale e dei Materiali in the fields of design and testing of geothermal systems, and it describes some innovative types of geo-exchangers studied, analysed and tested during the research period.

Relevance:

10.00%

Publisher:

Abstract:

In the present work, we apply both traditional and Next Generation Sequencing (NGS) tools to investigate some of the most important adaptive traits of wolves (Canis lupus). In the first part, we analyze the variability of three Major Histocompatibility Complex (MHC) class II genes in the Italian wolf population, also studying their possible role in mate choice and their influence on fitness traits. In the second section, as part of a larger canid genome project, we exploit NGS data to investigate the transcript-level differences between the wolf and dog genomes that can be correlated with domestication.

Relevance:

10.00%

Publisher:

Abstract:

The increase in environmental and health concerns, combined with the possibility of exploiting waste as a valuable energy resource, has led to the exploration of alternative methods for final waste disposal. In this context, the energy conversion of Municipal Solid Waste (MSW) in Waste-To-Energy (WTE) power plants is increasing throughout Europe, both in terms of plant number and capacity, furthered by legislative directives. Due to the heterogeneous nature of waste, some differences with respect to a conventional fossil fuel power plant have to be considered in the energy conversion process. In fact, as a consequence of well-known corrosion problems, the thermodynamic efficiency of WTE power plants typically ranges between 25% and 30%. The new Waste Framework Directive 2008/98/EC promotes the production of energy from waste by introducing an energy efficiency criterion (the so-called "R1 formula") to evaluate plant recovery status. The aim of the Directive is to drive WTE facilities to maximize energy recovery and the utilization of waste heat, in order to substitute energy produced by conventional fossil-fuel-fired power plants. This calls for novel approaches and possibilities to maximize the conversion of MSW into energy. In particular, the idea of an integrated configuration made up of a WTE plant and a Gas Turbine (GT) arises, driven by the desire to eliminate or, at least, mitigate the limitations affecting the WTE conversion process that bound the thermodynamic efficiency of the cycle. The aim of this Ph.D. thesis is to investigate, from a thermodynamic point of view, integrated WTE-GT systems that share the steam cycle, share the flue gas paths, or combine both approaches. The analysis carried out investigates and defines the logic governing the matching of the plants in terms of steam production and steam turbine power output as a function of the thermal power inputs.
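
For reference, the R1 criterion is commonly written as R1 = (Ep - (Ef + Ei)) / (0.97 * (Ew + Ef)), where exported electricity and heat are weighted by 2.6 and 1.1 respectively in Ep. The small Python sketch below evaluates that commonly cited form on invented annual energy flows; the figures and the helper name are illustrative only, and the authoritative definitions of the terms are those of Annex II of the Directive.

```python
def r1_efficiency(elec_mwh, heat_mwh, e_fuel_mwh, e_imported_mwh, e_waste_mwh):
    """Energy efficiency criterion ("R1 formula") in the form commonly cited for
    Directive 2008/98/EC Annex II:
        R1 = (Ep - (Ef + Ei)) / (0.97 * (Ew + Ef))
    Ep weights exported energy: electricity x 2.6, heat x 1.1.  The 0.97 factor
    accounts for losses due to bottom ash and radiation."""
    ep = 2.6 * elec_mwh + 1.1 * heat_mwh
    return (ep - (e_fuel_mwh + e_imported_mwh)) / (0.97 * (e_waste_mwh + e_fuel_mwh))

# Invented figures for a mid-size WTE plant (annual energy flows, MWh).
r1 = r1_efficiency(elec_mwh=120_000, heat_mwh=60_000,
                   e_fuel_mwh=8_000, e_imported_mwh=5_000, e_waste_mwh=520_000)
print(f"R1 = {r1:.2f}  (thresholds of 0.60 / 0.65 qualify a plant as energy recovery)")
```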

Relevance:

10.00%

Publisher:

Abstract:

The main objective of this thesis is the development of a short-term empirical forecasting model able to provide precise and reliable hourly forecasts of electricity consumption for the Italian market. The model summarizes the knowledge acquired and the experience gained during my current work at Romagna Energia S.C.p.A., one of the major Italian players in the energy market. Over the last two decades there have been drastic changes in the structure of the electricity market all over the world. In most industrialized countries the electricity sector has moved from its original monopoly configuration to a liberalized, competitive market, where consumers are free to choose their own supplier. Modelling and forecasting the electricity consumption time series have therefore taken on a very important role in the market, both for policy makers and for operators. Building on the existing literature, exploiting the knowledge acquired "in the field" and some intuitions, a triangular modelling structure, entirely new in this field of research, has been analysed and developed, suggested precisely by the physical mechanism through which electricity is produced and consumed over the 24 hours of the day. This triangular scheme can be seen as a particular VARMA model and has a twofold use, interpretative of the phenomenon on the one hand and predictive on the other. New leading indicators linked to meteorological factors are also introduced, with the aim of improving its forecasting performance. Using the Italian electricity consumption time series from 1 March 2010 to 30 March 2012, the parameters of the proposed forecasting scheme were estimated, and the forecasts for the period from 1 April 2012 to 30 April 2012 were evaluated, comparing them with those provided by official sources.
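
The abstract does not spell out the exact equations of the triangular scheme, so the Python sketch below only illustrates the general idea of a triangular (recursive) hourly system: the equation for hour h is allowed to depend on the same day's earlier hours and on the previous day's load at hour h, and a day-ahead forecast is built hour by hour, feeding each prediction into the equations of the later hours. The toy data, the choice of regressors and the OLS estimation are assumptions made here for illustration, not the thesis's specification.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy hourly-load matrix: one row per day, one column per hour (invented data
# with a daily profile plus noise, only to exercise the code).
days, hours = 200, 24
profile = 30 + 10 * np.sin(np.linspace(0, 2 * np.pi, hours))
load = profile + rng.normal(0, 1.5, size=(days, hours)) \
               + rng.normal(0, 3.0, size=(days, 1))       # day-level shift

def fit_triangular(load):
    """Fit one OLS equation per hour h: regressors are the previous day's load
    at hour h and the *same day's* hours 0..h-1 (the triangular part)."""
    coefs = []
    for h in range(load.shape[1]):
        y = load[1:, h]
        X = np.column_stack([np.ones(len(y)), load[:-1, h], load[1:, :h]])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        coefs.append(beta)
    return coefs

def forecast_day(coefs, prev_day):
    """Forecast the 24 hours of the next day recursively, hour by hour,
    feeding each forecast into the equations of the later hours."""
    pred = []
    for h, beta in enumerate(coefs):
        x = np.concatenate(([1.0, prev_day[h]], pred[:h]))
        pred.append(float(x @ beta))
    return pred

coefs = fit_triangular(load[:-1])                 # train on all but the last day
print(np.round(forecast_day(coefs, prev_day=load[-2]), 1))
```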

Relevance:

10.00%

Publisher:

Abstract:

Our view of globular clusters has changed deeply in the last decade. Modern spectroscopic and photometric data have conclusively established that globulars are neither coeval nor monometallic, reopening the issue of the formation of such systems. Their formation is now schematized as a two-step process, during which the polluted matter from the more massive stars of a first generation gives birth, in the cluster's innermost regions, to a second generation of stars with the characteristic signature of fully CNO-processed matter. To date, star-to-star variations in the abundances of the light elements (C, N, O, Na) have been observed in stars of all evolutionary phases in all properly studied Galactic globular clusters. Multiple or broad evolutionary sequences have also been observed in nearly all the clusters that have been observed with good signal-to-noise in the appropriate photometric bands. The body of evidence suggests that spreads in light-element abundances can be fairly well traced by photometric indices including near-ultraviolet passbands, as CNO abundance variations mainly affect wavelengths shorter than ~400 nm owing to the rise of some NH and CN molecular absorption bands. Here, we exploit this property of near-ultraviolet photometry to trace internal chemical variations and combine it with low-resolution spectroscopy aimed at deriving carbon and nitrogen abundances, in order to maximize the information on the multiple populations. This approach has proven to be very effective in (i) detecting multiple populations, (ii) characterizing their global properties (i.e., relative fraction of stars, location in the color-magnitude diagram, spatial distribution, and trends with cluster parameters) and (iii) precisely tagging their chemical properties (i.e., extension of the C-N anticorrelation, bimodalities in the N content).

Relevance:

10.00%

Publisher:

Abstract:

The aim of this thesis was to synthesize multipotent drugs for the treatment of Alzheimer's disease (AD) and of benign prostatic hyperplasia (BPH), two diseases that affect the elderly. AD is a neurodegenerative disorder that is characterized, among other factors, by loss of cholinergic neurons. Selective activation of M1 receptors through an allosteric site could compensate for the cholinergic hypofunction, improving cognition in AD patients. We describe here the discovery and SAR of a novel series of quinone derivatives. Among them, compound 1 was the most interesting, being a highly M1-selective positive allosteric modulator. At 100 nM, 1 tripled the production of cAMP induced by oxotremorine. Moreover, it inhibited AChE and displayed antioxidant properties. Site-directed mutagenesis experiments indicated that 1 acts at an allosteric site involving residue F77. Thus, 1 is a promising drug because M1 activation may offer disease-modifying properties that could address and reduce most of the AD hallmarks. BPH is an enlargement of the prostate caused by increased cellular growth. Blockade of α1-ARs is the predominant form of medical therapy for the treatment of the symptoms associated with BPH. α1-ARs are classified into three subtypes: the α1A- and α1D-AR subtypes are predominant in the prostate, while α1B-ARs regulate blood pressure. Herein, we report the synthesis of quinazoline derivatives obtained by replacing the piperazine ring of doxazosin and prazosin with (S)- or (R)-3-aminopiperidine. The presence of a chiral center at the 3-C position of the piperidine ring allowed us to exploit the importance of stereochemistry in binding at α1-ARs. It turned out that the S configuration at the 3-C position of the piperidine increases the affinity of the compounds for all three α1-AR subtypes, whereas the configuration at the benzodioxole ring of the doxazosin derivatives is not critical for the interaction with α1-ARs.

Relevance:

10.00%

Publisher:

Abstract:

This thesis presents several data processing and compression techniques capable of addressing the strict requirements of wireless sensor networks. After a general overview of sensor networks, the energy problem is introduced, dividing the different energy reduction approaches according to the subsystem they try to optimize. To manage the complexity brought by these techniques, a quick overview of the most common middlewares for WSNs is given, describing in detail SPINE2, a framework for data processing in the node environment. The focus then shifts to in-network aggregation techniques, used to reduce the data sent by the network nodes and to prolong the network lifetime as much as possible. Among the several techniques, the most promising approach is Compressive Sensing (CS). To investigate this technique, a practical implementation of the algorithm is compared against a simpler aggregation scheme, deriving a mixed algorithm able to successfully reduce the power consumption. The analysis then moves from compression implemented on single nodes to CS for signal ensembles, trying to exploit the correlations among sensors and nodes to improve compression and reconstruction quality. The two main techniques for signal ensembles, Distributed CS (DCS) and Kronecker CS (KCS), are introduced and compared on a common set of data gathered from real deployments. The best trade-off between reconstruction quality and power consumption is then investigated. The use of CS is also addressed when the signal of interest is sampled at a sub-Nyquist rate, evaluating the reconstruction performance. Finally, group-sparsity CS (GS-CS) is compared with another well-known technique for the reconstruction of signals from a highly sub-sampled version. These two frameworks are again compared on a real data set, and an analysis of the trade-off between reconstruction quality and lifetime is given.
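
To fix ideas, the numpy sketch below shows the basic single-node CS pipeline: a sparse signal is encoded on the node with a few random projections (one cheap matrix multiply) and recovered at the sink with a greedy Orthogonal Matching Pursuit solver. The sizes and the hand-rolled OMP are illustrative assumptions; the thesis's mixed aggregation scheme and the DCS/KCS ensemble techniques are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse "sensor signal": n samples, only k of them non-zero (e.g. a signal that
# is sparse in some transform domain after preprocessing).
n, k, m = 256, 8, 64                      # signal length, sparsity, measurements
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)

# Encoding on the node: m << n random projections (one matrix-vector product).
Phi = rng.normal(0, 1 / np.sqrt(m), size=(m, n))
y = Phi @ x

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily pick the column most correlated
    with the residual, then re-fit by least squares on the chosen support."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k)
print("relative reconstruction error:",
      np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```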

Relevance:

10.00%

Publisher:

Abstract:

This thesis analyses problems related to the applicability, in business environments, of Process Mining tools and techniques. The first contribution is a presentation of the state of the art of Process Mining and a characterization of companies in terms of their "process awareness". The work continues by identifying the circumstances where problems can emerge: data preparation, actual mining, and results interpretation. Other problems are the configuration of parameters by non-expert users and computational complexity. We concentrate on two possible scenarios: "batch" and "on-line" Process Mining. Concerning batch Process Mining, we first investigated the data preparation problem and proposed a solution for the identification of the "case-ids" whenever this field is not explicitly indicated. After that, we concentrated on problems at mining time and proposed a generalization of a well-known control-flow discovery algorithm in order to exploit non-instantaneous events. The use of interval-based recording leads to an important improvement in performance. Later on, we report our work on parameter configuration for non-expert users. We present two approaches to select the "best" parameter configuration: one is completely autonomous; the other requires human interaction to navigate a hierarchy of candidate models. Concerning data interpretation and results evaluation, we propose two metrics: a model-to-model metric and a model-to-log metric. Finally, we present an automatic approach for the extension of a control-flow model with social information, in order to simplify the analysis of these perspectives. The second part of this thesis deals with control-flow discovery algorithms in on-line settings. We propose a formal definition of the problem and two baseline approaches. Two actual mining algorithms are proposed: the first is an adaptation of a frequency counting algorithm to the control-flow discovery problem; the second constitutes a framework of models which can be used for different kinds of streams (stationary versus evolving).
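
To illustrate what adapting a frequency counting algorithm to control-flow discovery can look like in the on-line setting, the Python sketch below keeps an approximate, budget-bounded count of directly-follows relations observed on an event stream, periodically pruning rare relations so that memory stays bounded. The class, the pruning rule and the toy stream are assumptions made here in the spirit of frequency-counting stream algorithms; they are not the algorithm actually developed in the thesis.

```python
from collections import defaultdict

class OnlineDirectlyFollows:
    """Approximate, budget-bounded counting of directly-follows relations from an
    event stream of (case_id, activity) pairs."""
    def __init__(self, max_relations=1000):
        self.last_activity = {}                 # case_id -> last seen activity
        self.counts = defaultdict(int)          # (a, b) -> approximate frequency
        self.max_relations = max_relations

    def observe(self, case_id, activity):
        prev = self.last_activity.get(case_id)
        if prev is not None:
            self.counts[(prev, activity)] += 1
            if len(self.counts) > self.max_relations:
                self._shrink()
        self.last_activity[case_id] = activity

    def _shrink(self):
        # Budget exceeded: decrement every counter and drop those that reach zero,
        # so only frequent relations survive (approximate, bounded memory).
        for rel in list(self.counts):
            self.counts[rel] -= 1
            if self.counts[rel] <= 0:
                del self.counts[rel]

    def frequent_relations(self, min_count=2):
        return {rel: c for rel, c in self.counts.items() if c >= min_count}

# Toy stream of events from two interleaved cases.
stream = [("c1", "register"), ("c2", "register"), ("c1", "check"),
          ("c2", "check"), ("c1", "decide"), ("c2", "decide"),
          ("c1", "notify"), ("c2", "notify")]
miner = OnlineDirectlyFollows(max_relations=50)
for case, act in stream:
    miner.observe(case, act)
print(miner.frequent_relations())
```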