Abstract:
Generic programming is likely to become a new challenge for a critical mass of developers. It is therefore crucial to refine the support for generic programming in mainstream Object-Oriented languages, at both the design and the implementation level, and to suggest novel ways to exploit the additional expressiveness made available by genericity. This study is meant to contribute towards bringing Java genericity to a more mature stage with respect to mainstream programming practice, by increasing the effectiveness of its implementation and by revealing its full expressive power in real-world scenarios. With respect to the current research setting, the main contribution of the thesis is twofold. First, we propose a revised implementation of Java generics that greatly increases the expressiveness of the Java platform by adding reification support for generic types. Secondly, we show how Java genericity can be leveraged in a real-world case study in the context of multi-paradigm language integration. Several approaches have been proposed to overcome the lack of reification of generic types in the Java programming language. Existing approaches tackle the problem by defining new translation techniques that allow for a runtime representation of generics and wildcards. Unfortunately, most approaches suffer from several problems: heterogeneous translations are known to be problematic when considering reification of generic methods and wildcards. On the other hand, more sophisticated techniques, which require changes to the Java runtime, support reified generics through a true language extension (where clauses), so that backward compatibility is compromised.
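The erasure problem the abstract refers to can be shown with standard Java: after compilation, every instantiation of a generic class shares one runtime class, so type arguments cannot be inspected at run time. A minimal illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    // Erasure means we cannot write "data instanceof List<String>":
    // the type argument is gone at run time.
    public static boolean sameRuntimeClass() {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();
        // Both instantiations share the single erased class ArrayList.
        return strings.getClass() == ints.getClass();
    }

    public static void main(String[] args) {
        System.out.println(sameRuntimeClass()); // true: type arguments are erased
    }
}
```

A reified implementation, as proposed in the thesis, would instead keep distinct runtime representations for `List<String>` and `List<Integer>`, making such type tests possible.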
In this thesis we develop a sophisticated type-passing technique for addressing the problem of reification of generic types in the Java programming language; this approach, first pioneered by the so-called EGO translator, is here turned into a full-blown solution which reifies generic types inside the Java Virtual Machine (JVM) itself, thus overcoming both the performance penalties and the compatibility issues of the original EGO translator. Java-Prolog integration. Integrating Object-Oriented and declarative programming has been the subject of several research efforts and corresponding technologies. Such proposals come in two flavours: either attempting to join the two paradigms, or simply providing an interface library for accessing Prolog declarative features from a mainstream Object-Oriented language such as Java. Both solutions, however, have drawbacks: in the case of hybrid languages featuring both Object-Oriented and logic traits, the resulting language is typically too complex, making mainstream application development a harder task; in the case of library-based integration approaches there is no true language integration, and some “boilerplate code” has to be written to bridge the paradigm mismatch. In this thesis we develop a framework called PatJ which promotes seamless exploitation of Prolog programming in Java. A sophisticated usage of generics/wildcards makes it possible to define a precise mapping between Object-Oriented and declarative features. PatJ defines a hierarchy of classes in which the bidirectional semantics of Prolog terms is modelled directly at the level of the Java generic type system.
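The abstract does not spell out PatJ's actual API; the following is only a minimal sketch of the general idea of modelling Prolog terms as a generic class hierarchy, with all class and field names invented for illustration. The point is that a compound term's argument types are tracked statically, so no runtime casts are needed when decomposing a term:

```java
// Hypothetical sketch: names are illustrative, not PatJ's real API.
abstract class Term {}

final class Atom extends Term {
    final String name;
    Atom(String name) { this.name = name; }
    @Override public String toString() { return name; }
}

// A binary compound term whose argument types are carried by generics.
final class Compound2<A extends Term, B extends Term> extends Term {
    final String functor; final A arg1; final B arg2;
    Compound2(String functor, A arg1, B arg2) {
        this.functor = functor; this.arg1 = arg1; this.arg2 = arg2;
    }
    @Override public String toString() {
        return functor + "(" + arg1 + ", " + arg2 + ")";
    }
}

public class PatJSketch {
    public static void main(String[] args) {
        Compound2<Atom, Atom> parent =
            new Compound2<>("parent", new Atom("abraham"), new Atom("isaac"));
        // The type system statically knows arg1 is an Atom; no cast needed.
        Atom father = parent.arg1;
        System.out.println(parent);      // parent(abraham, isaac)
        System.out.println(father.name); // abraham
    }
}
```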
Abstract:
Tissue engineering is a discipline that aims at regenerating damaged biological tissues by using a cell construct engineered in vitro, made of cells grown in a porous 3D scaffold. The role of the scaffold is to guide cell growth and differentiation by acting as a bioresorbable temporary substrate that will eventually be replaced by new tissue produced by the cells. As a matter of fact, obtaining a successful engineered tissue requires a multidisciplinary approach that integrates the basic principles of biology, engineering and materials science. The present Ph.D. thesis aimed at developing and characterizing innovative polymeric bioresorbable scaffolds made of hydrolysable polyesters. The potentialities of both commercial polyesters (i.e. poly-ε-caprolactone, polylactide and some lactide copolymers) and non-commercial polyesters (i.e. poly-ω-pentadecalactone and some of its copolymers) were explored and discussed. Two techniques were employed to fabricate scaffolds: supercritical carbon dioxide (scCO2) foaming and electrospinning (ES). The former is a powerful technology that makes it possible to produce 3D microporous foams while avoiding the use of solvents that can be toxic to mammalian cells. The scCO2 process, which is commonly applied to amorphous polymers, was successfully modified to foam a highly crystalline poly(ω-pentadecalactone-co-ε-caprolactone) copolymer, and the effect of process parameters on scaffold morphology and thermo-mechanical properties was investigated. In the course of the present research activity, sub-micrometric fibrous non-woven meshes were produced using ES technology. Electrospun materials are considered highly promising scaffolds because they resemble the 3D organization of the native extracellular matrix. Careful control of process parameters allowed the fabrication of defect-free fibres with diameters ranging from hundreds of nanometres to several microns, having either smooth or porous surfaces.
Moreover, the versatility of ES technology enabled the production of electrospun scaffolds from different polyesters, as well as of “composite” non-woven meshes obtained by concomitantly electrospinning fibres differing in both morphology and polymer material. The 3D architecture of the electrospun scaffolds fabricated in this research was controlled in terms of mutual fibre orientation by suitably modifying the instrumental apparatus. This aspect is particularly interesting since the micro/nano-architecture of the scaffold is known to affect cell behaviour. Since last-generation scaffolds are expected to induce specific cell responses, the present research activity also explored the possibility of producing electrospun scaffolds that are bioactive towards cells. Bio-functionalized substrates were obtained by loading polymer fibres with growth factors (i.e. biomolecules that elicit specific cell behaviours), and it was demonstrated that, despite the high voltages applied during electrospinning, the growth factor retains its biological activity once released from the fibres upon contact with the cell culture medium. A second functionalization approach, ultimately aimed at controlling cell adhesion on electrospun scaffolds, consisted of covering the fibre surface with highly hydrophilic polymer brushes of glycerol monomethacrylate synthesized by Atom Transfer Radical Polymerization. Future investigations will exploit the hydroxyl groups of the polymer brushes to functionalize the fibre surface with desired biomolecules. Electrospun scaffolds were employed in cell culture experiments, performed in collaboration with biochemical laboratories, aimed at evaluating the biocompatibility of new electrospun polymers and at investigating the effect of fibre orientation on cell behaviour.
Moreover, at a preliminary stage, electrospun scaffolds were also cultured with mammalian tumour cells to develop in vitro tumour models aimed at better understanding the role of the natural ECM in tumour malignancy in vivo.
Abstract:
This thesis deals with Context-Aware Services, Smart Environments, Context Management and solutions for Device and Service Interoperability. Multi-vendor devices offer an increasing number of services and end-user applications that base their value on the ability to exploit information originating from the surrounding environment by means of an increasing number of embedded sensors, e.g. GPS, compass, RFID readers, cameras and so on. However, such devices are usually not able to exchange information because of the lack of shared data storage and common information exchange methods. A large number of standards and domain-specific building blocks are available and are heavily used in today's products. However, the use of these solutions based on ready-to-use modules is not without problems. The integration and cooperation of different kinds of modules can be daunting because of growing complexity and dependency. In this scenario it is interesting to have an infrastructure that makes the coexistence of multi-vendor devices easy, while enabling low-cost development and smooth access to services. This sort of technology glue should reduce both software and hardware integration costs by removing the trouble of interoperability. The result should also lead to faster and simplified design, development and deployment of cross-domain applications. This thesis is mainly focused on SW architectures supporting context-aware service providers, especially on the following subjects:
- service adaptation to user preferences
- context management
- content management
- information interoperability
- multi-vendor device interoperability
- communication and connectivity interoperability
Experimental activities were carried out in several domains including Cultural Heritage and indoor and personal smart spaces, all of which are considered significant test-beds in Context-Aware Computing.
The work evolved within European and national projects: on the European side, I carried out my research activity within EPOCH, the FP6 Network of Excellence on “Processing Open Cultural Heritage”, and within SOFIA, a project of the ARTEMIS JU on embedded systems. I worked in cooperation with several international establishments, including the University of Kent, VTT (the Technical Research Centre of Finland) and Eurotech. On the national side, I contributed to a one-to-one research contract between ARCES and Telecom Italia. The first part of the thesis focuses on the problem statement and related work, and addresses interoperability issues and related architecture components. The second part focuses on specific architectures and frameworks:
- MobiComp: a context management framework that I used in cultural heritage applications
- CAB: a context-, preference- and profile-based application broker which I designed within the EPOCH Network of Excellence
- M3: a Semantic Web based information sharing infrastructure for smart spaces designed by Nokia within the European project SOFIA
- NoTA: a service- and transport-independent connectivity framework
- OSGi: the well-known Java-based service support framework
The final section is dedicated to the middleware, the tools and the SW agents developed during my doctorate to support context-aware services in smart environments.
Abstract:
The development of safe, high-energy and high-power electrochemical energy-conversion systems can be a response to the worldwide demand for clean and low-fuel-consumption transport. This thesis work, starting from basic studies on ionic liquid (IL) electrolytes and carbon electrodes and concluding with tests on large-size IL-based supercapacitor prototypes, demonstrated that the IL-based asymmetric configuration (AEDLC) is a powerful strategy to develop safe, high-energy supercapacitors that might compete with lithium-ion batteries in power-assist hybrid electric vehicles (HEVs). The increase of specific energy in EDLCs was achieved following three routes: i) the use of hydrophobic ionic liquids (ILs) as electrolytes; ii) the design and preparation of carbon electrode materials with tailored morphology and surface chemistry to feature a high capacitance response in IL; and iii) the asymmetric double-layer carbon supercapacitor configuration (AEDLC), which consists of assembling the supercapacitor with different carbon loadings at the two electrodes in order to exploit the wide electrochemical stability window (ESW) of the IL and to reach a high maximum cell voltage (Vmax). Among the various ILs investigated, N-methoxyethyl-N-methylpyrrolidinium bis(trifluoromethanesulfonyl)imide (PYR1(2O1)TFSI) was selected because of its hydrophobicity and high thermal stability up to 350 °C, together with good conductivity and a wide ESW, exploitable over a wide temperature range extending below 0 °C. Owing to these exceptional properties, PYR1(2O1)TFSI was used throughout the study to develop large-size IL-based carbon supercapacitor prototypes. This work also highlights that the use of ILs leads to different chemical-physical properties at the electrode/electrolyte interface with respect to those formed by conventional electrolytes.
Indeed, the absence of solvent in ILs means that the properties of the interface are not mediated by a solvent; thus, the dielectric constant and the double-layer thickness strictly depend on the chemistry of the IL ions. The study of carbon electrode materials evidenced several factors that have to be taken into account when designing high-performance carbon electrodes for ILs. The heat treatment in inert atmosphere of the activated carbon AC, which gave the ACT carbon featuring ca. 100 F/g in IL, demonstrated the importance of surface chemistry in the capacitive response of carbons in hydrophobic ILs. The tailored mesoporosity of the xerogel carbons is a key parameter for achieving a high capacitance response. The CO2-treated xerogel carbon X3a featured a high specific capacitance of 120 F/g in PYR14TFSI; however, because of its high pore volume, an excess of IL is required to fill the pores with respect to that necessary for the charge-discharge process. Further advances were achieved with electrodes based on the disordered template carbon DTC7, with a pore size distribution centred at 2.7 nm, which featured a notably high specific capacitance of 140 F/g in PYR14TFSI and a moderate pore volume (0.70 cm³/g for pores wider than 1.5 nm). This thesis work demonstrated that by means of the asymmetric configuration (AEDLC) it was possible to reach cell voltages as high as 3.9 V. Indeed, IL-based AEDLCs with the X3a or ACT carbon electrodes exhibited specific energy and power of ca. 30 Wh/kg and 10 kW/kg, respectively. The DTC7 carbon electrodes, featuring a capacitance response 20%-40% higher than those of X3a and ACT, respectively, enabled the development of a PYR14TFSI-based AEDLC with specific energy and power of 47 Wh/kg and 13 kW/kg at 60 °C with a Vmax of 3.9 V. Given the availability of the ACT carbon (obtained from a commercial material), the PYR1(2O1)TFSI-based AEDLCs assembled with ACT carbon electrodes were selected within the EU ILHYPOS project for the development of large-size prototypes.
This study demonstrated that the PYR1(2O1)TFSI-based AEDLC can operate between -30 °C and +60 °C, and its cycling stability was proved at 60 °C over 27,000 cycles with a high Vmax of up to 3.8 V. This AEDLC was further investigated following the USABC and DOE FreedomCAR reference protocols for HEVs to evaluate its dynamic pulse-power and energy features. It was demonstrated that, with a Vmax of 3.7 V, the challenging energy and power targets stated by the DOE for power-assist HEVs are met at T > 30 °C, and the standards for the 12V-TSS, 42V-FSS and TPA 2-s pulse applications at T > 0 °C, provided the ratio wmodule/wSC = 2 is achieved, which is, however, a very demanding condition. Finally, suggestions for further advances in IL-based AEDLC performance were identified. In particular, given that the main contribution to the ESR is the electrode charging resistance, which in turn is affected by the ionic resistance in the pores, itself modulated by pore length, pore geometry is a key parameter in carbon design: not only does it define the carbon surface, but it can also differentially “amplify” the effect of IL conductivity on the electrode charge-discharge process and, thus, on the supercapacitor time constant.
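The leverage obtained from a high maximum cell voltage follows directly from the standard double-layer capacitor relations (textbook formulas, not taken from the abstract):

```latex
E_{\max} = \tfrac{1}{2}\, C_{\mathrm{cell}}\, V_{\max}^{2}, \qquad
P_{\max} = \frac{V_{\max}^{2}}{4\,\mathrm{ESR}}, \qquad
Q_{+} = Q_{-} \;\Rightarrow\; C_{+} V_{+} = C_{-} V_{-}
```

Since energy and power grow with the square of the voltage, raising Vmax from the ca. 2.7 V typical of conventional organic electrolytes to the 3.9 V reported here roughly doubles the stored energy, $(3.9/2.7)^{2} \approx 2.1$; the charge-balance condition $C_{+}V_{+} = C_{-}V_{-}$ is precisely what the different carbon loadings at the two electrodes of the AEDLC enforce.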
Abstract:
The purpose of this thesis is to develop a robust and powerful method to classify galaxies from large surveys, in order to establish and confirm the connections between the principal observational parameters of galaxies (spectral features, colours, morphological indices), and to help unveil the evolution of these parameters from $z \sim 1$ to the local Universe. Within the framework of the zCOSMOS-bright survey, and making use of its large database of objects ($\sim 10\,000$ galaxies in the redshift range $0 < z \lesssim 1.2$) and its great reliability in the determination of redshifts and spectral properties, we first adopt and extend the \emph{classification cube method}, as developed by Mignoli et al. (2009), to exploit the bimodal properties of galaxies (spectral, photometric and morphological) separately, and then combine these three subclassifications. We use this classification method as a test for a newly devised statistical classification, based on Principal Component Analysis and the Unsupervised Fuzzy Partition clustering method (PCA+UFP), which is able to classify the galaxy population by exploiting its natural global bimodality, considering simultaneously up to 8 different properties. The PCA+UFP analysis is a very powerful and robust tool to probe the nature and the evolution of galaxies in a survey. It allows the classification of galaxies to be defined with smaller uncertainties, and it adds the flexibility to be adapted to different parameters: being a fuzzy classification, it avoids the problems of a hard classification, such as the classification cube presented in the first part of this work. The PCA+UFP method can easily be applied to different datasets: it does not rely on the nature of the data, and for this reason it can be successfully employed with other observables (magnitudes, colours) or derived properties (masses, luminosities, SFRs, etc.). The agreement between the two classification cluster definitions is very high.
``Early'' and ``late'' type galaxies are well defined by their spectral, photometric and morphological properties, both when considering these separately and then combining the classifications (classification cube) and when treating them as a whole (PCA+UFP cluster analysis). Differences arise in the definition of outliers: the classification cube is much more sensitive to single measurement errors or misclassifications in one property than the PCA+UFP cluster analysis, in which errors are ``averaged out'' during the process. This method allowed us to observe the \emph{downsizing} effect taking place in the PC spaces: the migration from the blue cloud towards the red clump happens at higher redshifts for galaxies of larger mass. The determination of the transition mass $M_{\mathrm{cross}}$ is in good agreement with other values in the literature.
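The abstract does not reproduce the UFP equations; as a point of reference, the membership update of the closely related standard fuzzy c-means scheme, which realizes the ``soft'' assignment contrasted above with hard classification, reads:

```latex
u_{ik} \;=\; \left[ \sum_{j=1}^{c} \left( \frac{d_{ik}}{d_{jk}} \right)^{\frac{2}{m-1}} \right]^{-1},
\qquad \sum_{i=1}^{c} u_{ik} = 1
```

Here $d_{ik}$ is the distance of galaxy $k$ from cluster centre $i$ (computed in the PCA space), $c$ is the number of clusters, and $m > 1$ is the fuzzifier; as $m \to 1$ the memberships collapse to the hard 0/1 assignment of a classification-cube-like scheme, which is why single measurement errors are ``averaged out'' rather than flipping the class.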
Abstract:
The subject of this Ph.D. research thesis is the development and application of multiplexed analytical methods based on bioluminescent whole-cell biosensors. One of the main goals of analytical chemistry is multianalyte testing, in which two or more analytes are measured simultaneously in a single assay. The advantages of multianalyte testing are work simplification, high throughput, and reduction of the overall cost per test. The availability of multiplexed portable analytical systems is of particular interest for the on-field analysis of clinical, environmental or food samples, as well as for the drug discovery process. To allow highly sensitive and selective analysis, these devices should combine biospecific molecular recognition with ultrasensitive detection systems. To address the current need for rapid, highly sensitive and inexpensive devices for obtaining more data from each sample, genetically engineered whole-cell biosensors as biospecific recognition elements were combined with ultrasensitive bioluminescence detection techniques. Genetically engineered cell-based sensing systems were obtained by introducing into bacterial, yeast or mammalian cells a vector expressing a reporter protein whose expression is controlled by regulatory proteins and promoter sequences. The regulatory protein is able to recognize the presence of the analyte (e.g., compounds with hormone-like activity, heavy metals…) and to consequently activate the expression of the reporter protein, which can be readily measured and directly related to the bioavailable concentration of the analyte in the sample. Bioluminescence represents the ideal detection principle for miniaturized analytical devices and multiplexed assays thanks to its high detectability in small sample volumes, allowing accurate signal localization and quantification.
The first chapter of this dissertation discusses the development of improved bioluminescent proteins emitting at different wavelengths, in terms of increased thermostability, enhanced emission decay kinetics and spectral resolution. The second chapter is mainly focused on the use of these proteins in the development of whole-cell based assays with improved analytical performance. In particular, since the main drawback of whole-cell biosensors is the high variability of their analyte-specific response, mainly caused by variations in cell viability due to nonspecific effects of the sample matrix, an additional bioluminescent reporter was introduced to correct the analytical response, thus increasing the robustness of the bioassays. The feasibility of using a combination of two or more bioluminescent proteins to obtain biosensors with internal signal correction, or for the simultaneous detection of multiple analytes, was demonstrated by developing a dual-reporter yeast-based biosensor for androgenic activity measurement and a triple-reporter mammalian cell-based biosensor for the simultaneous monitoring of the activation of two CYP450 enzymes involved in cholesterol degradation, using two spectrally resolved intracellular luciferases and a secreted luciferase as a control for cell viability. The third chapter presents the development of a portable multianalyte detection system. In order to develop a portable system that can be used outside the laboratory environment, even by non-skilled personnel, cells were immobilized in a new biocompatible and transparent polymeric matrix within a modified clear-bottom black 384-well microtiter plate to obtain a bioluminescent cell array. The cell array was placed in contact with a portable charge-coupled device (CCD) light sensor able to localize and quantify the luminescent signal produced by the different bioluminescent whole-cell biosensors.
This multiplexed biosensing platform containing whole-cell biosensors was successfully used to measure the overall toxicity of a given sample, as well as to obtain dose-response curves for heavy metals and to detect hormonal activity in clinical samples (PCT/IB2010/050625: “Portable device based on immobilized cells for the detection of analytes.” Michelini E, Roda A, Dolci LS, Mezzanotte L, Cevenini L, 2010). At the end of the dissertation some future development steps are also discussed, with the aim of producing a point-of-care testing (POCT) device that combines portability, minimal sample pre-treatment and highly sensitive multiplexed assays within a short assay time. In this POCT perspective, field-flow fractionation (FFF) techniques, in particular the gravitational variant (GrFFF), which exploits the Earth's gravitational field to structure the separation, were investigated for cell fractionation, characterization and isolation. Thanks to the simplicity of its equipment, amenable to miniaturization, the GrFFF technique appears particularly suited for implementation in POCT devices, and it may be used as an integrated pre-analytical module, applied directly to raw samples, to drive target analytes to the modules where biospecific recognition reactions based on ultrasensitive bioluminescence detection occur, providing an increase in overall analytical output.
Abstract:
This work presents exact, hybrid algorithms for mixed resource Allocation and Scheduling problems; in general terms, these consist of assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability, but are mainly motivated by applications in the field of Embedded System Design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance and low energy consumption, but the programmer must be able to effectively exploit the platform parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform an optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part explained by the complexity of integrated allocation and scheduling, which sets tough challenges for ``pure'' combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods, while compensating for their weaknesses. In this work, we first consider an Allocation and Scheduling problem for the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation and heuristic-guided search.
Next, we face the Allocation and Scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to effectively deal with the introduced stochastic elements. Finally, we address Allocation and Scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict detection method. The proposed approaches achieve good results on practical-size problems, thus demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
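The core problem shape, precedence-connected activities competing for a finite-capacity resource, can be made concrete with a toy greedy serial scheduler. This is only a heuristic sketch for illustration (all names invented), not the exact hybrid CP/OR methods the thesis develops:

```java
import java.util.*;

// Greedy serial scheduler: activities with durations, resource demands and
// precedence constraints are placed at the earliest time where both the
// predecessors are finished and the capacity profile fits.
public class ListScheduler {
    record Activity(String id, int dur, int demand, List<String> preds) {}

    /** Returns start times; activities must be listed in topological order. */
    static Map<String, Integer> schedule(List<Activity> acts, int capacity, int horizon) {
        int[] usage = new int[horizon];
        Map<String, Integer> start = new HashMap<>();
        Map<String, Integer> end = new HashMap<>();
        for (Activity a : acts) {
            int earliest = 0;
            for (String p : a.preds()) earliest = Math.max(earliest, end.get(p));
            int t = earliest;                 // slide forward until capacity fits
            while (!fits(usage, t, a.dur(), a.demand(), capacity)) t++;
            for (int u = t; u < t + a.dur(); u++) usage[u] += a.demand();
            start.put(a.id(), t);
            end.put(a.id(), t + a.dur());
        }
        return start;
    }

    private static boolean fits(int[] usage, int t, int dur, int demand, int cap) {
        for (int u = t; u < t + dur; u++) if (usage[u] + demand > cap) return false;
        return true;
    }

    public static void main(String[] args) {
        List<Activity> acts = List.of(
            new Activity("A", 2, 2, List.of()),
            new Activity("B", 2, 2, List.of()),
            new Activity("C", 1, 1, List.of("A", "B")));
        // Capacity 3: A starts at 0, B cannot overlap (2+2 > 3) so it starts
        // at 2, and C waits for both predecessors, starting at 4.
        System.out.println(schedule(acts, 3, 20));
    }
}
```

An exact approach would instead search over all orderings (e.g. with CP cumulative constraints and branching), which is what makes integrated allocation and scheduling so challenging for pure combinatorial methods.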
Abstract:
The term Ambient Intelligence (AmI) refers to a vision of the future of the information society where smart, electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday activities, tasks and rituals in an easy, natural way, using information and intelligence that is hidden in the network connecting these devices. This promotes the creation of pervasive environments that improve the quality of life of the occupants and enhance the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile sensors is embedded in the environment, Wireless Sensor Networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes which can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontrollers, FPGAs etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy-scavenging modules). WSNs promise to revolutionize the interactions between the real physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed.
Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enables them to reduce the interference on the sensed physical phenomena and allows easy and low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local “understanding” of the observed phenomena. A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before transmission due to the scarce bandwidth of radio interfaces. In particular, in video surveillance it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, which has the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something “interesting” is detected). The energy cost of image processing must, however, be carefully minimized. Imaging plays an important role in sensing devices for ambient intelligence. Computer vision can, for instance, be used for recognising persons and objects and for recognising behaviours such as illness and rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis. More eyes see more than one, and a camera system that can observe a scene from multiple directions is able to overcome occlusion problems and to describe objects in their true 3D appearance. Performing these approaches in real time is a recently opened field of research.
In this thesis we pay attention to the realities of hardware/software technologies and to the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics, outlined below. Although the design of a sensor network and its sensor nodes is strictly application dependent, a number of constraints should almost always be considered. Among them:
• Small form factor, to reduce node intrusiveness.
• Low power consumption, to reduce battery size and to extend node lifetime.
• Low cost, for widespread diffusion.
These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, with which only simple data processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through the adoption of ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low-Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance. Low-Power Video Sensor Nodes and Video Processing Algorithms: in comparison to scalar sensors, such as temperature, pressure, humidity, velocity and acceleration sensors, vision sensors generate much higher-bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes.
We have designed and developed wireless video sensor nodes focusing on small size and on flexibility of reuse across different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first is based on a low-cost, low-power FPGA + microcontroller system-on-chip; the second is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate continuously with a Li-Polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes are presented which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally. Featuring such intelligence, these nodes can cope with tasks such as recognizing unattended bags in airports or persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data. Multimodal surveillance: In several setups the use of wired video cameras may not be possible, and energy efficiency for wireless smart camera networks is one of the major efforts of the distributed monitoring and surveillance community. For this reason, building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community.
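As a loose illustration of the SVM-based detection step, the sketch below trains a linear SVM with a Pegasos-style subgradient method (hinge loss plus L2 regularization) on toy two-dimensional features. The feature vectors, dimensions and hyper-parameters are all invented for the example and bear no relation to the thesis's actual detector.

```python
def train_linear_svm(data, labels, lam=0.01, epochs=200):
    """Pegasos-style subgradient training of a linear SVM."""
    dim = len(data[0])
    w = [0.0] * dim
    b = 0.0
    t = 0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            # L2 shrinkage on every step
            w = [(1 - eta * lam) * wi for wi in w]
            if margin < 1:  # hinge-loss subgradient step
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
                b += eta * y
    return w, b

def classify(w, b, x):
    """Predict +1 ("human present") or -1 ("background")."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Toy, linearly separable "feature vectors" (e.g. edge-histogram summaries):
pos = [[2.0, 2.2], [1.8, 2.6], [2.4, 1.9]]   # "human present"
neg = [[0.2, 0.3], [0.5, 0.1], [0.1, 0.6]]   # "background"
w, b = train_linear_svm(pos + neg, [1, 1, 1, -1, -1, -1])
```

On a real node, the feature extraction (and often the dot product itself) would run on the FPGA or microcontroller, with the learned weights stored in program memory.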
Pyroelectric Infra-Red (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend node lifetime and can possibly result in continuous operation of the node. Being low-cost, passive (thus low-power) and of limited form factor, PIR sensors are well suited for WSN applications. Moreover, aggressive power management policies are essential for achieving long-term operation of standalone distributed cameras. We have used an adaptive controller based on Model Predictive Control (MPC) to improve the system's performance, outperforming naive power management policies.
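The interplay between energy-level-dependent triggering and predictive power management can be caricatured as a receding-horizon duty-cycle policy. The sketch below is a toy stand-in, not the MPC formulation used in the thesis: all power figures and thresholds are invented, and a real MPC controller would solve an optimization problem at each step rather than scan a fixed set of constant duty cycles.

```python
# Invented units: energy in joules per time slot.
P_ACTIVE, P_SLEEP = 500.0, 5.0      # per-slot energy at full duty vs. sleep
E_MIN, E_MAX = 1000.0, 10000.0      # battery safety floor and capacity

def step_battery(e, duty, harvest):
    """One-slot battery model: harvest in, duty-weighted consumption out."""
    e = e + harvest - duty * P_ACTIVE - (1 - duty) * P_SLEEP
    return min(e, E_MAX)

def mpc_duty(e0, forecast, duties=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Pick the largest constant duty cycle that keeps the predicted
    battery level above E_MIN over the whole forecast horizon; the first
    action is applied and the horizon recedes at the next slot."""
    best = 0.0
    for d in duties:
        e, feasible = e0, True
        for h in forecast:            # roll the model over the horizon
            e = step_battery(e, d, h)
            if e < E_MIN:
                feasible = False
                break
        if feasible and d > best:
            best = d
    return best
```

With a generous solar forecast the policy runs the camera continuously; with no predicted harvest and a nearly depleted battery it falls back to sleep, which is exactly the behaviour a naive threshold policy cannot anticipate ahead of time.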
Abstract:
Two of the main features of today's complex software systems, such as pervasive computing systems and Internet-based applications, are distribution and openness. Distribution revolves around three orthogonal dimensions: (i) distribution of control: systems are characterised by several independent computational entities and devices, each representing an autonomous and proactive locus of control; (ii) spatial distribution: entities and devices are physically distributed and connected in a global (such as the Internet) or local network; and (iii) temporal distribution: interacting system components come and go over time, and are not required to be available for interaction at the same time. Openness deals with the heterogeneity and dynamism of system components: complex computational systems are open to the integration of diverse components, heterogeneous in terms of architecture and technology, and are dynamic since they allow components to be updated, added, or removed while the system is running. The engineering of open and distributed computational systems mandates the adoption of a software infrastructure whose underlying model and technology provide the required level of uncoupling among system components. This is the main motivation behind current research trends in the area of coordination middleware that exploit tuple-based coordination models in the engineering of complex software systems, since they intrinsically provide coordinated components with communication uncoupling. An additional daunting challenge for tuple-based models comes from knowledge-intensive application scenarios, namely scenarios where most of the activities are based on knowledge in some form, and where knowledge becomes the prominent means by which systems get coordinated.
Handling knowledge in tuple-based systems induces problems in terms of syntax (e.g., two tuples containing the same data may not match due to differences in tuple structure) and, mostly, of semantics (e.g., two tuples representing the same information may not match because different syntaxes are adopted). Until now, the problem has been faced by exploiting tuple-based coordination within middleware for knowledge-intensive environments, e.g., experiments with tuple-based coordination within a Semantic Web middleware and the analogous approaches surveyed in the literature. However, such systems appear to be designed to tackle coordination for specific application contexts, like the Semantic Web and Semantic Web Services, and they result in rather involved extensions of the tuple space model. The main goal of this thesis was to conceive a more general approach to semantic coordination. In particular, we developed the model and technology of semantic tuple centres. The tuple centre model is adopted as the main coordination abstraction to manage system interactions. A tuple centre can be seen as a programmable tuple space, i.e. an extension of a Linda tuple space where the behaviour of the tuple space can be programmed so as to react to interaction events. By encapsulating coordination laws within coordination media, tuple centres promote coordination uncoupling among coordinated components. The tuple centre model was then semantically enriched: a main design choice in this work was not to completely redesign the existing syntactic tuple space model, but rather to provide a smooth extension that, although supporting semantic reasoning, keeps tuples and tuple matching as simple as possible. By encapsulating the semantic representation of the domain of discourse within coordination media, semantic tuple centres promote semantic uncoupling among coordinated components.
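To make the "programmable tuple space" idea concrete, here is a minimal sketch of a Linda-like space whose behaviour can be programmed to react to interaction events. Every class name and API below is invented for illustration; the actual tuple centre model (with its reaction specification language and semantic matching) is far richer.

```python
class TupleCentre:
    """Minimal tuple-centre sketch: a Linda-like tuple space plus
    programmable reactions fired on interaction events."""

    def __init__(self):
        self.space = []
        self.reactions = []            # (event_name, template, body)

    def add_reaction(self, event, template, body):
        """Program the centre: run 'body' when 'event' hits a matching tuple."""
        self.reactions.append((event, template, body))

    def _matches(self, template, tup):
        # None in a template acts as a wildcard, like a Linda formal field.
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def _fire(self, event, tup):
        for ev, template, body in self.reactions:
            if ev == event and self._matches(template, tup):
                body(self, tup)

    def out(self, tup):                # Linda 'out': insert a tuple
        self.space.append(tup)
        self._fire("out", tup)

    def inp(self, template):           # non-blocking 'in': take a match
        for tup in self.space:
            if self._matches(template, tup):
                self.space.remove(tup)
                self._fire("in", tup)
                return tup
        return None

# Example reaction: whenever a temperature reading is emitted,
# also record a bookkeeping tuple, without the emitter knowing.
tc = TupleCentre()
tc.add_reaction("out", ("temp", None),
                lambda c, t: c.space.append(("count", 1)))
tc.out(("temp", 21))
```

The point of the sketch is the uncoupling: the component calling `out` knows nothing about the bookkeeping law, which lives entirely inside the coordination medium.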
The main contributions of the thesis are: (i) the design of the semantic tuple centre model; (ii) the implementation and evaluation of the model on top of an existing coordination infrastructure; (iii) a view of the application scenarios in which semantic tuple centres seem suitable as coordination media.
Abstract:
The "sustainability" concept relates to prolonging human economic systems with as little detrimental impact on ecological systems as possible. Construction that exhibits good environmental stewardship, and practices that conserve resources in a manner that allows growth and development to be sustained long-term without degrading the environment, are indispensable in a developed society. Past, current and future advancements in asphalt as an environmentally sustainable paving material are especially important because the quantities of asphalt used annually in Europe as well as in the U.S. are large. The asphalt industry is still developing technological improvements that will reduce the environmental impact without affecting the final mechanical performance. Warm mix asphalt (WMA) is a type of asphalt mix requiring lower production temperatures compared to hot mix asphalt (HMA), while aiming to maintain the desired post-construction properties of traditional HMA. Lowering the production temperature reduces fuel usage and emissions, thereby improving conditions for workers and supporting sustainable development. The crumb-rubber modifier (CRM), made from shredded automobile tires and used in the United States since the mid-1980s, has also proven to be an environmentally friendly alternative to conventional asphalt pavement. Furthermore, the use of waste tires is relevant not only environmentally but also for the engineering properties of asphalt [Pennisi E., 1992]. This research project aims to demonstrate the dual value of these asphalt mixes with regard to environmental and mechanical performance, and to suggest a low-environmental-impact design procedure. In fact, the use of eco-friendly materials is the first phase towards an eco-compatible design, but it cannot be the only step.
The eco-compatible approach should be extended to the design method and to material characterization, because only through these phases is it possible to exploit the maximum potential of the materials used. Appropriate asphalt concrete characterization is essential for realistic performance prediction of asphalt concrete pavements. Volumetric (mix design) and mechanical (permanent deformation and fatigue performance) properties are important factors to consider. Moreover, an advanced and efficient design method is necessary in order to use the material correctly. A Mechanistic-Empirical approach, consisting of a structural model capable of predicting the state of stresses and strains within the pavement structure under different traffic and environmental conditions, was the application of choice. In particular, this study focuses on CalME and its Incremental-Recursive (I-R) procedure, based on damage models for fatigue and permanent shear strain, related to surface cracking and to rutting respectively. It works in increments of time and, using the output from one increment recursively as input to the next, predicts the pavement conditions in terms of layer moduli, fatigue cracking, rutting and roughness. This software procedure was adopted in order to verify the mechanical properties of the study mixes and the reciprocal relationship between surface layer and pavement structure, in terms of fatigue and permanent deformation, under defined traffic and environmental conditions. The asphalt mixes studied were used in a pavement structure as a surface layer of 60 mm thickness. The performance of the pavement was compared to that of the same pavement structure with different kinds of asphalt concrete as surface layer. Three eco-friendly materials (two warm mix asphalts and a rubberized asphalt concrete) were analyzed in comparison to a conventional asphalt concrete.
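The incremental-recursive idea (the output of one increment fed back as input to the next) can be sketched with a deliberately invented one-layer damage law; the real CalME models for fatigue damage and permanent shear strain are far more elaborate, and every constant below is hypothetical.

```python
def run_incremental_recursive(e0, increments, traffic_per_inc, k=1e-4):
    """Toy I-R loop: the (damaged) modulus produced by one time increment
    is the input of the next, so damage accumulation feeds back on itself."""
    modulus, damage, history = e0, 0.0, []
    for _ in range(increments):
        strain = traffic_per_inc / modulus      # softer layer -> more strain
        damage = min(1.0, damage + k * strain)  # accumulate fatigue damage
        modulus = e0 * (1.0 - 0.5 * damage)     # damaged modulus for next step
        history.append(modulus)
    return history

# Hypothetical numbers: initial modulus 3000 (arbitrary units), 12 increments.
hist = run_incremental_recursive(e0=3000.0, increments=12, traffic_per_inc=1e5)
```

The feedback is what makes the procedure recursive rather than a one-shot calculation: as the layer softens, strains grow, which accelerates further damage in later increments.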
The first two chapters summarize the steps needed to satisfy the sustainable pavement design procedure. In Chapter I the problem of eco-compatible asphalt pavement design is introduced. The low-environmental-impact materials, Warm Mix Asphalt and Rubberized Asphalt Concrete, are described in detail. In addition, the value of a rational asphalt pavement design method is discussed. Chapter II underlines the importance of a deep laboratory characterization based on appropriate materials selection and performance evaluation. In Chapter III, CalME is introduced through an explanation of its design approaches, with particular attention to the I-R procedure. In Chapter IV, the experimental program is presented, with an explanation of the laboratory test devices adopted. The fatigue and rutting performances of the study mixes are shown in Chapters V and VI, respectively. From these laboratory test data, the CalME I-R model parameters for the master curve, fatigue damage and permanent shear strain were evaluated. Lastly, in Chapter VII, the results of the simulations of asphalt pavement structures with different surface layers are reported. For each pavement structure, the total surface cracking, the total rutting, the fatigue damage and the rutting depth in each bound layer were analyzed.
Abstract:
Adaptive Optics is the real-time measurement and correction of the wavefront aberration of starlight caused by atmospheric turbulence, which limits the angular resolution of ground-based telescopes and thus their ability to explore faint and crowded astronomical objects in depth. The lack, over a relevant fraction of the sky, of natural stars bright enough to be used as reference sources for Adaptive Optics led to the introduction of artificial reference stars. These so-called Laser Guide Stars are produced by exciting the sodium atoms in a layer lying at 90 km of altitude with a powerful laser beam projected toward the sky. The possibility of turning on a reference star close to the scientific targets of interest has the drawback of increased difficulty in wavefront measurement, mainly due to the time instability of the sodium layer density. These issues grow with the telescope diameter. In view of the construction of the 42 m diameter European Extremely Large Telescope, a detailed investigation of the achievable performance of Adaptive Optics becomes mandatory in order to exploit its unique angular resolution. The goal of this thesis is to present a complete description of the development of a laboratory prototype simulating a Shack-Hartmann wavefront sensor using Laser Guide Stars as references, under the conditions expected for a 42 m telescope. From the conceptual design, through the opto-mechanical design, to the Assembly, Integration and Test, all phases of the prototype construction are explained. The tests carried out showed the reliability of the images produced by the prototype, which agreed with the numerical simulations. On this basis, some possible upgrades of the opto-mechanical design are presented, to extend the system's functionalities and let the prototype become a more complete test bench for simulating performance and driving the design of future Adaptive Optics modules.
Abstract:
This work presents hybrid Constraint Programming (CP) and metaheuristic methods for the solution of large-scale optimization problems; it aims at integrating concepts and mechanisms from metaheuristic methods into a CP-based tree search environment, in order to exploit the advantages of both approaches. The modeling and solution of large-scale combinatorial optimization problems is a topic which has attracted the interest of many researchers in the Operations Research field; combinatorial optimization problems are widespread in everyday life, and the need to solve difficult problems is ever more urgent. Metaheuristic techniques have been developed over the last decades to effectively handle the approximate solution of combinatorial optimization problems; we examine metaheuristics in detail, focusing on the aspects common to different techniques. Each metaheuristic approach possesses its own peculiarities in designing and guiding the solution process; our work aims at recognizing components which can be extracted from metaheuristic methods and re-used in different contexts. In particular, we focus on the possibility of porting metaheuristic elements to constraint-programming-based environments, as constraint programming is able to deal with the feasibility issues of optimization problems very effectively. Moreover, CP offers a general paradigm which allows any type of problem to be modelled easily and solved with a problem-independent framework, unlike local search and metaheuristic methods, which are highly problem-specific. In this work we describe the implementation of the Local Branching framework, originally developed for Mixed Integer Programming, in a CP-based environment. Constraint-programming-specific features are used to ease the search process, while maintaining the full generality of the approach.
We also propose a search strategy called Sliced Neighborhood Search (SNS), which iteratively explores slices of large neighborhoods of an incumbent solution by performing CP-based tree search, and which incorporates concepts from metaheuristic techniques. SNS can be used as a stand-alone search strategy, but it can alternatively be embedded in existing strategies as an intensification and diversification mechanism. In particular, we show its integration within CP-based local branching. We provide an extensive experimental evaluation of the proposed approaches on instances of the Asymmetric Traveling Salesman Problem and of the Asymmetric Traveling Salesman Problem with Time Windows. The proposed approaches achieve good results on practical-size problems, thus demonstrating the benefit of integrating metaheuristic concepts in CP-based frameworks.
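The slicing idea behind SNS can be caricatured on a toy problem: repeatedly free a random subset of the incumbent's variables and search that slice exhaustively, keeping any improvement. The exhaustive enumeration below stands in for the CP tree search over the slice; the objective, domains and parameters are all invented for the example.

```python
import itertools
import random

def sns_minimize(cost, domains, incumbent, slice_size=2, iters=30, seed=0):
    """Toy Sliced Neighborhood Search: each iteration frees a random
    'slice' of variables, searches it exhaustively (a stand-in for the
    CP-based tree search), and keeps any improving assignment."""
    rnd = random.Random(seed)
    best = list(incumbent)
    best_cost = cost(best)
    n = len(best)
    for _ in range(iters):
        free = rnd.sample(range(n), slice_size)     # the slice to explore
        for values in itertools.product(*(domains[i] for i in free)):
            cand = list(best)
            for i, v in zip(free, values):
                cand[i] = v
            c = cost(cand)
            if c < best_cost:                       # intensification step
                best, best_cost = cand, c
    return best, best_cost

# Toy separable objective whose optimum is x = [0, 0, 0, 0].
cost = lambda x: sum(v * v for v in x)
domains = [list(range(-2, 3))] * 4
sol, val = sns_minimize(cost, domains, incumbent=[2, 2, 2, 2])
```

Randomizing which variables form the slice is what provides diversification; searching each slice to completion provides intensification around the incumbent.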
Abstract:
The aim of the present thesis, carried out in the Analytical Group of the Faculty of Industrial Chemistry in Bologna, is to develop a new electrochemical method for the determination of the Antioxidant Capacity (AOC). The approach is based on the deposition of a non-conducting polymeric film on the working electrode surface and its subsequent exposure to the OH· radicals produced by H2O2 photolysis. The strongly oxidizing action of the hydroxyl radicals degrades the film, causing an increase of the Faradaic current of the redox couple [Ru(NH3)6]2+/3+ monitored by cyclic voltammetry (CV); the presence of an antioxidant compound in solution slows down the radical action, thus protecting the polymeric film and blocking the charge transfer. The parameter adopted for the quantification of the AOC was the induction time, also called the lag phase, which is the time at which degradation of the film starts. Five pure compounds, among the most common antioxidants, were investigated: Trolox® (a water-soluble analogue of vitamin E), (L)-ascorbic acid, gallic acid, pyrogallol and (-)-epicatechin. The AOC of each antioxidant was expressed by the TEAC index (Trolox® Equivalent Antioxidant Capacity), calculated as the ratio between the slope of the calibration curve of the target compound and the slope of the calibration curve of Trolox®. The results from the electrochemical method were compared with those obtained from other widely employed standardized methods. The assays used for the comparison were: ORAC, a spectrofluorimetric method based on the decrease of fluorescein emission after the attack of alkylperoxide radicals; ABTS and DPPH, which exploit the decoloration of stable nitrogen radicals when they are reduced in the presence of an antioxidant compound; and, finally, a potentiometric method based on the response of the redox couple [Fe(CN)6]3-/[Fe(CN)6]4-.
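The TEAC computation described above reduces to a ratio of calibration-curve slopes, which can be sketched as follows; the concentrations and lag-phase values below are illustrative numbers, not measured data.

```python
def slope(xs, ys):
    """Least-squares slope of the calibration curve ys vs. xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def teac(conc, lag_compound, lag_trolox):
    """TEAC = slope of the compound's calibration curve / slope of Trolox's."""
    return slope(conc, lag_compound) / slope(conc, lag_trolox)

# Illustrative data only: lag phase (s) vs. concentration (uM).
conc = [10.0, 20.0, 30.0, 40.0]
lag_compound = [40.0, 80.0, 120.0, 160.0]   # slope 4.0
lag_trolox = [20.0, 40.0, 60.0, 80.0]       # slope 2.0
```

With these invented numbers the compound doubles the protective effect of Trolox® per unit concentration, so its TEAC is 2.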
From the results obtained on pure compounds, ORAC was found to be the methodology showing the best correlation with the developed electrochemical method, possibly because similar radical species are involved. The comparison between the considered assays was also extended to the analysis of a real sample of fruit juice. In this case, the TEAC value resulting from the electrochemical method was higher than those from the standardized assays.
Abstract:
Hydrogen peroxide (H2O2) is a powerful oxidant commonly used in a wide range of industrial applications. Several methods for the quantification of H2O2 have been developed. Among them, electrochemical methods exploit the ability of some hexacyanoferrates (such as Prussian Blue) to detect H2O2 at potentials close to 0.0 V (vs. SCE), avoiding the secondary reactions which are likely to occur at large overpotentials. This electrocatalytic behaviour makes hexacyanoferrates excellent redox mediators. When deposited as thin films on electrode surfaces, they can be employed in the fabrication of sensors and biosensors, normally operated in solutions at pH values close to physiological ones. As hexacyanoferrates show limited stability in solutions that are not strongly acidic, it is necessary to improve the configuration of the modified electrodes to increase the stability of the films. In this thesis work, organic conducting polymers were used to fabricate composite films with Prussian Blue (PB) to be electro-deposited on Pt surfaces, in order to increase their pH stability. Different electrode configurations and different methods of synthesis of both components were tested, and for each the achievement of a possible increase in the operational stability of Prussian Blue was verified. Good results were obtained with the polymer 3,3''-didodecyl-2,2':5',2''-terthiophene (poly(3,3''-DDTT)), whose presence created a favourable microenvironment for the electrodeposition of Prussian Blue. The electrochemical behaviour of the modified electrodes was studied in both aqueous and organic solutions. Poly(3,3''-DDTT) showed no response in aqueous solution in the potential range where PB is electroactive; thus, in buffered aqueous solution it was possible to characterize the composite material, focusing only on the redox behaviour of PB. A combined effect of the anion and cation of the supporting electrolyte was noticed.
The response of Pt electrodes modified with films of the PB/poly(3,3''-DDTT) composite was evaluated for the determination of H2O2. The performance of such films was found to be better than that of PB alone. It can be concluded that poly(3,3''-DDTT) plays a key role in the stabilization of Prussian Blue, also yielding a wider linear range for the electrocatalytic response to H2O2.