64 results for exploit


Relevance:

10.00%

Publisher:

Abstract:

Nowadays, in ubiquitous computing scenarios, users increasingly expect to exploit online content and services from any device at hand, regardless of their physical location, with content and service access personalized and tailored to their own requirements. The coordinated provisioning of content tailored to user context and preferences, and support for mobile multimodal and multichannel interactions, are of paramount importance in providing truly effective ubiquitous support. However, so far the intrinsic heterogeneity of the field and the lack of an integrated approach have led to proposals that are either too vertical or practically unusable, resulting in poor, non-versatile support platforms for ubiquitous computing. This work investigates and promotes design principles that help cope with these ever-changing, inherently dynamic scenarios. Following the outlined principles, we have designed and implemented a middleware platform that supports the provisioning of ubiquitous mobile services and content. To prove the viability of our approach, we have built and stress-tested on top of this platform a number of different, highly complex and heterogeneous content and service provisioning scenarios. The encouraging results obtained are pushing our research further, towards a platform that not only dynamically supports novel ubiquitous application scenarios by tailoring extremely diverse services and content to heterogeneous user needs, but can also reconfigure and adapt itself to provide truly optimized, tailored support for ubiquitous service provisioning.
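
The content-tailoring idea at the core of such a middleware can be sketched as follows. This is a minimal illustration in Python: the function name `tailor` and the context keys are invented for the example, not the platform's actual API.

```python
# Minimal sketch of context-driven content tailoring, in the spirit of
# the middleware described above. All names (the "tailor" function, the
# context keys) are illustrative assumptions, not the platform's API.

def tailor(content: dict, context: dict) -> dict:
    """Select the content variant that best fits the requesting device."""
    if context.get("screen_width", 1024) < 480:
        variant = "mobile"                   # small screen: short summary
    elif not context.get("supports_video", True):
        variant = "text"                     # no video support: plain text
    else:
        variant = "full"                     # capable device: full content
    return {"variant": variant, "body": content[variant]}

page = {"full": "article + video", "mobile": "summary", "text": "article"}
assert tailor(page, {"screen_width": 320})["variant"] == "mobile"
assert tailor(page, {"supports_video": False})["variant"] == "text"
```

In a real platform the selection logic would of course be driven by richer context models and user preferences, but the shape of the decision is the same.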

Relevance:

10.00%

Publisher:

Abstract:

Technology advances in recent years have dramatically changed the way users exploit content and services available on the Internet, enforcing pervasive and mobile computing scenarios and enabling access to networked resources almost from everywhere, at any time, and independently of the device in use. In addition, people increasingly expect to customize their experience, exploiting specific device capabilities and limitations, the inherent features of the communication channel in use, and interaction paradigms that differ significantly from the traditional request/response one. This so-called Ubiquitous Internet scenario calls for solutions that address many different challenges, such as device mobility, session management, content adaptation, context-awareness, and the provisioning of multimodal interfaces. Moreover, new service opportunities demand simple and effective ways to integrate existing resources into new, value-added applications that can also undergo run-time modifications according to ever-changing execution conditions. Although service-oriented architectural models are gaining momentum in taming the increasing complexity of composing and orchestrating distributed, heterogeneous functionalities, existing solutions generally lack a unified approach and only support specific Ubiquitous Internet aspects. Moreover, they usually target rather static scenarios and scarcely support the dynamic nature of pervasive access to Internet resources, which can quickly render existing compositions obsolete or inadequate, and hence in need of reconfiguration. This thesis proposes a novel middleware approach that comprehensively deals with the facets of the Ubiquitous Internet and assists in establishing innovative application scenarios.
We claim that a truly viable ubiquity support infrastructure must neatly decouple the distributed resources to integrate, and push any kind of content-related logic outside its core layers, keeping only management and coordination responsibilities. Furthermore, we promote an innovative, open, and dynamic resource composition model that makes it easy to describe and enforce complex scenario requirements, and to react appropriately to changes in the execution conditions.
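
The notion of a composition that can be reconfigured at run time can be illustrated with a small sketch: a composition is modelled as an ordered pipeline of processing stages, and a stage is swapped out when execution conditions change. The `Pipeline` class and the stage names are invented for the example, not the thesis middleware's model.

```python
# Hedged sketch of an open, dynamic resource-composition model: a
# composition is an ordered pipeline of stages that can be reconfigured
# at run time when execution conditions change. Illustrative names only.

class Pipeline:
    def __init__(self, *stages):
        self.stages = list(stages)

    def run(self, data):
        for stage in self.stages:
            data = stage(data)
        return data

    def replace(self, old, new):
        """React to changed conditions by swapping one stage in place."""
        self.stages[self.stages.index(old)] = new

transcode = lambda d: d + "|hi-res"
downscale = lambda d: d + "|low-res"

p = Pipeline(transcode)
assert p.run("clip") == "clip|hi-res"
p.replace(transcode, downscale)      # e.g. available bandwidth dropped
assert p.run("clip") == "clip|low-res"
```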

Relevance:

10.00%

Publisher:

Abstract:

Since the birth of the European Economic Community in 1957, the development of a single market through the integration of national freight transport networks has been one of the most important points on the European Union agenda. Increasingly congested motorways, rising oil prices, and concerns about the environment and climate change require the optimization of transport systems and transport processes. The most promising solution is intermodal transport, in which the most efficient transport option is used for each leg of the journey. This thesis examines the problem of defining innovative strategies and procedures for the sustainable development of intermodal freight transport in Europe. In particular, the roles of maritime transport and railway transport in the intermodal chain are examined in depth, as these modes are recognized to be environmentally friendly and energy efficient. Maritime transport is the only mode that has kept pace with the fast growth of road transport, but its full exploitation must be promoted by integrating short sea shipping as a service in the intermodal door-to-door supply chain and by improving port accessibility. The role of Motorways of the Sea services as part of the Trans-European Transport Network is taken into account: a picture of European policy and a state of the art of the Italian Motorways of the Sea system are reported. Afterwards, the focus shifts from line problems to node problems: the role of intermodal railway terminals in the transport chain is discussed. In particular, the last-mile process is examined, as it is crucial to exploiting the full capacity of an intermodal terminal. The differences between the present last-mile planning models of Bologna Interporto and Verona Quadrante Europa are described and discussed. Finally, a new approach to railway intermodal terminal planning and management is introduced by describing the case of the "Terminal Gate" at Verona Quadrante Europa.
Some proposals to favour the integrated management of "Terminal Gate" and the allocation of its capacity are drawn up.
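
As a toy illustration of the kind of capacity-allocation question raised for "Terminal Gate" (the thesis's actual planning models are not reproduced here), one can sketch a first-come-first-served slot allocator in which a train is postponed to the next time window when its preferred one is full. All names and numbers are invented:

```python
# Illustrative sketch only (not the thesis model): allocating terminal
# handling slots to requested train arrivals, first-come-first-served,
# with a fixed number of slots per time window.

from collections import defaultdict

def allocate(requests, slots_per_window=2):
    """requests: list of (train_id, preferred_window).
    A train is shifted to the next window if its preferred one is full."""
    load = defaultdict(int)
    plan = {}
    for train, window in requests:
        while load[window] >= slots_per_window:
            window += 1                 # postpone to the next window
        load[window] += 1
        plan[train] = window
    return plan

plan = allocate([("T1", 0), ("T2", 0), ("T3", 0)])
assert plan == {"T1": 0, "T2": 0, "T3": 1}   # T3 overflows into window 1
```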

Relevance:

10.00%

Publisher:

Abstract:

The Eduardo Mondlane irrigation scheme, situated in Chókwè District, in the southern part of Gaza province within the Limpopo River Basin, is the largest in the country, covering approximately 30,000 hectares of land. Built by the Portuguese colonial administration in the 1950s to exploit the agricultural potential of the area through cash-cropping, after Independence it became one of Frelimo's flagship projects, aiming at the "socialization of the countryside" and at agricultural economic development through the creation of a state farm and of several cooperatives. The failure of Frelimo's economic reforms, several infrastructural constraints, and local farmers' resistance to collective forms of production led the scheme to a state of severe degradation, aggravated by the floods of the year 2000. A project of technical rehabilitation initiated after the floods is currently accompanied by a strong "efficiency" discourse from the managing institution, which strongly opposes the use of irrigated land for subsistence agriculture, historically a major livelihood strategy for small farmers, particularly women. In fact, the area has been characterized since the end of the nineteenth century by a stable pattern of male migration towards the South African mines, which has resulted in a steady increase of women-headed households (both de jure and de facto). The relationship between land reform, agricultural development, poverty alleviation, and gender equality in Southern Africa has long been debated in the academic literature. Within this debate, the role of agricultural activities in irrigation schemes is particularly interesting considering that, in a drought-prone area, access to water for irrigation means increased possibilities of improving food and livelihood security, and income levels.
In the case of Chókwè, local government institutions are endorsing the development of commercial agriculture through initiatives such as partnerships with international cooperation agencies or joint ventures with private investors. While these business models can sometimes lead to positive outcomes in terms of poverty alleviation, it is important to recognize that decentralization and neoliberal reforms occur in the context of a financial and political crisis of the State, which lacks the resources to efficiently manage infrastructures such as irrigation systems. These institutional and economic reforms risk accelerating processes of social and economic marginalisation, including landlessness, in particular for poor rural women, who mainly use irrigated land for subsistence production. The study combines an analysis of the historical and geographical context with a study of the relevant literature and original fieldwork. Fieldwork was conducted between February and June 2007 (when I mainly collected secondary data, maps, and statistics and conducted a preliminary visit to Chókwè) and from October 2007 to March 2008. The fieldwork methodology was qualitative and used semi-structured interviews with central and local government officials, technical experts of the irrigation scheme, civil society organisations, international NGOs, rural extensionists, and water users from the irrigation scheme, in particular women small farmers who are members of local farmers' associations. Thanks to the collaboration of the Union of Farmers' Associations of Chókwè, I was able to participate in members' meetings and in education and training activities addressed to women farmer members of the Union, and to organize a group discussion. In the Chókwè irrigation scheme, women account for 32% of the water users of the family sector (comprising plot-holders with less than 5 hectares of land) and for just 5% of the private sector.
If one considers the farmers' associations of the family sector (a legacy of Frelimo's cooperatives), women are 84% of total members. However, the security given to them by the land title they have acquired through occupation is severely endangered by the use they make of the land, which the irrigation scheme authority considers "non-efficient". Owing to reduced access to marketing opportunities and to inputs, training, information, and credit, women in actual fact risk seeing their right of access to land and water revoked because they are unable to sustain the increasing cost of the water fee. The myth of the "efficient producer" does not take into account the inequality and gender discrimination that characterize the neo-liberal market. Expecting small farmers, and in particular women, to be able to compete in the globalized agricultural market seems unrealistic, and can perpetuate unequal gendered access to resources such as land and water.

Relevance:

10.00%

Publisher:

Abstract:

Many research fields are pushing the engineering of large-scale, mobile, and open systems towards the adoption of techniques inspired by self-organisation: pervasive computing, but also distributed artificial intelligence, multi-agent systems, social networks, and peer-to-peer and grid architectures exploit adaptive techniques to make global system properties emerge in spite of the unpredictability of interactions and behaviour. This trend is also visible in coordination models and languages, whenever a coordination infrastructure needs to manage interactions in highly dynamic and unpredictable environments. As a consequence, self-organisation can be regarded as a feasible metaphor on which to base a radically new conceptual coordination framework. The resulting framework defines a novel coordination paradigm, called self-organising coordination, based on the idea of spreading coordination media over the network and charging them with services that manage interactions according to local criteria, so that desired and fruitful global coordination properties of the system emerge. Features like topology, locality, time-reactiveness, and stochastic behaviour play a key role both in the definition of this conceptual framework and in the consequent development of self-organising coordination services. Within this framework, the thesis presents several self-organising coordination techniques developed during the PhD course, mainly concerning data distribution in tuple-space-based coordination systems. Some of these techniques have also been implemented in ReSpecT, a coordination language for tuple spaces based on logic tuples and reactions to events occurring in a tuple space.
In addition, the key role played by simulation and formal verification has been investigated, leading to an analysis of how automatic verification techniques like probabilistic model checking can be exploited to formally prove the emergence of desired behaviours in coordination approaches based on self-organisation. To this end, a concrete case study is presented and discussed.
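
The core idea of self-organising data distribution over tuple spaces can be sketched in a few lines: each node hosts a local tuple space, and a local "reaction" probabilistically spreads an inserted tuple to neighbouring nodes, so global distribution emerges from purely local criteria. This is illustrative Python, not ReSpecT syntax or its actual reaction model.

```python
# Toy sketch of self-organising coordination: each node hosts a tuple
# space; a local reaction probabilistically diffuses inserted tuples to
# neighbours, so distribution emerges from local criteria only.
# (Illustrative Python, not ReSpecT.)

import random

class Node:
    def __init__(self, name):
        self.name, self.space, self.neighbours = name, set(), []

    def out(self, tup, spread_p=1.0, rng=random.random):
        """Insert a tuple; the local reaction may diffuse it further."""
        if tup in self.space:
            return                          # already seen: stop spreading
        self.space.add(tup)
        for n in self.neighbours:
            if rng() < spread_p:
                n.out(tup, spread_p, rng)

a, b, c = Node("a"), Node("b"), Node("c")
a.neighbours, b.neighbours = [b], [c]
a.out(("temp", 21), spread_p=1.0)           # deterministic for the demo
assert ("temp", 21) in c.space              # the tuple diffused to c
```

With `spread_p < 1` the diffusion becomes stochastic, which is exactly the kind of emergent behaviour that probabilistic model checking can then be used to verify.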

Relevance:

10.00%

Publisher:

Abstract:

Higher-order process calculi are formalisms for concurrency in which processes can be passed around in communications. Higher-order (or process-passing) concurrency is often presented as an alternative paradigm to the first-order (or name-passing) concurrency of the pi-calculus for the description of mobile systems. These calculi are inspired by, and formally close to, the lambda-calculus, whose basic computational step, beta-reduction, involves term instantiation. The theory of higher-order process calculi is more complex than that of first-order process calculi. This shows up in, for instance, the definition of behavioral equivalences. A long-standing approach to overcoming this burden is to define encodings of higher-order processes into a first-order setting, so as to transfer the theory of the first-order paradigm to the higher-order one. While satisfactory for calculi with basic (higher-order) primitives, this indirect approach falls short for higher-order process calculi featuring constructs for phenomena such as localities and dynamic system reconfiguration, which are frequent in modern distributed systems. Indeed, for higher-order process calculi involving little more than traditional process communication, encodings into some first-order language are difficult to handle or do not exist. We therefore observe that foundational studies of higher-order process calculi must be carried out directly on them, exploiting their peculiarities. This dissertation contributes to such foundational studies for higher-order process calculi. We concentrate on two closely interwoven issues in process calculi: expressiveness and decidability. Surprisingly, these issues have been little explored in the higher-order setting. Our research is centered around a core calculus for higher-order concurrency in which only the operators strictly necessary to obtain higher-order communication are retained.
We develop the basic theory of this core calculus and rely on it to study the expressive power of features universally accepted as basic in process calculi, namely synchrony, forwarding, and polyadic communication.
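
The distinguishing primitive, communication of a process rather than a name, can be illustrated informally: the value sent over a channel is itself a process, which the receiver then activates. This is a toy Python analogy (functions standing in for processes, a queue for a channel), not a faithful operational semantics of the calculus.

```python
# Toy illustration of higher-order (process-passing) communication:
# the transmitted value IS a process, which the receiver activates,
# mirroring the only essential primitive of the core calculus.
# (Illustrative Python, not a faithful calculus semantics.)

from queue import Queue

channel = Queue()

def sender():
    # Send a process (here represented as a function) over the channel.
    channel.put(lambda: "migrated process ran")

def receiver():
    process = channel.get()   # receive a process ...
    return process()          # ... and activate it locally

sender()
assert receiver() == "migrated process ran"
```

By contrast, a first-order (name-passing) step would transmit only a channel name; encoding the example above in such a setting requires the indirect translations the dissertation discusses.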

Relevance:

10.00%

Publisher:

Abstract:

The ever-increasing spread of automation in industry places the electrical engineer in a central role as a promoter of technological development in the use of electricity, which underlies all machinery and production processes. Moreover, the spread of drives for motor control and of static converters with ever more complex structures confronts the electrical engineer with new challenges, whose solution hinges on the implementation of digital control techniques that keep the final product inexpensive and efficient. The successful application of solutions based on non-conventional static converters is awakening increasing interest in science and industry because of the promising opportunities; at the same time, however, new problems emerge whose solution is still under study and debate in the scientific community. During the Ph.D. course, several themes were developed that, while attracting the recent and growing interest of the scientific community, leave ample room for further research and for industrial application. The first research area concerns the control of three-phase induction motors with high dynamic performance and sensorless control in the high-speed range. Operating an induction machine without position or speed sensors is of interest to industry because of the increased reliability and robustness of this solution, combined with lower production and purchase costs compared with the other technologies available on the market. In this dissertation, control techniques are proposed that can exploit the total dc-link voltage and, at the same time, the maximum torque capability over the whole speed range with good dynamic performance, while preserving simple tuning of the regulators.
Furthermore, to validate the effectiveness of the presented solution, it is assessed in terms of performance and complexity and compared with two other algorithms presented in the literature. The feasibility of the proposed algorithm is also tested on an induction motor drive fed by a matrix converter. Another important research area is the development of technology for vehicular applications. In this field, dynamic performance and low power consumption are among the most important goals of an effective algorithm. In this direction, a control scheme for induction motors is presented that integrates, within a coherent solution, several features commonly required of an electric vehicle drive. Its main features are the capability to exploit the maximum torque over the whole speed range, a weak dependence on the motor parameters, good robustness against variations of the dc-link voltage and, whenever possible, maximum efficiency. The second part of this dissertation is dedicated to multi-phase systems. This technology is characterized by a number of issues worthy of investigation that make it competitive with other technologies already on the market. Multi-phase systems allow power to be redistributed over a higher number of phases, making it possible to build electronic converters that would otherwise be very difficult to achieve because of the limits of present power electronics. Multi-phase drives have an intrinsic reliability: a fault in one phase, caused by the failure of a converter component, can be handled without loss of machine efficiency or the appearance of a pulsating torque. Controlling the spatial harmonics of the air-gap magnetic field with order higher than one makes it possible to reduce torque ripple and to obtain high-torque-density motors and multi-motor applications.
In a later chapter, a control scheme is presented that increases the motor torque by adding a third-harmonic component to the air-gap magnetic field. Above base speed the control system reduces the motor flux so as to ensure the maximum torque capability. The presented analysis considers the drive constraints and shows how these limits modify the motor performance. The multi-motor applications considered consist of a well-defined number of multi-phase machines with series-connected stator windings; with a suitable permutation of the phases, these machines can be independently controlled by a single multi-phase inverter. In this dissertation this solution is presented, together with an electric drive consisting of two five-phase PM tubular actuators fed by a single five-phase inverter. Finally, the modulation strategies for a multi-phase inverter are illustrated. The problem of space vector modulation of multi-phase inverters with an odd number of phases is solved in different ways: an algorithmic approach and a look-up-table solution are proposed. The inverter output voltage capability is investigated, showing that the proposed modulation strategy can fully exploit the dc input voltage in both sinusoidal and non-sinusoidal operating conditions. All these aspects are considered in the following chapters. In particular, Chapter 1 summarizes the mathematical model of the induction motor. Chapter 2 is a brief state of the art on the three-phase inverter. Chapter 3 proposes a stator flux vector control for a three-phase induction machine and compares this solution with two other algorithms presented in the literature; in the same chapter, a complete electric drive based on a matrix converter is also presented. Chapter 4 illustrates a control strategy suitable for electric vehicles.
Chapter 5 describes the mathematical model of multi-phase induction machines, whereas Chapter 6 analyzes the multi-phase inverter and its modulation strategies. Chapter 7 discusses the minimization of power losses in IGBT multi-phase inverters with carrier-based pulse width modulation. Chapter 8 presents an extended stator flux vector control for a seven-phase induction motor. Chapter 9 concerns high-torque-density applications, and Chapter 10 analyzes different fault-tolerant control strategies. Finally, the last chapter presents a positioning multi-motor drive consisting of two PM tubular five-phase actuators fed by a single five-phase inverter.
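
The flux-reduction rule mentioned above ("above the base speed the control system reduces the motor flux") is the standard field-weakening relation: below base speed the flux is held at its rated value and the torque limit is constant; above it the flux is reduced roughly as w_base/w, so the available torque falls as 1/w (constant-power region). A hedged numerical sketch, with purely illustrative values, not the thesis's actual control law:

```python
# Hedged sketch of the standard field-weakening rule: constant torque
# up to base speed, then torque capability falling as w_base/w so that
# output power stays roughly constant. Values are illustrative only.

def max_torque(w, w_base=100.0, t_rated=50.0):
    """Torque capability (Nm) versus speed w (rad/s)."""
    if w <= w_base:
        return t_rated                    # constant-torque region
    return t_rated * w_base / w           # constant-power (flux-weakened)

assert max_torque(50) == 50.0             # below base speed: rated torque
assert max_torque(200) == 25.0            # at 2x base speed: half torque
# Power is constant above base speed: T * w = t_rated * w_base
assert max_torque(200) * 200 == max_torque(400) * 400
```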

Relevance:

10.00%

Publisher:

Abstract:

This PhD thesis discusses the rationale for the design and use of synthetic oligosaccharides in the development of glycoconjugate vaccines, and the role of physicochemical methods in the characterization of these vaccines. The study concerns two infectious diseases that represent a serious problem for national healthcare programs: human immunodeficiency virus (HIV) and Group A Streptococcus (GAS) infections. Both pathogens possess distinctive carbohydrate structures that have been described as suitable targets for vaccine design. The Group A Streptococcus cell-membrane polysaccharide (GAS-PS) is an attractive vaccine antigen candidate based on its conserved, constant expression pattern and its ability to confer immunoprotection in a relevant mouse model. Analysis of the immunogenic response within at-risk populations suggests an inverse correlation between high anti-GAS-PS antibody titres and GAS infection cases. Recent studies show that a chemically synthesized core polysaccharide-based antigen may represent an antigenic structural determinant of the large polysaccharide. Based on GAS-PS structural analysis, the study evaluates the potential of a synthetic design approach to GAS vaccine development and compares the efficiency of synthetic antigens with that of the long polysaccharide isolated from GAS. Synthetic GAS-PS structural analogues were specifically designed and generated to explore the impact of antigen length and terminal residue composition. For the HIV-1 glycoantigens, the dense glycan shield on the surface of the envelope protein gp120 was chosen as a target. This shield masks conserved protein epitopes and facilitates virus spread via binding to glycan receptors on susceptible host cells. The broadly neutralizing monoclonal antibody 2G12 binds a cluster of high-mannose oligosaccharides on the gp120 subunit of the HIV-1 Env protein, and this oligomannose epitope has been a subject of synthetic vaccine development.
The clustered nature of the 2G12 epitope suggested that multivalent antigen presentation would be important in developing a carbohydrate-based vaccine candidate. I describe the development of neoglycoconjugates displaying clustered HIV-1-related oligomannose carbohydrates, and their immunogenic properties.

Relevance:

10.00%

Publisher:

Abstract:

Generic programming is likely to become a new challenge for a critical mass of developers. It is therefore crucial to refine the support for generic programming in mainstream object-oriented languages, both at the design and at the implementation level, and to suggest novel ways to exploit the additional expressiveness made available by genericity. This study is meant to contribute towards bringing Java genericity to a more mature stage with respect to mainstream programming practice, by increasing the effectiveness of its implementation and by revealing its full expressive power in real-world scenarios. With respect to the current research setting, the main contribution of the thesis is twofold. First, we propose a revised implementation of Java generics that greatly increases the expressiveness of the Java platform by adding reification support for generic types. Secondly, we show how Java genericity can be leveraged in a real-world case study in the context of multi-paradigm language integration. Several approaches have been proposed to overcome the lack of reification of generic types in the Java programming language. Existing approaches tackle the problem by defining new translation techniques that allow a runtime representation of generics and wildcards. Unfortunately, most approaches suffer from several problems: heterogeneous translations are known to be problematic when reifying generic methods and wildcards, while more sophisticated techniques requiring changes to the Java runtime support reified generics through a true language extension (where clauses), at the price of compromising backward compatibility.
In this thesis we develop a sophisticated type-passing technique to address the problem of reification of generic types in the Java programming language; this approach, first pioneered by the so-called EGO translator, is here turned into a full-blown solution that reifies generic types inside the Java Virtual Machine (JVM) itself, thus overcoming both the performance penalties and the compatibility issues of the original EGO translator. Java-Prolog integration. Integrating object-oriented and declarative programming has been the subject of several research efforts and corresponding technologies. Such proposals come in two flavours: either attempting to join the two paradigms, or simply providing an interface library for accessing Prolog declarative features from a mainstream object-oriented language such as Java. Both solutions have drawbacks: hybrid languages featuring both object-oriented and logic traits are typically too complex, making mainstream application development a harder task; with library-based integration approaches there is no true language integration, and some "boilerplate code" has to be written to bridge the paradigm mismatch. In this thesis we develop a framework called PatJ that promotes seamless exploitation of Prolog programming in Java. A sophisticated use of generics and wildcards allows a precise mapping between object-oriented and declarative features to be defined. PatJ defines a hierarchy of classes in which the bidirectional semantics of Prolog terms is modelled directly at the level of the Java generic type system.
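
The intuition behind "type-passing" reification can be shown in a language-neutral way: the type argument is carried as an explicit run-time token, so checks that erasure would make impossible can still be performed. The sketch below is illustrative Python, not the EGO translator or the JVM-level solution the thesis develops; in Java the token passing is done by the translation/runtime, not by the programmer.

```python
# Language-neutral sketch of the "type-passing" idea behind reified
# generics: the element type travels as an explicit run-time token,
# enabling checks that Java's erasure would otherwise lose.
# (Illustrative Python; not the EGO translator or the JVM extension.)

class TypedList:
    def __init__(self, elem_type):
        self.elem_type = elem_type        # the reified type argument
        self.items = []

    def add(self, item):
        if not isinstance(item, self.elem_type):
            raise TypeError(f"expected {self.elem_type.__name__}")
        self.items.append(item)

ints = TypedList(int)                     # like a reified List<Integer>
ints.add(3)
try:
    ints.add("three")                     # rejected at run time
    rejected = False
except TypeError:
    rejected = True
assert rejected and ints.items == [3]
```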

Relevance:

10.00%

Publisher:

Abstract:

Tissue engineering is a discipline that aims at regenerating damaged biological tissues by using a cell construct engineered in vitro, made of cells grown in a porous 3D scaffold. The role of the scaffold is to guide cell growth and differentiation by acting as a bioresorbable temporary substrate that will eventually be replaced by new tissue produced by the cells. As a matter of fact, obtaining a successful engineered tissue requires a multidisciplinary approach that integrates the basic principles of biology, engineering, and materials science. The present Ph.D. thesis aimed at developing and characterizing innovative polymeric bioresorbable scaffolds made of hydrolysable polyesters. The potential of both commercial polyesters (i.e. poly(ε-caprolactone), polylactide, and some lactide copolymers) and non-commercial polyesters (i.e. poly(ω-pentadecalactone) and some of its copolymers) was explored and discussed. Two techniques were employed to fabricate scaffolds: supercritical carbon dioxide (scCO2) foaming and electrospinning (ES). The former is a powerful technology that makes it possible to produce 3D microporous foams while avoiding the use of solvents that can be toxic to mammalian cells. The scCO2 process, which is commonly applied to amorphous polymers, was successfully modified to foam a highly crystalline poly(ω-pentadecalactone-co-ε-caprolactone) copolymer, and the effect of process parameters on scaffold morphology and thermo-mechanical properties was investigated. In the course of the present research activity, sub-micrometric fibrous non-woven meshes were also produced using ES technology. Electrospun materials are considered highly promising scaffolds because they resemble the 3D organization of the native extracellular matrix. Careful control of the process parameters allowed the fabrication of defect-free fibres with diameters ranging from hundreds of nanometres to several microns, with either smooth or porous surfaces.
Moreover, the versatility of ES technology made it possible to produce electrospun scaffolds from different polyesters, as well as "composite" non-woven meshes obtained by concomitantly electrospinning fibres differing in both morphology and polymer material. The 3D architecture of the electrospun scaffolds fabricated in this research was controlled in terms of mutual fibre orientation by suitably modifying the instrumental apparatus. This aspect is particularly interesting, since the micro/nano-architecture of the scaffold is known to affect cell behaviour. Since last-generation scaffolds are expected to induce specific cell responses, the present research activity also explored the possibility of producing electrospun scaffolds that are bioactive towards cells. Bio-functionalized substrates were obtained by loading polymer fibres with growth factors (i.e. biomolecules that elicit specific cell behaviour), and it was demonstrated that, despite the high voltages applied during electrospinning, the growth factor retains its biological activity once released from the fibres upon contact with the cell culture medium. A second functionalization approach, ultimately aimed at controlling cell adhesion on electrospun scaffolds, consisted in covering the fibre surface with highly hydrophilic polymer brushes of glycerol monomethacrylate synthesized by Atom Transfer Radical Polymerization. Future investigations will exploit the hydroxyl groups of the polymer brushes to functionalize the fibre surface with desired biomolecules. The electrospun scaffolds were employed in cell culture experiments, performed in collaboration with biochemical laboratories, aimed at evaluating the biocompatibility of the new electrospun polymers and at investigating the effect of fibre orientation on cell behaviour.
Moreover, at a preliminary stage, electrospun scaffolds were also cultured with mammalian tumour cells to develop in vitro tumour models aimed at better understanding the role of the natural ECM in tumour malignancy in vivo.

Relevance:

10.00%

Publisher:

Abstract:

This thesis deals with Context Aware Services, Smart Environments, Context Management and solutions for Devices and Service Interoperability. Multi-vendor devices offer an increasing number of services and end-user applications that base their value on the ability to exploit the information originating from the surrounding environment by means of an increasing number of embedded sensors, e.g. GPS, compass, RFID readers, cameras and so on. However, usually such devices are not able to exchange information because of the lack of a shared data storage and common information exchange methods. A large number of standards and domain specific building blocks are available and are heavily used in today's products. However, the use of these solutions based on ready-to-use modules is not without problems. The integration and cooperation of different kinds of modules can be daunting because of growing complexity and dependency. In this scenarios it might be interesting to have an infrastructure that makes the coexistence of multi-vendor devices easy, while enabling low cost development and smooth access to services. This sort of technologies glue should reduce both software and hardware integration costs by removing the trouble of interoperability. The result should also lead to faster and simplified design, development and, deployment of cross-domain applications. This thesis is mainly focused on SW architectures supporting context aware service providers especially on the following subjects: - user preferences service adaptation - context management - content management - information interoperability - multivendor device interoperability - communication and connectivity interoperability Experimental activities were carried out in several domains including Cultural Heritage, indoor and personal smart spaces – all of which are considered significant test-beds in Context Aware Computing. 
The work evolved within European and national projects. On the European side, I carried out my research activity within EPOCH, the FP6 Network of Excellence on "Processing Open Cultural Heritage", and within SOFIA, a project of the ARTEMIS JU on embedded systems. I worked in cooperation with several international establishments, including the University of Kent, VTT (the Technical Research Centre of Finland) and Eurotech. On the national side, I contributed to a one-to-one research contract between ARCES and Telecom Italia. The first part of the thesis is focused on the problem statement and related work, and addresses interoperability issues and the related architectural components. The second part is focused on specific architectures and frameworks:
- MobiComp: a context management framework that I used in cultural heritage applications
- CAB: a context, preference and profile based application broker which I designed within the EPOCH Network of Excellence
- M3: a Semantic Web based information-sharing infrastructure for smart spaces, designed by Nokia within the European project SOFIA
- NoTA: a service- and transport-independent connectivity framework
- OSGi: the well-known Java-based service support framework
The final section is dedicated to the middleware, the tools and the software agents developed during my doctoral studies to support context-aware services in smart environments.
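The context-management role played by frameworks such as the one above can be sketched as a shared store with publish/subscribe notification: sensors publish context items under string keys, and adaptation components react to the keys they care about. This is a minimal illustrative sketch; the `ContextStore` class, its method names and the key scheme are hypothetical and do not reproduce the actual MobiComp API.

```python
from collections import defaultdict
from typing import Any, Callable

class ContextStore:
    """Minimal shared context store: producers publish typed context
    items under string keys; consumers subscribe to the keys they need.
    (Illustrative sketch only, not the MobiComp API.)"""

    def __init__(self) -> None:
        self._items: dict[str, Any] = {}
        self._subscribers: dict[str, list[Callable[[str, Any], None]]] = defaultdict(list)

    def subscribe(self, key: str, callback: Callable[[str, Any], None]) -> None:
        """Register a callback fired whenever `key` is updated."""
        self._subscribers[key].append(callback)

    def publish(self, key: str, value: Any) -> None:
        """Store a context item and notify interested consumers."""
        self._items[key] = value
        for cb in self._subscribers[key]:
            cb(key, value)

    def get(self, key: str, default: Any = None) -> Any:
        """Read the latest value of a context item."""
        return self._items.get(key, default)
```

A GPS driver would then call `publish("user.location", ...)` while, say, a content-adaptation component subscribes to that key; neither needs to know the other exists, which is exactly the decoupling that makes multi-vendor coexistence tractable.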

Relevância:

10.00%

Publicador:

Resumo:

The development of safe, high-energy and high-power electrochemical energy-conversion systems can be a response to the worldwide demand for clean, low-fuel-consumption transport. This thesis work, starting from basic studies on ionic liquid (IL) electrolytes and carbon electrodes and concluding with tests on large-size IL-based supercapacitor prototypes, demonstrated that the IL-based asymmetric configuration (AEDLC) is a powerful strategy for developing safe, high-energy supercapacitors that might compete with lithium-ion batteries in power-assist hybrid electric vehicles (HEVs). The increase of specific energy in EDLCs was pursued along three routes: i) the use of hydrophobic ionic liquids (ILs) as electrolytes; ii) the design and preparation of carbon electrode materials with tailored morphology and surface chemistry, to achieve a high capacitance response in IL; and iii) the asymmetric double-layer carbon supercapacitor configuration (AEDLC), which consists of assembling the supercapacitor with different carbon loadings at the two electrodes in order to exploit the wide electrochemical stability window (ESW) of the IL and to reach a high maximum cell voltage (Vmax). Among the various ILs investigated, N-methoxyethyl-N-methylpyrrolidinium bis(trifluoromethanesulfonyl)imide (PYR1(2O1)TFSI) was selected because of its hydrophobicity and high thermal stability up to 350 °C, together with good conductivity and a wide ESW, exploitable over a wide temperature range extending below 0 °C. For these exceptional properties, PYR1(2O1)TFSI was used throughout the study to develop the large-size IL-based carbon supercapacitor prototype. This work also highlights that the use of ILs leads to chemical-physical properties at the electrode/electrolyte interface that differ from those formed by conventional electrolytes.
Indeed, the absence of solvent in ILs means the interface properties are not mediated by a solvent; thus, the dielectric constant and double-layer thickness strictly depend on the chemistry of the IL ions. The study of carbon electrode materials highlights several factors that have to be taken into account when designing high-performing carbon electrodes for ILs. The heat treatment in inert atmosphere of the activated carbon AC, which yielded the ACT carbon featuring ca. 100 F/g in IL, demonstrated the importance of surface chemistry in the capacitive response of carbons in hydrophobic ILs. The tailored mesoporosity of the xerogel carbons is a key parameter for achieving a high capacitance response. The CO2-treated xerogel carbon X3a featured a high specific capacitance of 120 F/g in PYR14TFSI; however, because of its high pore volume, an excess of IL is required to fill the pores with respect to that needed for the charge-discharge process. Further advances were achieved with electrodes based on the disordered template carbon DTC7, with a pore size distribution centred at 2.7 nm, which featured a notably high specific capacitance of 140 F/g in PYR14TFSI and a moderate pore volume (V>1.5 nm = 0.70 cm3/g). This thesis work demonstrated that by means of the asymmetric configuration (AEDLC) it was possible to reach cell voltages up to 3.9 V. Indeed, IL-based AEDLCs with X3a or ACT carbon electrodes exhibited specific energy and power of ca. 30 Wh/kg and 10 kW/kg, respectively. The DTC7 carbon electrodes, featuring a capacitance response 20%-40% higher than those of X3a and ACT, respectively, enabled the development of a PYR14TFSI-based AEDLC with specific energy and power of 47 Wh/kg and 13 kW/kg at 60 °C with a Vmax of 3.9 V. Given the availability of the ACT carbon (obtained from a commercial material), the PYR1(2O1)TFSI-based AEDLCs assembled with ACT carbon electrodes were selected within the EU ILHYPOS project for the development of large-size prototypes.
This study demonstrated that the PYR1(2O1)TFSI-based AEDLC can operate between -30 °C and +60 °C, and its cycling stability was proven at 60 °C over 27,000 cycles with a high Vmax of up to 3.8 V. Such an AEDLC was further investigated following the USABC and DOE FreedomCAR reference protocols for HEVs, to evaluate its dynamic pulse-power and energy features. It was demonstrated that with a Vmax of 3.7 V at T > 30 °C the challenging energy and power targets stated by the DOE for power-assist HEVs are met, and at T > 0 °C the standards for the 12V-TSS, 42V-FSS and TPA 2s-pulse applications are satisfied, provided the ratio w_module/w_SC = 2 is accomplished, which, however, is a very demanding condition. Finally, directions for further advances in IL-based AEDLC performance were identified. In particular, given that the main contribution to the ESR is the electrode charging resistance, which in turn is affected by the ionic resistance in the pores and modulated by pore length, pore geometry is a key parameter in carbon design: not only does it define the carbon surface, it can also differentially "amplify" the effect of IL conductivity on the electrode charging-discharging process and, thus, on the supercapacitor time constant.
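The two design levers of the AEDLC described above can be expressed numerically: the unequal carbon loadings follow from charge balance at the two electrodes (m+ c+ ΔV+ = m- c- ΔV-), and the gain from a wider ESW follows from E = 1/2 C Vmax². The sketch below shows these two relations with standard unit conversions; the function names and the example figures in the test are illustrative, not values reported by the thesis.

```python
def electrode_mass_ratio(c_pos, c_neg, dv_pos, dv_neg):
    """Charge balance at full charge: m_pos*c_pos*dv_pos = m_neg*c_neg*dv_neg,
    hence m_pos/m_neg = (c_neg*dv_neg)/(c_pos*dv_pos).
    c_*: electrode specific capacitances (F/g); dv_*: potential excursions (V)."""
    return (c_neg * dv_neg) / (c_pos * dv_pos)

def specific_energy_wh_kg(c_cell_f, v_max, mass_kg):
    """Deliverable energy E = 1/2 * C * Vmax^2 (joules), converted to Wh
    (1 Wh = 3600 J) and normalized by the reference mass in kg."""
    return 0.5 * c_cell_f * v_max ** 2 / 3600.0 / mass_kg
```

Because energy scales with Vmax squared, raising the cell voltage from the ~2.7 V of conventional organic electrolytes toward the 3.8-3.9 V reached here roughly doubles the specific energy for the same cell capacitance and mass, which is the central motivation for the IL-based asymmetric design.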

Relevância:

10.00%

Publicador:

Resumo:

The purpose of this Thesis is to develop a robust and powerful method to classify galaxies from large surveys, in order to establish and confirm the connections between the principal observational parameters of galaxies (spectral features, colours, morphological indices), and to help unveil the evolution of these parameters from $z \sim 1$ to the local Universe. Within the framework of the zCOSMOS-bright survey, and making use of its large database of objects ($\sim 10\,000$ galaxies in the redshift range $0 < z \lesssim 1.2$) and its great reliability in redshift and spectral-property determination, we first adopt and extend the \emph{classification cube method}, as developed by Mignoli et al. (2009), exploiting the bimodal properties of galaxies (spectral, photometric and morphological) separately and then combining these three subclassifications. We use this classification method as a test for a newly devised statistical classification, based on Principal Component Analysis and the Unsupervised Fuzzy Partition clustering method (PCA+UFP), which is able to classify the galaxy population by exploiting its natural global bimodality, considering up to 8 different properties simultaneously. The PCA+UFP analysis is a very powerful and robust tool to probe the nature and the evolution of galaxies in a survey. It allows one to define the classification of galaxies with smaller uncertainties, adding the flexibility to be adapted to different parameters: being a fuzzy classification, it avoids the problems of a hard classification, such as the classification cube presented in the first part of the article. The PCA+UFP method can easily be applied to different datasets: it does not rely on the nature of the data and for this reason it can be successfully employed with other observables (magnitudes, colours) or derived properties (masses, luminosities, SFRs, etc.). The agreement between the two classification cluster definitions is very high.
``Early'' and ``late'' type galaxies are well defined by their spectral, photometric and morphological properties, both when considering these properties separately and then combining the classifications (classification cube) and when treating them as a whole (PCA+UFP cluster analysis). Differences arise in the definition of outliers: the classification cube is much more sensitive to single measurement errors or misclassifications in one property than the PCA+UFP cluster analysis, in which errors are ``averaged out'' during the process. This method allowed us to observe the \emph{downsizing} effect taking place in the PC spaces: the migration from the blue cloud towards the red clump happens at higher redshifts for galaxies of larger mass. The determination of the transition mass $M_{\mathrm{cross}}$ is in good agreement with other values in the literature.
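The PCA+UFP pipeline can be sketched in a few lines of NumPy: project the multi-property data onto its principal components, then run a fuzzy clustering so that each galaxy receives a membership degree in each class rather than a hard label. This is an illustrative stand-in only: fuzzy c-means is used here in place of the actual UFP algorithm, and the function names and synthetic "blue cloud / red clump" data are hypothetical.

```python
import numpy as np

def pca(X, n_components=2):
    """Project the (n_objects, n_properties) matrix X onto its first
    principal components via SVD of the mean-centred data."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Fuzzy c-means: alternate between (i) fuzzy-weighted cluster
    centres and (ii) memberships u_ik proportional to d_ik^(-2/(m-1)).
    Returns memberships (rows sum to 1) and the cluster centres."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
    return u, centers
```

The soft memberships are what make outliers benign: an object with a discordant measurement in one property ends up with intermediate membership values instead of being forced into the wrong class, which mirrors the "averaging out" of errors noted above.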

Relevância:

10.00%

Publicador:

Resumo:

The subject of this Ph.D. research thesis is the development and application of multiplexed analytical methods based on bioluminescent whole-cell biosensors. One of the main goals of analytical chemistry is multianalyte testing, in which two or more analytes are measured simultaneously in a single assay. The advantages of multianalyte testing are work simplification, high throughput, and a reduction in the overall cost per test. The availability of multiplexed portable analytical systems is of particular interest for on-field analysis of clinical, environmental or food samples, as well as for the drug-discovery process. To allow highly sensitive and selective analysis, these devices should combine biospecific molecular recognition with ultrasensitive detection systems. To address the current need for rapid, highly sensitive and inexpensive devices able to extract more data from each sample, genetically engineered whole-cell biosensors serving as the biospecific recognition element were combined with ultrasensitive bioluminescence detection techniques. Genetically engineered cell-based sensing systems were obtained by introducing into bacterial, yeast or mammalian cells a vector expressing a reporter protein whose expression is controlled by regulatory proteins and promoter sequences. The regulatory protein is able to recognize the presence of the analyte (e.g., compounds with hormone-like activity, heavy metals…) and to consequently activate the expression of the reporter protein, which can be readily measured and directly related to the bioavailable concentration of the analyte in the sample. Bioluminescence represents the ideal detection principle for miniaturized analytical devices and multiplexed assays thanks to its high detectability in small sample volumes, allowing accurate signal localization and quantification.
The first chapter of this dissertation discusses the development of improved bioluminescent proteins emitting at different wavelengths, with increased thermostability, enhanced emission decay kinetics and better spectral resolution. The second chapter is mainly focused on the use of these proteins in the development of whole-cell-based assays with improved analytical performance. In particular, since the main drawback of whole-cell biosensors is the high variability of their analyte-specific response, mainly caused by variations in cell viability due to nonspecific effects of the sample matrix, an additional bioluminescent reporter was introduced to correct the analytical response, thus increasing the robustness of the bioassays. The feasibility of combining two or more bioluminescent proteins to obtain biosensors with internal signal correction, or for the simultaneous detection of multiple analytes, was demonstrated by developing a dual-reporter yeast-based biosensor for androgenic activity measurement and a triple-reporter mammalian-cell-based biosensor for the simultaneous monitoring of the activation of two CYP450 enzymes involved in cholesterol degradation, using two spectrally resolved intracellular luciferases and a secreted luciferase as a control for cell viability. The third chapter presents the development of a portable multianalyte detection system. In order to develop a portable system that can be used outside the laboratory environment, even by non-skilled personnel, cells were immobilized in a new biocompatible and transparent polymeric matrix within a modified clear-bottom black 384-well microtiter plate to obtain a bioluminescent cell array. The cell array was placed in contact with a portable charge-coupled device (CCD) light sensor able to localize and quantify the luminescent signal produced by the different bioluminescent whole-cell biosensors.
This multiplexed biosensing platform containing whole-cell biosensors was successfully used to measure the overall toxicity of a given sample, as well as to obtain dose-response curves for heavy metals and to detect hormonal activity in clinical samples (PCT/IB2010/050625: "Portable device based on immobilized cells for the detection of analytes." Michelini E, Roda A, Dolci LS, Mezzanotte L, Cevenini L, 2010). At the end of the dissertation, future development steps are also discussed towards a point-of-care testing (POCT) device that combines portability, minimal sample pre-treatment and highly sensitive multiplexed assays within a short assay time. In this POCT perspective, field-flow fractionation (FFF) techniques, in particular the gravitational variant (GrFFF), which exploits the Earth's gravitational field to structure the separation, have been investigated for cell fractionation, characterization and isolation. Thanks to the simplicity of its equipment, amenable to miniaturization, the GrFFF technique appears particularly suited for implementation in POCT devices and may be used as an integrated pre-analytical module, applied directly to raw samples to drive the target analytes to the modules where biospecific recognition reactions based on ultrasensitive bioluminescence detection occur, increasing the overall analytical output.
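The internal signal correction described above reduces to a simple ratio: the analyte-responsive reporter signal is divided by the constitutive viability-reporter signal from the same well, then referred to the blank, so that nonspecific matrix effects that depress both reporters cancel out. The sketch below illustrates this arithmetic; the function name and the relative-light-unit (RLU) figures in the test are hypothetical examples, not data from the thesis.

```python
def corrected_response(sample_analyte_rlu, sample_control_rlu,
                       blank_analyte_rlu, blank_control_rlu):
    """Viability-corrected fold induction: each analyte-reporter signal
    is normalized by the constitutive viability-reporter signal measured
    in the same well, then referred to the blank (analyte-free) ratio.
    RLU = relative light units."""
    sample_ratio = sample_analyte_rlu / sample_control_rlu
    blank_ratio = blank_analyte_rlu / blank_control_rlu
    return sample_ratio / blank_ratio
```

A toxic matrix that, say, halves cell viability lowers both luciferase signals by the same factor, so the corrected fold induction is unchanged, which is precisely the robustness gain motivating the dual- and triple-reporter designs.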

Relevância:

10.00%

Publicador:

Resumo:

This work presents exact, hybrid algorithms for mixed resource allocation and scheduling problems; in general terms, these consist in assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability, but are mainly motivated by applications in the field of embedded system design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance and low energy consumption, but the programmer must be able to effectively exploit the platform parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform an optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part due to the complexity of integrated allocation and scheduling, which sets tough challenges for ``pure'' combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods, while compensating for their weaknesses. In this work, we first consider an allocation and scheduling problem over the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation and heuristic-guided search.
Next, we face allocation and scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to effectively deal with the introduced stochastic elements. Finally, we address allocation and scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account, and provide an efficient conflict-detection method. The proposed approaches achieve good results on practical-size problems, demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
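The core problem tackled above can be stated concretely on a toy instance: given task durations, precedence edges and a number of identical processors, find the allocation and schedule of minimal makespan. The brute-force sketch below enumerates every processing order consistent with the precedences together with every task-to-processor assignment (i.e. all semi-active schedules), which is exact but exponential; it is an illustrative baseline only, not one of the hybrid CP/OR methods the thesis develops.

```python
from itertools import permutations, product

def exact_makespan(durations, precedences, n_procs):
    """Exact minimal makespan for precedence-constrained tasks on
    n_procs identical processors, by exhaustive enumeration of all
    semi-active schedules. durations: {task: time}; precedences:
    list of (pred, succ) edges. Only viable for toy instances."""
    tasks = list(durations)
    best = float("inf")
    for order in permutations(tasks):
        pos = {t: i for i, t in enumerate(order)}
        if any(pos[p] > pos[s] for p, s in precedences):
            continue  # this order violates a precedence edge
        for assign in product(range(n_procs), repeat=len(tasks)):
            proc_free = [0.0] * n_procs  # next free instant per processor
            finish = {}
            for t in order:
                ready = max((finish[p] for p, s in precedences if s == t),
                            default=0.0)  # all predecessors done
                proc = assign[pos[t]]
                start = max(ready, proc_free[proc])
                finish[t] = start + durations[t]
                proc_free[proc] = finish[t]
            best = min(best, max(finish.values()))
    return best
```

The factorial-times-exponential search space of even this toy formulation makes the motivation for decomposition, cut generation and conflict-based tree search evident: an exact method must prune, not enumerate.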