932 results for Interoperability of Applications
Abstract:
Ubiquitous Computing promises seamless access to a wide range of applications and Internet-based services from anywhere, at any time, and using any device. In this scenario, new challenges for the practice of software development arise: applications and services must keep a coherent behavior and a proper appearance, and must adapt to a wide variety of contextual usage requirements and hardware constraints. In particular, due to its interactive nature, the interface content of Web applications must adapt to a large diversity of devices and contexts. In order to overcome such obstacles, this work introduces an innovative methodology for content adaptation of Web 2.0 interfaces. The basis of our work is to combine static adaptation - the implementation of static Web interfaces - and dynamic adaptation - the alteration, at execution time, of static interfaces so as to adapt them to different contexts of use. In this hybrid fashion, our methodology benefits from the advantages of both adaptation strategies, static and dynamic. Along this line, we designed and implemented UbiCon, a framework on which we tested our concepts through a case study and a development experiment. Our results show that the hybrid methodology over UbiCon leads to broader and more accessible interfaces, and to faster and less costly software development. We believe that the UbiCon hybrid methodology can foster more efficient and accurate interface engineering in industry and academia.
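To make the hybrid idea concrete, a minimal sketch follows, written in Python only for illustration: a statically authored interface description is adjusted at execution time according to the usage context. All names and fields here (STATIC_INTERFACE, adapt, screen_width, accessibility) are hypothetical and are not part of the UbiCon API.

```python
# Hypothetical illustration of hybrid (static + dynamic) interface adaptation.
# Names and fields are invented for this sketch; they are not the UbiCon API.

STATIC_INTERFACE = {          # static adaptation: authored once, at design time
    "layout": "two-column",
    "font_size": 14,
    "widgets": ["menu", "search", "content", "footer"],
}

def adapt(static_ui: dict, context: dict) -> dict:
    """Dynamic adaptation: alter the static interface at execution time."""
    ui = dict(static_ui)
    if context.get("screen_width", 1024) < 480:        # small handheld device
        ui["layout"] = "single-column"
        ui["widgets"] = [w for w in ui["widgets"] if w != "footer"]
    if context.get("accessibility") == "low-vision":    # contextual usage requirement
        ui["font_size"] = max(ui["font_size"], 20)
    return ui

print(adapt(STATIC_INTERFACE, {"screen_width": 360, "accessibility": "low-vision"}))
```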
Abstract:
Fractal theory presents a large number of applications to image and signal analysis. Although the fractal dimension can be used as an image object descriptor, a multiscale approach, such as the multiscale fractal dimension (MFD), increases the amount of information extracted from an object. MFD provides a curve which describes object complexity along the scale. However, this curve presents much redundant information, which could be discarded without loss in performance. Thus, it is necessary to use a descriptor technique to analyze this curve and to reduce the dimensionality of the data by selecting its meaningful descriptors. This paper presents a comparative study among different techniques for MFD descriptor generation. It compares the use of well-known and state-of-the-art descriptors, such as Fourier, Wavelet, Polynomial Approximation (PA), Functional Data Analysis (FDA), Principal Component Analysis (PCA), Symbolic Aggregate Approximation (SAX), kernel PCA, Independent Component Analysis (ICA), and geometrical and statistical features. The descriptors are evaluated in a classification experiment using Linear Discriminant Analysis over the descriptors computed from MFD curves from two data sets: generic shapes and rotated fish contours. Results indicate that PCA, FDA, PA and Wavelet Approximation provide the best MFD descriptors for recognition and classification tasks.
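As an illustration of the descriptor-generation and evaluation protocol described above, the sketch below (assuming NumPy and scikit-learn; the MFD curves are simulated here, not taken from the paper's data sets) reduces a set of curves with PCA and classifies the resulting descriptors with Linear Discriminant Analysis.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Simulated stand-in for MFD curves: each row is one object's multiscale
# fractal dimension sampled at 100 scales (two synthetic shape classes).
rng = np.random.default_rng(0)
n_per_class, n_scales = 50, 100
scales = np.linspace(0, 1, n_scales)
class_a = 1.2 + 0.3 * np.exp(-3 * scales) + 0.02 * rng.standard_normal((n_per_class, n_scales))
class_b = 1.2 + 0.3 * np.exp(-6 * scales) + 0.02 * rng.standard_normal((n_per_class, n_scales))
X = np.vstack([class_a, class_b])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Descriptor generation: project the redundant MFD curves onto a few principal
# components, then evaluate the descriptors with LDA, as in the paper's protocol.
descriptors = PCA(n_components=5).fit_transform(X)
accuracy = cross_val_score(LinearDiscriminantAnalysis(), descriptors, y, cv=5).mean()
print("5-fold cross-validated accuracy:", round(float(accuracy), 3))
```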
Abstract:
[ES] The current emergence of smartphones equipped with a variety of sensors and native tools opens up the possibility of creating a wide range of applications to improve the lives of people with disabilities. This project aims to meet the following objectives: to explore the different possibilities offered by the Android platform for implementing human-machine interaction methods adapted to people with visual impairment; to identify the problems affecting visually impaired people in the social and healthcare domain; and to develop an application of a social nature that helps improve the quality of life of these people. As a result of this work, a software application called LeeMed has been developed: an app for the Android platform, aimed at people with visual impairment, for consulting medicine package leaflets through multiple human interfaces. The work has addressed three types of interface: oral (voice commands), gestural, and the conventional menu-and-options GUI.
Abstract:
Doctoral Programme: Advanced Telecommunication Engineering.
Abstract:
[EN] This presentation will give examples of how multi-parameter platforms have been used in a variety of applications, ranging from shallow coastal on-line observatories down to measurements in the deepest ocean trenches. Focus will be on projects in which optode technology (primarily for CO2 and O2) has served to study different aspects of the carbon system, including primary production/consumption, air-sea exchange, leakage detection from underwater storage of CO2, and measurements from moving platforms such as gliders and ferries. The performance of recently developed pH optodes will also be presented.
Abstract:
Phenol and cresols represent a good example of primary chemical building blocks, of which 2.8 million tons are currently produced in Europe each year. At present, these primary phenolic building blocks are produced by refining processes from fossil hydrocarbons: 5% of the worldwide production comes from coal (which contains 0.2% of phenols) through the distillation of the tar residue after the production of coke, while 95% of the current world production of phenol is obtained by the distillation and cracking of crude oil. In nature, phenolic compounds are present in terrestrial higher plants and ferns in several different chemical structures, while they are essentially absent in lower organisms and in animals. Biomass (which contains 3-8% of phenols) represents a substantial source of secondary chemical building blocks that is presently underexploited. These phenolic derivatives are currently used in tens of thousands of tons to produce high-value products such as food additives and flavours (e.g. vanillin), fine chemicals (e.g. non-steroidal anti-inflammatory drugs such as ibuprofen or flurbiprofen) and polymers (e.g. poly p-vinylphenol, a photosensitive polymer for electronic and optoelectronic applications). European agrifood waste represents a low-cost, abundant raw material (250 million tons per year) which does not subtract land use and processing resources from necessary sustainable food production. The class of phenolic compounds essentially comprises simple phenols, phenolic acids, hydroxycinnamic acid derivatives, flavonoids and lignans. As in the case of coke production, the removal of the phenolic content from biomass also upgrades the residual biomass. Focusing on the phenolic component of agrifood wastes, huge processing and marketing opportunities open up, since phenols are used as chemical intermediates for a large number of applications, ranging from pharmaceuticals and agricultural chemicals to food ingredients. Following this approach, we developed a biorefining process to recover the phenolic fraction of wheat bran, based on commercial enzymatic biocatalysts in a completely water-based process and on polymeric resins, with the aim of substituting secondary chemical building blocks with the same compounds naturally present in biomass. We characterized several industrial enzymatic products for their ability to hydrolyze the different molecular features present in wheat bran cell wall structures, focusing on the hydrolysis of polysaccharide chains and phenolic cross-links. These industrial biocatalysts were tested on wheat bran, and the optimized process allowed up to 60% of the treated matter to be liquefied. The enzymatic treatment was also able to solubilize up to 30% of the alkali-extractable ferulic acid. An extraction process for the phenolic fraction of the hydrolyzed wheat bran, based on adsorption/desorption on the styrene-divinylbenzene weak anion-exchange resin Amberlite IRA 95, was developed. The efficiency of the resin was tested on different model systems containing ferulic acid, and the adsorption and desorption working parameters were optimized for the crude enzymatic wheat bran hydrolysate. The extraction process had an overall yield of 82% and yielded concentrated extracts containing up to 3000 ppm of ferulic acid. The crude enzymatic wheat bran hydrolysate and the concentrated extract were finally used as substrates in a bioconversion of ferulic acid into vanillin through resting-cell fermentation.
The bioconversion process gave vanillin yields of 60-70% within 5-6 hours of fermentation. Our findings are a first step towards demonstrating the economic feasibility of recovering biophenols from agrifood wastes through a whole-crop approach in a sustainable biorefining process.
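For orientation only, the stage figures quoted above can be chained into a rough overall recovery estimate. The sketch below simply multiplies the reported yields and assumes they combine independently and multiplicatively, which the abstract does not state; it is not a result of the work.

```python
# Indicative chained-yield estimate built from the figures quoted above.
# Assumes the stage yields combine multiplicatively, which is a simplification.
enzymatic_solubilisation = 0.30   # fraction of alkali-extractable ferulic acid solubilized
resin_extraction         = 0.82   # overall yield of the adsorption/desorption step
bioconversion            = 0.65   # midpoint of the reported 60-70% vanillin yield

overall = enzymatic_solubilisation * resin_extraction * bioconversion
print(f"indicative overall ferulic acid-to-vanillin recovery: {overall:.1%}")  # about 16%
```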
Abstract:
[EN] Low-cost real-time depth cameras offer new sensing capabilities for a wide field of applications beyond the gaming world. Other active research scenarios, such as surveillance, can take advantage of the capabilities offered by this kind of sensor, which integrates depth and visual information. In this paper, we present a system that operates in a novel application context for these devices: challenging scenarios where illumination conditions can change suddenly. We focus on the people counting problem with re-identification and trajectory analysis.
Abstract:
Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting measurements of these waves to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for determining the 3D structure of the Earth's deep interior. Tomographic models obtained at the global and regional scales are an underlying tool for determining the geodynamical state of the Earth, showing evident correlation with other geophysical and geological characteristics. The global tomographic images of the Earth can be written as linear combinations of basis functions from a specifically chosen set, which defines the model parameterization. A number of different parameterizations are commonly seen in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. In this work we focus on this aspect, evaluating a new type of parameterization based on wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a, possibly infinite, series of sines and cosines, often referred to as a Fourier expansion. The big disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet Analysis (or the Wavelet Transform) is probably the most recent solution to overcome the shortcomings of Fourier analysis. The fundamental idea behind this analysis is to study the signal according to scale. Wavelets, in fact, are mathematical functions that cut up data into different frequency components and then study each component with a resolution matched to its scale, so they are especially useful in the analysis of non-stationary processes that contain multi-scale features, discontinuities and sharp spikes. Wavelets are essentially used in two ways when applied to the study of geophysical processes or signals: 1) as a basis for the representation or characterization of a process; 2) as an integration kernel for analysis, to extract information about the process. These two types of application of wavelets in the geophysical field are the object of study of this work. First, we use wavelets as a basis to represent and solve the tomographic inverse problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic models; we then apply it to real data, obtaining surface-wave phase velocity maps and evaluating its abilities by comparison with another type of parameterization (i.e., block parameterization). For the second type of wavelet application, we analyze the ability of the Continuous Wavelet Transform in spectral analysis, starting again with some synthetic tests to evaluate its sensitivity and capabilities, and then applying the same analysis to real data to obtain Local Correlation Maps between different models at the same depth or between different profiles of the same model.
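To make the Fourier-versus-wavelet argument concrete, the short sketch below (using NumPy and the PyWavelets package, which are illustrative tools rather than the software used in this work) analyzes a signal containing a localized discontinuity: the Fourier expansion identifies the dominant frequency but says nothing about where the jump occurs, while the finest-scale wavelet detail coefficients localize it.

```python
import numpy as np
import pywt  # PyWavelets

# Non-stationary test signal: a smooth oscillation plus a sharp local jump.
n = 512
t = np.linspace(0, 1, n, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t)
signal[300:] += 0.5                      # discontinuity near t = 0.59

# Fourier expansion: frequency resolution only, no positional resolution.
spectrum = np.abs(np.fft.rfft(signal))
print("dominant Fourier component:", int(np.argmax(spectrum[1:]) + 1), "cycles")

# Discrete wavelet transform: the finest detail coefficients localize the jump.
coeffs = pywt.wavedec(signal, "db4", level=4)
finest = coeffs[-1]
jump_position = int(np.argmax(np.abs(finest)) * n / len(finest))
print("largest wavelet detail coefficient near sample:", jump_position)
```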
Abstract:
Nanofiltration (NF) is a pressure-driven membrane process, intermediate between reverse osmosis and ultrafiltration. Commercially available polymeric membranes have been used in a wide range of applications, such as drinking water production, the process industry and wastewater treatment. For all applications requiring high stability and harsh washing procedures, inorganic membranes are preferred due to their high chemical inertia. Typically, γ-Al2O3 as well as TiO2 and ZrO2 selective layers are used; the latter show higher chemical stability over a wide range of pH and temperature. In this work, the experimental characterization of two different types of membrane has been performed in order to investigate permeation properties, separation performance and efficiency with aqueous solutions containing strong inorganic electrolytes. The influence of salt concentration and feed pH, as well as the role of concentration polarization and electrolyte type, on the membrane behavior is investigated. Experiments were performed testing a multi-layer structured NF membrane made of α-Al2O3, TiO2 and ZrO2, and a polymeric polyamide membrane supported on polysulfone, with binary aqueous solutions containing NaCl, Na2SO4 or CaCl2; the effect of salt composition and feed pH was studied on both flux and salt rejection. All the NF experimental data available for the two membranes were used to evaluate the volumetric membrane charge (X) corresponding to each operating condition investigated, through the Donnan Steric Pore Model and Dielectric Exclusion (DSPM&DE). The results obtained make it possible to understand the main phenomena underlying the different behaviors observed.
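For orientation, a minimal sketch of one ingredient mentioned above, the concentration-polarization correction: classical film theory relates the observed rejection (referred to the bulk feed concentration) to the real rejection at the membrane wall through the permeate flux and the mass-transfer coefficient. This is a textbook relation used alongside models such as DSPM&DE, not code from this work, and the numbers are purely illustrative.

```python
import math

def real_rejection(observed_rejection: float, flux: float, k_mass: float) -> float:
    """Film-theory correction: convert the observed rejection (based on the
    bulk feed concentration) into the real rejection at the membrane wall.

    flux   : permeate volume flux Jv, m/s
    k_mass : mass-transfer coefficient of the polarization layer, m/s
    """
    ratio = (1.0 - observed_rejection) / observed_rejection * math.exp(-flux / k_mass)
    return 1.0 / (1.0 + ratio)

# Purely illustrative numbers, not measurements from this work:
Jv = 2.0e-5   # m/s, i.e. about 72 L m^-2 h^-1
k  = 4.0e-5   # m/s
print(f"observed rejection 0.80 -> real rejection {real_rejection(0.80, Jv, k):.3f}")
```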
Abstract:
Recent progress in microelectronics and wireless communications has enabled the development of low-cost, low-power, multifunctional sensors, which has allowed the birth of a new type of network, the wireless sensor network (WSN). The main features of such networks are: the nodes can be positioned randomly over a given field with high density; each node operates both as a sensor (for the collection of environmental data) and as a transceiver (for the transmission of information towards the data retrieval point); and the nodes have limited energy resources. The use of wireless communications and the small size of the nodes make this type of network suitable for a large number of applications. For example, sensor nodes can be used to monitor a high-risk region, such as the area near a volcano; in a hospital they could be used to monitor the physical conditions of patients. For each of these possible application scenarios, it is necessary to guarantee a trade-off between energy consumption and communication reliability. The thesis investigates the use of WSNs in two possible scenarios and for each of them proposes a solution to the related problems that takes this trade-off into account. The first scenario considers a network with a high number of nodes, deployed in a given geographical area without detailed planning, that have to transmit data toward a coordinator node, named sink, which we assume to be located onboard an unmanned aerial vehicle (UAV). This is a practical example of reachback communication, characterized by a high density of nodes that have to transmit data reliably and efficiently towards a far receiver. It is assumed that each node transmits a common shared message directly to the receiver onboard the UAV whenever it receives a broadcast message (triggered, for example, by the vehicle), and that the communication channels between the local nodes and the receiver are subject to fading and noise. The receiver onboard the UAV must be able to fuse the weak and noisy signals in a coherent way to receive the data reliably. A cooperative diversity concept is proposed as an effective solution to the reachback problem. In particular, a spread-spectrum (SS) transmission scheme is considered in conjunction with a fusion center that can exploit cooperative diversity without requiring stringent synchronization between nodes. The idea consists of the simultaneous transmission of the common message by the nodes and Rake reception at the fusion center. The proposed solution is mainly motivated by two goals: the necessity of having simple nodes (to this aim we move the computational complexity to the receiver onboard the UAV), and the importance of guaranteeing high levels of energy efficiency, thus increasing the network lifetime. The proposed scheme is analyzed in order to better understand the effectiveness of the approach. The performance metrics considered are both the theoretical limit on the maximum amount of data that can be collected by the receiver and the error probability with a given modulation scheme. Since we deal with a WSN, both of these metrics are evaluated taking into consideration the energy efficiency of the network. The second scenario considers the use of a chain network for the detection of fires, using nodes that have the double function of sensors and routers. The first function is the monitoring of a temperature parameter, which allows a local binary decision of target (fire) absent/present to be taken.
The second function is that each node receives the decision made by the previous node of the chain, compares it with the decision derived from its own observation of the phenomenon, and transmits the final result to the next node. The chain ends at the sink node, which transmits the received decision to the user. In this network the goals are to limit the throughput on each sensor-to-sensor link and to minimize the probability of error at the last stage of the chain. This is a typical distributed detection scenario. To obtain good performance it is necessary to define, for each node, fusion rules that summarize the local observation and the decisions of the previous nodes into a final decision that is transmitted to the next node. WSNs have also been studied from a practical point of view, describing both the main characteristics of the IEEE 802.15.4 standard and two commercial WSN platforms. Using a commercial WSN platform, an agricultural application was realized and tested in a six-month field experiment.
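A toy Monte Carlo sketch of the chain-detection setting described above is given below. The temperature statistics, thresholds and the fusion rule (each node overrides the incoming decision only when its own reading is conclusive) are illustrative choices, not the rules designed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def chain_error_rate(fire, n_nodes=10, trials=20000,
                     mean_fire=60.0, mean_clear=25.0, sigma=15.0,
                     t_low=35.0, t_high=50.0):
    """Monte Carlo estimate of the error probability at the end of the chain.

    Illustrative fusion rule: each node takes a noisy temperature reading and
    overrides the incoming decision only when the reading is conclusive
    (above t_high -> fire, below t_low -> no fire); otherwise it forwards the
    decision received from the previous node unchanged.
    """
    truth = 1 if fire else 0
    mean = mean_fire if fire else mean_clear
    errors = 0
    for _ in range(trials):
        decision = 0                      # the first node starts from "no fire"
        for _ in range(n_nodes):
            reading = rng.normal(mean, sigma)
            if reading > t_high:
                decision = 1
            elif reading < t_low:
                decision = 0
            # inconclusive reading: forward the incoming decision as-is
        errors += int(decision != truth)
    return errors / trials

print("P(missed fire) :", chain_error_rate(fire=True))
print("P(false alarm) :", chain_error_rate(fire=False))
```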
Abstract:
The electromagnetic spectrum can be seen as a resource for the designer, as well as for the manufacturer, from two complementary points of view: first, because it is a good in great demand by many different kinds of applications; second because, despite its scarce availability, it may be advantageous to use more spectrum than strictly necessary. This is the case of spread-spectrum systems, i.e. systems in which the transmitted signal is spread over a wide frequency band, much wider, in fact, than the minimum bandwidth required to transmit the information being sent. Part I of this dissertation deals with Spread-Spectrum Clock Generators (SSCG) aimed at reducing the Electro Magnetic Interference (EMI) of clock signals in integrated circuit (IC) design. In particular, the modulation of the clock and the consequent spreading of its spectrum are obtained through a random modulating signal produced by a chaotic map, i.e. a discrete-time dynamical system showing chaotic behavior. The advantages offered by this kind of modulation are highlighted. Three different prototypes of chaos-based SSCG are presented in all their aspects: design, simulation, and post-fabrication measurements. The third one, operating at 3 GHz, is intended for Serial ATA, the de facto standard for fast data transmission to and from hard disk drives. The most extreme example of spread-spectrum signalling is the emerging ultra-wideband (UWB) technology, which proposes the use of large sections of the radio spectrum at low amplitudes to transmit high-bandwidth digital data. In Part II of the dissertation, two UWB applications are presented, both dealing with the advantages as well as the challenges of a wide-band system, namely: a chaos-based sequence generation method for reducing Multiple Access Interference (MAI) in Direct Sequence UWB Wireless Sensor Networks (WSNs), and the design and simulation of a Low-Noise Amplifier (LNA) for impulse-radio UWB. This latter topic was studied during a study-abroad period in collaboration with Delft University of Technology, Delft, the Netherlands.
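A minimal sketch of the chaos-based ingredient common to both parts: iterating a discrete-time chaotic map yields an aperiodic sequence that can serve as a random-like modulating signal for an SSCG or, once quantized, as a spreading chip sequence. The tent map used below is a classic textbook example, not necessarily the map implemented in the prototypes.

```python
import numpy as np

def tent_map_sequence(x0: float, n: int, mu: float = 1.99) -> np.ndarray:
    """Iterate the tent map x_{k+1} = mu * min(x_k, 1 - x_k), a simple
    discrete-time dynamical system with chaotic behavior for mu close to 2."""
    x = np.empty(n)
    x[0] = x0
    for k in range(1, n):
        x[k] = mu * min(x[k - 1], 1.0 - x[k - 1])
    return x

# An aperiodic modulating signal, e.g. to spread a clock's spectrum ...
modulating = tent_map_sequence(x0=0.1234, n=1000)

# ... or, quantized to +/-1, a chip sequence for direct-sequence spreading.
chips = np.where(modulating > 0.5, 1, -1)
print("first 16 chips:", chips[:16])
print("autocorrelation at lag 1:",
      round(float(np.corrcoef(chips[:-1], chips[1:])[0, 1]), 3))
```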
Abstract:
Over the last few decades, organotin compounds have found a wide variety of applications. Indeed, they are used successfully in antifouling paints, as PVC stabilizers and ion carriers, and as homogeneous catalysts. In this context, it has been shown that the Lewis acidity of the metal centre allows these compounds to promote the reaction between alcohols and esters. However, their use is now limited by their well-known toxicity; moreover, they are difficult to remove from the reaction mixture. This problem can be overcome by grafting the organotin derivative onto a polymeric cross-linked support. In this way the resulting heterogeneous catalyst can be easily filtered off from the reaction mixture, thus creating so-called "clean organotin reagents" and avoiding both the presence of toxic organotin residues in solution and the release of tin into the environment. In the last few years, several insoluble polystyrene resins containing triorganotin carboxylate moieties have been synthesized with the aim of improving their catalytic activity: in particular, we have investigated and suitably modified their chemical structure in order to optimize the accessibility of the metal centre and its Lewis acidity. Recently, we replaced the polymeric matrix with an inorganic one, in order to have a relatively cheaper and more readily available support. For this purpose an ordered mesoporous silica characterized by 2D-hexagonal pores, named MCM-41, and an amorphous silica have been selected. In the present work, two kinds of MCM-41 silica containing the triorganotin carboxylate moiety have been synthesized starting from a commercial Cab-O-Sil M5 silica. These catalysts have two different spacers between the core and the tin-carboxylate moiety, namely a polyaliphatic chain (compound FT29) or a polyethereal one (compound FT6), with the aim of improving the interaction between the catalyst and the reacting ester. Three catalysts supported on an amorphous silica have also been synthesized: their structure is the same as that of silica FT29, i.e. a compound having a polyaliphatic chain, but they have different percentages of organotin derivative grafted onto the silica surface (10, 30 and 50% for silicas MB9, SU27 and SU28, respectively). The performance of the above silicas as heterogeneous catalysts in transesterification reactions has been tested in a model reaction between ethyl acetate and 1-octanol, a primary alcohol sensitive to the reaction conditions. The alcohol conversion was assessed by gas chromatography, determining the relative amounts of transesterified product and starting alcohol at established time intervals.
Abstract:
This work presents exact, hybrid algorithms for mixed resource Allocation and Scheduling problems; in general terms, these consist in assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability, but are mainly motivated by applications in the field of Embedded System Design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance and low energy consumption, but the programmer must be able to effectively exploit the platform parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform an optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part explained by the complexity of integrated allocation and scheduling, which sets tough challenges for "pure" combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods while compensating for their weaknesses. In this work, we first consider an Allocation and Scheduling problem on the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation and heuristic-guided search. Next, we face the Allocation and Scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to effectively deal with the introduced stochastic elements. Finally, we address Allocation and Scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict detection method. The proposed approaches achieve good results on practical-size problems, thus demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
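To make the problem statement concrete, here is a stripped-down sketch of allocation and scheduling of precedence-connected tasks on a finite pool of identical processors. It uses a plain greedy list scheduler on a toy instance; the thesis develops exact hybrid CP/OR methods, which this heuristic does not represent.

```python
# Toy instance: five precedence-connected tasks on two identical processors.
durations = {"a": 3, "b": 2, "c": 4, "d": 1, "e": 2}
predecessors = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"], "e": ["c"]}
n_processors = 2

finish = {}                        # task -> completion time
proc_free = [0] * n_processors     # earliest free instant of each processor
pending = set(durations)

while pending:
    # tasks whose predecessors have all completed
    ready = [t for t in pending if all(p in finish for p in predecessors[t])]
    # greedy choice: the ready task whose predecessors finish earliest
    task = min(ready, key=lambda t: max((finish[p] for p in predecessors[t]), default=0))
    earliest_start = max((finish[p] for p in predecessors[task]), default=0)
    proc = min(range(n_processors), key=lambda i: proc_free[i])
    start = max(earliest_start, proc_free[proc])
    finish[task] = start + durations[task]
    proc_free[proc] = finish[task]
    pending.remove(task)
    print(f"task {task}: processor {proc}, start {start}, end {finish[task]}")

print("makespan:", max(finish.values()))
```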
Abstract:
In the past decade, block copolymers (BCPs) have attracted increasing scientific and technological interest because of their inherent capability to spontaneously self-assemble into ordered arrays of nanostructures. The importance of nanostructures in a number of applications has fostered the need for well-defined, complex macromolecular architectures. In this thesis, the influence of macromolecular architecture on the bulk morphologies of novel linear-hyperbranched and linear brush-like diblock copolymer structures is investigated. An innovative, generally applicable strategy for the preparation of these defined diblock copolymers, consisting of linear polystyrene and branched polycarbosilane blocks, is demonstrated. Furthermore, complete characterization and solid-state morphological studies are provided. Finally, the concept is extended to linear-hyperbranched and linear brush-like polyalkoxysilanes. A shift of the classical phase boundaries to higher PS weight fractions, as well as the appearance of new morphologies, confirms the dramatic effect that polymer topology has on the morphology of BCPs.
Abstract:
Hydrogen peroxide (H2O2) is a powerful oxidant which is commonly used in a wide range of industrial applications, and several methods for its quantification have been developed. Among them, electrochemical methods exploit the ability of some hexacyanoferrates (such as Prussian Blue) to detect H2O2 at potentials close to 0.0 V (vs. SCE), avoiding the secondary reactions that are likely to occur at large overpotentials. This electrocatalytic behaviour makes hexacyanoferrates excellent redox mediators. When deposited as thin films on electrode surfaces, they can be employed in the fabrication of sensors and biosensors, normally operated in solutions at pH values close to physiological ones. As hexacyanoferrates show limited stability in solutions that are not strongly acidic, it is necessary to improve the configuration of the modified electrodes to increase the stability of the films. In this thesis work, organic conducting polymers were used to fabricate composite films with Prussian Blue (PB) to be electrodeposited on Pt surfaces, in order to increase their pH stability. Different electrode configurations and different methods of synthesis of both components were tested, and for each one a possible increase in the operational stability of Prussian Blue was checked for. Good results were obtained for the polymer 3,3''-didodecyl-2,2':5',2''-terthiophene (poly(3,3''-DDTT)), whose presence created a favourable microenvironment for the electrodeposition of Prussian Blue. The electrochemical behaviour of the modified electrodes was studied in both aqueous and organic solutions. Poly(3,3''-DDTT) showed no response in aqueous solution in the potential range where PB is electroactive, so in buffered aqueous solution it was possible to characterize the composite material focusing only on the redox behaviour of PB. A combined effect of the anion and cation of the supporting electrolyte was noticed. The response of Pt electrodes modified with films of the PB/poly(3,3''-DDTT) composite was evaluated for the determination of H2O2, and their performance was found to be better than that of PB alone. It can be concluded that poly(3,3''-DDTT) plays a key role in the stabilization of Prussian Blue, also providing a wider linear range for the electrocatalytic response to H2O2.