864 results for Component-based systems
Abstract:
This paper presents a methodology for developing an advanced communications system for the Deaf in a new domain. The methodology is a user-centred design approach consisting of four main steps: requirement analysis, parallel corpus generation, technology adaptation to the new domain, and, finally, system evaluation. During the requirement analysis, both user and technical requirements are evaluated and defined. To generate the parallel corpus, Spanish sentences in the new domain are collected and translated into LSE (Lengua de Signos Española: Spanish Sign Language), represented both by glosses and by video recordings. This corpus is used to adapt the two main modules of the advanced communications system to the new domain: the spoken-Spanish-to-LSE translation module and the Spanish-generation-from-LSE module. The main resources to be generated are the vocabularies for both languages (Spanish words and signs) and the knowledge needed to translate in both directions. Finally, a field evaluation is carried out in which deaf people use the advanced communications system to interact with hearing people in several scenarios. For this evaluation, the paper proposes several objective and subjective performance measurements. The new domain considered here is dialogues at a hotel reception desk. Using this methodology, the system was developed in several months and achieved very good performance: good translation rates (10% Sign Error Rate) with short processing times, allowing face-to-face dialogues.
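The Sign Error Rate quoted above is not defined in the abstract; assuming the conventional definition used for word error rates, it is the edit distance between the produced gloss sequence and the reference, normalized by the reference length:

\[
\mathrm{SER} = \frac{S + D + I}{N},
\]

where S, D, and I are the numbers of substituted, deleted, and inserted signs and N is the number of signs in the reference. A 10% SER therefore corresponds to roughly one sign error per ten reference signs.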
Abstract:
Determining with good accuracy the position of a mobile terminal immersed in an indoor environment (shopping centres, office buildings, airports, stations, tunnels, etc.) is the cornerstone on which a large number of applications and services rest. Many of those services are already available in outdoor environments, although indoor environments also lend themselves to services specific to them. Their number, however, could be significantly larger than it currently is if a costly infrastructure were not needed to carry out the positioning with the precision appropriate to each hypothetical service, or, equally, if that infrastructure could also serve purposes other than positioning. Being able to use the same infrastructure for other purposes would mean either that it is already present at the different locations because it was previously deployed for those other uses, or that its deployment is easier to justify because the cost of that operation yields a greater usability return for whoever undertakes it. Wireless radio-frequency communication technologies already in use for voice and data (cellular, WLAN, etc.) meet this requirement and would therefore foster the growth of positioning-based applications and services if they could be employed for that purpose. However, determining position with an adequate level of accuracy using these technologies remains a major challenge today. This work aims to provide significant advances in this field. It first presents a study of the main positioning algorithms and auxiliary techniques applicable to indoor environments. The review focuses on those suitable both for latest-generation mobile technologies and for WLAN environments, in order to highlight the advantages and drawbacks of each algorithm, with their applicability to 3G and 4G mobile networks (especially LTE femtocells and small cells) and to WLAN deployments as the final motivation, and always bearing in mind that the ultimate goal is their use indoors. The main conclusion of this review is that triangulation techniques, commonly employed for outdoor localization, prove useless in indoor environments owing to adverse effects inherent to such environments, such as the loss of line of sight or multipath propagation. Fingerprinting methods, which compare the signal-strength values received by a mobile terminal at positioning time against the values stored in a radio map of received powers built during an initial calibration phase, emerge as the best option for indoor scenarios. These systems, however, suffer from other problems, notably the considerable effort required to set them up and the variability of the radio channel.
To address these problems, this work presents two original contributions for improving fingerprinting-based systems. The first contribution describes a simple method for determining the basic dimensioning of the system: the number of samples needed to build the reference radio map and the minimum number of radio-frequency emitters to be deployed, starting from initial requirements on the positioning error and precision together with the dimensions and physical characteristics of the environment. This provides initial guidelines for sizing the system and counters the negative effects on cost and on overall performance caused by an inefficient deployment of the radio-frequency emitters and of the fingerprint capture points. The second contribution increases the real-time accuracy of the system through a technique for automatic recalibration of the radio map. This technique uses the measurements continuously reported by a few static reference points, strategically distributed throughout the environment, to recalculate and update the power values stored in the radio map. An additional operational benefit of this technique is that it extends the period during which the system remains reliably usable, reducing how often the complete radio map must be recaptured. These improvements are directly applicable to indoor positioning mechanisms based on the wireless voice and data communications infrastructure. From there, they extend to location services (knowing where one is), monitoring (a third party knowing that location), and tracking (monitoring prolonged over time), since all of these rely on correct positioning for proper performance.
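To make the fingerprinting step concrete, a minimal sketch (not the thesis implementation; emitter names, coordinates, and RSSI values are invented) of matching an observed received-signal-strength vector against a calibrated radio map with a k-nearest-neighbour rule could look like this:

```python
import math

# Hypothetical radio map: calibration point -> RSSI (dBm) per emitter.
# In a real system these values come from the calibration phase.
RADIO_MAP = {
    (0.0, 0.0): {"ap1": -45, "ap2": -70, "ap3": -80},
    (5.0, 0.0): {"ap1": -60, "ap2": -55, "ap3": -75},
    (0.0, 5.0): {"ap1": -65, "ap2": -72, "ap3": -50},
    (5.0, 5.0): {"ap1": -75, "ap2": -58, "ap3": -52},
}

def estimate_position(observed, k=2, missing=-100.0):
    """Estimate (x, y) by averaging the k calibration points whose stored
    fingerprints are closest (Euclidean distance in signal space) to the
    observed RSSI vector."""
    def distance(fingerprint):
        aps = set(fingerprint) | set(observed)
        return math.sqrt(sum(
            (fingerprint.get(ap, missing) - observed.get(ap, missing)) ** 2
            for ap in aps))

    nearest = sorted(RADIO_MAP.items(), key=lambda item: distance(item[1]))[:k]
    x = sum(point[0] for point, _ in nearest) / k
    y = sum(point[1] for point, _ in nearest) / k
    return x, y

# Example: an observation taken somewhere between the first two points.
print(estimate_position({"ap1": -52, "ap2": -62, "ap3": -78}))
```

In this toy model, the second contribution described above would correspond to periodically adjusting the values stored in RADIO_MAP using the readings reported by a few fixed reference points, instead of repeating the full calibration.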
Abstract:
Background: Component-based diagnosis on multiplex platforms is widely used in food allergy but its clinical performance has not been evaluated in nut allergy. Objective: To assess the diagnostic performance of a commercial protein microarray in the determination of specific IgE (sIgE) in peanut, hazelnut, and walnut allergy. Methods: sIgE was measured in 36 peanut-allergic, 36 hazelnut-allergic, and 44 walnut-allergic patients by ISAC 112, and subsequently, sIgE against available components was determined by ImmunoCAP in patients with negative ISAC results. ImmunoCAP was also used to measure sIgE to Ara h 9, Cor a 8, and Jug r 3 in a subgroup of lipid transfer protein (LTP)-sensitized nut-allergic patients (positive skin prick test to LTP-enriched extract). sIgE levels by ImmunoCAP were compared with ISAC ranges. Results: Most peanut-, hazelnut-, and walnut-allergic patients were sensitized to the corresponding nut LTP (Ara h 9, 66.7%; Cor a 8, 80.5%; Jug r 3, 84%, respectively). However, ISAC did not detect sIgE in 33.3% of peanut-allergic patients, 13.9% of hazelnut-allergic patients, or 13.6% of walnut-allergic patients. sIgE determination by ImmunoCAP detected sensitization to Ara h 9, Cor a 8, and Jug r 3 in, respectively, 61.5% of peanut-allergic patients, 60% of hazelnut-allergic patients, and 88.3% of walnut-allergic patients with negative ISAC results. In the subgroup of peach LTP-sensitized patients, Ara h 9 sIgE was detected in more cases by ImmunoCAP than by ISAC (94.4% vs 72.2%, P<.05). Similar rates of Cor a 8 and Jug r 3 sensitization were detected by both techniques. Conclusions: The diagnostic performance of ISAC was adequate for hazelnut and walnut allergy but not for peanut allergy. sIgE sensitivity against Ara h 9 in ISAC needs to be improved.
Abstract:
Melanin-concentrating hormone (MCH) is a 19-aa cyclic neuropeptide originally isolated from chum salmon pituitaries. Besides its effects on the aggregation of melanophores in fish, several lines of evidence suggest that in mammals MCH functions as a regulator of energy homeostasis. Recently, several groups reported the identification of an orphan G protein-coupled receptor as a receptor for MCH (MCH-1R). We hereby report the identification of a second human MCH receptor termed MCH-2R, which shares about 38% amino acid identity with MCH-1R. MCH-2R displayed high-affinity MCH binding, resulting in inositol phosphate turnover and release of intracellular calcium in mammalian cells. In contrast to MCH-1R, MCH-2R signaling is not sensitive to pertussis toxin and MCH-2R cannot reduce forskolin-stimulated cAMP production, suggesting an exclusive Gαq coupling of the MCH-2R in cell-based systems. Northern blot and in situ hybridization analysis of human and monkey tissue shows that expression of MCH-2R mRNA is restricted to several regions of the brain, including the arcuate nucleus and the ventral medial hypothalamus, areas implicated in regulation of body weight. In addition, the human MCH-2R gene was mapped to the long arm of chromosome 6 at band 6q16.2–16.3, a region reported to be associated with cytogenetic abnormalities of obese patients. The characterization of a second mammalian G protein-coupled receptor for MCH potentially indicates that the control of energy homeostasis in mammals by the MCH neuropeptide system may be more complex than initially anticipated.
Abstract:
The challenge of the Human Genome Project is to increase the rate of DNA sequence acquisition by two orders of magnitude to complete sequencing of the human genome by the year 2000. The present work describes a rapid detection method using a two-dimensional optical wave guide that allows measurement of real-time binding or melting of a light-scattering label on a DNA array. A particulate label on the target DNA acts as a light-scattering source when illuminated by the evanescent wave of the wave guide and only the label bound to the surface generates a signal. Imaging/visual examination of the scattered light permits interrogation of the entire array simultaneously. Hybridization specificity is equivalent to that obtained with a conventional system using autoradiography. Wave guide melting curves are consistent with those obtained in the liquid phase and single-base discrimination is facile. Dilution experiments showed an apparent lower limit of detection at 0.4 nM oligonucleotide. This performance is comparable to the best currently known fluorescence-based systems. In addition, wave guide detection allows manipulation of hybridization stringency during detection and thereby reduces DNA chip complexity. It is anticipated that this methodology will provide a powerful tool for diagnostic applications that require rapid cost-effective detection of variations from known sequences.
Abstract:
In the early 2000s, a picture of significant changes and adjustments in the strategies of agricultural organizations became consolidated. Notable among these are: the consolidation of the organizations themselves, the internationalization of agriculture-based systems, innovation in processes, products, and organizational forms, the introduction of the social-environmental variable, and the adoption of transparency strategies. Cooperation may require specialized investments, and the incentives to make them depend on mechanisms for controlling transaction costs. In the presence of uncertainty in the economic environment and in transactions, planned flexibility aims at eventual adjustments in the face of unexpected events. Complex institutional arrangements (that is, contracts) are observed as a way of responding to the needs identified. Beyond trust, reputation, and relational mechanisms, the evolution of the social mechanisms behind partnership contracts is something still to be developed. The present study proposes that agricultural cooperatives can develop governance mechanisms that generate an adaptive capability for coping with unexpected events. The study took a retrospective view of strategies adopted by Brazilian cooperatives, adopting a new analytical strand of "Business History" ("História de Negócios") and its implications for the agro-industrial system. As a methodological guideline, the main strategies reported in the selected case studies on cooperatives, developed between 1991 and 2002, were identified; these strategies were then compared with the guidelines presented in the theoretical chapter. It is acknowledged that strategies involving larger investments in specific assets tend to make the arrangements more rigid and to hinder the plasticity, or adaptation, of agricultural cooperatives (where change naturally occurs more slowly) in the face of external shocks or events.
Abstract:
We show how hydrogenation of graphene nanoribbons at small concentrations can open avenues toward carbon-based spintronics applications regardless of any specific edge termination or passivation of the nanoribbons. Density-functional theory calculations show that an adsorbed H atom induces a spin density on the surrounding π orbitals whose symmetry and degree of localization depend on the distance to the edges of the nanoribbon. As expected for graphene-based systems, these induced magnetic moments interact ferromagnetically or antiferromagnetically depending on the relative graphene sublattice of adsorption, but the magnitude of the interactions is found to vary strongly with the position of the H atoms relative to the edges. We also calculate, with the help of the Hubbard model, the transport properties of hydrogenated armchair semiconducting graphene nanoribbons in the dilute regime and show how the exchange coupling between H atoms can be exploited in the design of novel magnetoresistive devices.
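The Hubbard model mentioned above is not written out in the abstract; its standard single-band form, typically used (often in a mean-field treatment) for π-electron magnetism in graphene systems, reads

\[
H = -t \sum_{\langle i,j\rangle,\sigma} \left( c_{i\sigma}^{\dagger} c_{j\sigma} + \text{h.c.} \right) + U \sum_{i} n_{i\uparrow} n_{i\downarrow},
\]

where t is the nearest-neighbour hopping between π orbitals, U the on-site Coulomb repulsion, and n_{iσ} = c†_{iσ} c_{iσ} the spin-resolved occupation of site i.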
Abstract:
Corneal and anterior segment imaging techniques have become a crucial tool in the clinical practice of ophthalmology, with a great variety of applications, such as corneal curvature and pachymetric analysis, detection of ectatic corneal conditions, anatomical study of the anterior segment prior to phakic intraocular lens implantation, or densitometric analysis of the crystalline lens. From the Placido-based systems that allow only a characterization of the geometry of the anterior corneal surface to the Scheimpflug photography-based systems that provide a characterization of the cornea, anterior chamber, and crystalline lens, there is a great variety of devices capable of analyzing different anatomical parameters with very high precision. To date, Scheimpflug photography-based systems are the devices providing the most complete analysis of the anterior segment in a non-invasive way. Further developments in anterior segment imaging technologies are required to improve the non-invasive analysis of the crystalline lens structure, as well as of the ocular structures behind the iris when the pupil is not dilated.
Abstract:
Feature selection is an important and active issue in clustering and classification problems. By choosing an adequate feature subset, the dimensionality of the dataset can be reduced, which decreases the computational complexity of classification and improves classifier performance by avoiding redundant or irrelevant features. Although feature selection can be formally defined as an optimisation problem with only one objective, namely the classification accuracy obtained with the selected feature subset, in recent years some multi-objective approaches to this problem have been proposed. These approaches either select features that improve not only the classification accuracy but also the generalisation capability, in the case of supervised classifiers, or counterbalance the bias toward lower or higher numbers of features exhibited by some methods used to validate the clustering/classification, in the case of unsupervised classifiers. The main contribution of this paper is a multi-objective approach for feature selection and its application to an unsupervised clustering procedure based on Growing Hierarchical Self-Organising Maps (GHSOMs) that includes a new method for unit labelling and efficient determination of the winning unit. In the network anomaly detection problem considered here, this multi-objective approach makes it possible not only to differentiate between normal and anomalous traffic but also to distinguish among different anomalies. The efficiency of our proposals has been evaluated using the well-known DARPA/NSL-KDD datasets, which contain extracted features and labelled attacks from around 2 million connections. The feature sets selected in our experiments provide detection rates up to 99.8% for normal traffic and up to 99.6% for anomalous traffic, as well as accuracy values up to 99.12%.
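As an illustration of the multi-objective formulation (an illustrative sketch only, not the paper's GHSOM-based procedure; the evaluation function below is an invented stand-in for the real clustering accuracy), feature subsets can be compared by Pareto dominance over two objectives, maximizing accuracy and minimizing subset size:

```python
from itertools import combinations

def dominates(a, b):
    """a and b are (accuracy, n_features); higher accuracy and fewer
    features are both preferred."""
    return a[0] >= b[0] and a[1] <= b[1] and a != b

def pareto_front(candidates, evaluate):
    """Return the non-dominated feature subsets.
    `evaluate` maps a subset to (accuracy, n_features)."""
    scored = [(subset, evaluate(subset)) for subset in candidates]
    return [
        (subset, score) for subset, score in scored
        if not any(dominates(other, score) for _, other in scored)
    ]

# Placeholder evaluator: in the paper the accuracy would come from the
# GHSOM-based clustering of DARPA/NSL-KDD traffic; here it is simulated.
def fake_accuracy(subset):
    useful = {"duration", "src_bytes", "dst_bytes"}
    return 0.80 + 0.06 * len(useful & subset), len(subset)

features = ["duration", "src_bytes", "dst_bytes", "flag", "urgent"]
subsets = [set(c) for r in range(1, 4) for c in combinations(features, r)]
for subset, (acc, size) in pareto_front(subsets, fake_accuracy):
    print(sorted(subset), round(acc, 2), size)
```

The non-dominated subsets form the accuracy-versus-size trade-off front from which a final feature set would be chosen.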
Abstract:
The use of microprocessor-based systems is gaining importance in application domains where safety is a must. For this reason, there is growing concern about mitigating the effects of single event upsets (SEUs) and single event transients (SETs). This paper presents a new hybrid technique aimed at protecting both the data and the control flow of embedded applications running on microprocessors. On the one hand, the approach relies on software redundancy techniques to correct errors produced in the data. On the other hand, control-flow errors are detected by reusing the on-chip debug interface available in most modern microprocessors. Experimental results show an important increase in system reliability, exceeding two orders of magnitude, in terms of mitigation of both SEUs and SETs. Furthermore, the overheads incurred by our technique are perfectly affordable in low-cost systems.
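As a rough illustration of the software data-redundancy side of such hybrid schemes (a generic sketch, not the paper's technique; in practice the transformation is applied automatically at compile time rather than by hand), critical variables can be triplicated and majority-voted on every read:

```python
class TmrVar:
    """Hold three copies of a value and return the majority on each read,
    repairing any single corrupted copy (a sketch of software triple
    modular redundancy applied to data)."""

    def __init__(self, value):
        self._copies = [value, value, value]

    def read(self):
        a, b, c = self._copies
        # Majority vote: any two agreeing copies win; the third is repaired.
        value = a if a in (b, c) else b
        self._copies = [value, value, value]
        return value

    def write(self, value):
        self._copies = [value, value, value]

# Usage: even if one copy is corrupted by an SEU, reads still return the
# correct value and silently restore the damaged copy.
threshold = TmrVar(42)
threshold._copies[1] = 99   # simulate a bit flip in one copy
print(threshold.read())     # 42
```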
Abstract:
Software-based techniques offer several advantages for increasing the reliability of processor-based systems at very low cost, but they cause performance degradation and an increase in code size. To meet performance and memory constraints, we propose SETA, a new software-only control-flow technique that uses assertions to detect errors affecting the program flow. SETA is an independent technique, but it was conceived to work together with previously proposed data-flow techniques that aim at reducing performance and memory overheads. Thus, SETA is combined with such data-flow techniques and submitted to a fault injection campaign. Simulation and neutron-induced SEE tests show high fault coverage with performance and memory overheads lower than those of the state of the art.
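The abstract does not detail the assertions; a generic signature-based control-flow check in the spirit of such techniques (a simplified sketch for single-predecessor blocks, not SETA's actual algorithm; block names and signatures are invented) can be outlined as:

```python
# Each basic block gets a compile-time signature and a precomputed XOR
# "difference" with its legal predecessor.  At run time, a single global
# signature is XORed with that difference on block entry; if control
# arrived from any other block, the result no longer matches the block's
# signature and the assertion fires.
SIG = {"entry": 0b001, "body": 0b010, "exit": 0b100}
DIFF = {                      # difference with the legal predecessor
    "body": SIG["entry"] ^ SIG["body"],
    "exit": SIG["body"] ^ SIG["exit"],
}

G = SIG["entry"]              # runtime signature register

def check_block(block):
    """Assertion executed at the top of every basic block."""
    global G
    G ^= DIFF[block]
    assert G == SIG[block], f"control-flow error detected entering {block}"

# Legal path entry -> body -> exit passes both assertions.
check_block("body")
check_block("exit")
print("control flow verified")

# A faulty jump straight from entry to exit would leave
# G == SIG["entry"] ^ DIFF["exit"], which differs from SIG["exit"],
# so the assertion would catch it.
```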
Abstract:
Integrity assurance of configuration data has a significant impact on the reliability of microcontroller-based systems. This is especially true for event-driven applications whose behavior is tightly coupled to this kind of data. This work proposes a new hybrid technique that combines hardware and software resources for detecting and recovering from soft errors in system configuration data. Our approach is based on a common built-in microcontroller resource (a timer) that works jointly with a software-based technique responsible for periodically refreshing the configuration data. The experiments demonstrate that non-destructive single event effects can be effectively mitigated with reduced overheads. Results show an important increase in fault coverage for SEUs and SETs, of about one order of magnitude.
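As a rough sketch of the refresh mechanism (register names are invented, and a Python loop stands in for the hardware timer interrupt the paper relies on), periodically rewriting the configuration from a protected golden copy might look like:

```python
import time

# Golden copy of the configuration, assumed to live in protected storage;
# register names are hypothetical.
GOLDEN_CONFIG = {"UART_BAUD": 0x1A0, "GPIO_DIR": 0xFF, "ADC_CTRL": 0x03}

# Live configuration registers, exposed to SEUs in this simple model.
config_registers = dict(GOLDEN_CONFIG)

def refresh_configuration():
    """Routine run on every timer tick: detect registers that drifted from
    the golden copy and rewrite them."""
    for name, golden in GOLDEN_CONFIG.items():
        if config_registers[name] != golden:
            print(f"soft error in {name}: "
                  f"0x{config_registers[name]:X} restored to 0x{golden:X}")
            config_registers[name] = golden

# Simulate a few timer periods; on a microcontroller this loop would be a
# hardware timer interrupt firing refresh_configuration().
config_registers["GPIO_DIR"] = 0x7F        # inject an SEU-like bit flip
for _ in range(3):
    refresh_configuration()
    time.sleep(0.1)
print(config_registers == GOLDEN_CONFIG)   # True: configuration repaired
```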
Abstract:
This contribution focuses on analyzing the quality of democracy of the United States (U.S.) and of Austria using a comparative approach. Even though comparisons are not the only possible or legitimate method of research, this analysis is based on the view that comparisons provide crucial analytical perspectives and learning opportunities. The proposition put forward directly is that national political systems (political systems) are comprehensively understood only through an international comparative approach. International comparisons (of country-based systems) are common (see the status of comparative politics, for example in Sodaro, 2004). Comparisons need not be based on national systems alone, but can also be carried out as “within”-comparisons inside (or beyond) sub-units or regional sub-national systems, for instance the individual provinces in the case of Austria (Campbell, 2007, p. 382).
Abstract:
We have measured the 3He/4He and 4He/20Ne ratios and chemical compositions of gases exsolved from deep-sea sediments at two sites (798 and 799) in the Japan Sea. The 3He/4He and 4He/20Ne ratios vary from 0.642 Ratm (where Ratm is the atmospheric 3He/4He ratio of 1.393 × 10^-6) to 0.840 Ratm, and from 0.41 to 4.5, respectively. Helium in the samples can be explained by mixing between atmospheric helium dissolved in the bottom water of the Japan Sea and crustal helium in the sediment. The sedimentary helium is enriched in mantle-derived 3He compared with samples from the Japan Trench and the Nankai Trough. This suggests that the basement of the Japan Sea retains relatively large remnants of mantle-derived helium compared with that of the Pacific. The major chemical components of the samples are methane and nitrogen. There is a positive correlation between the methane content and the helium content corrected for the air component. Based on the 3He/4He versus ΣC/3He diagram, most of the methane can be attributed to a crustal and/or organic origin.
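The abstract does not spell out the mixing calculation; in the standard two-endmember treatment (sketched here with conventional endmember values, not values taken from the paper), all 20Ne is assumed to be atmospheric, so the atmospheric fraction of the helium follows from the 4He/20Ne ratio and the observed 3He/4He ratio is decomposed as

\[
X_{\mathrm{atm}} = \frac{\left(^{4}\mathrm{He}/^{20}\mathrm{Ne}\right)_{\mathrm{atm}}}{\left(^{4}\mathrm{He}/^{20}\mathrm{Ne}\right)_{\mathrm{obs}}},
\qquad
R_{\mathrm{obs}} = X_{\mathrm{atm}}\, R_{\mathrm{atm}} + \left(1 - X_{\mathrm{atm}}\right) R_{\mathrm{terr}},
\]

where R denotes the 3He/4He ratio, (4He/20Ne)_atm ≈ 0.32 for air (slightly lower for air-saturated water), R_atm = 1.393 × 10^-6, and R_terr is the terrigenic endmember. An air-corrected value of R_terr above the purely radiogenic crustal ratio (roughly 0.02 R_atm) indicates a mantle-derived 3He contribution, which is the basis of the argument made here for the Japan Sea basement.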
Abstract:
"College of Engineering, UILU-ENG-89-1757."