893 results for Project-based system
Abstract:
Identification and characterization of the problem. One of the most important problems associated with the construction of software is its correctness. In the quest to provide guarantees of correct software behaviour, a variety of development techniques with solid mathematical and logical foundations, known as formal methods, have emerged. By their nature, formal methods demand considerable experience and expertise, above all in mathematics and logic, which makes their application costly in practice. As a consequence, their main application has been limited to critical systems, that is, systems whose malfunction can cause major damage, even though the benefits these techniques provide are relevant to every kind of software. Carrying the benefits of formal methods over to software development contexts broader than critical systems would have a high impact on productivity in those contexts. Hypothesis. The availability of automated analysis tools is of great importance. Examples are the several powerful formal-methods-based analysis tools that target source code directly. For the vast majority of these tools, the gap between the notions developers are used to and those needed to apply the tools remains too wide. Many tools use assertion languages that fall outside developers' usual knowledge and habits. Moreover, in many cases the output produced by the analysis tool requires some command of the underlying formal method. This problem can be alleviated by producing suitable tools. Another problem intrinsic to automated analysis techniques is how they behave as the size and complexity of the artifacts under analysis grow (scalability). This limitation is widely known and is considered critical for the practical applicability of formal analysis methods. One way to attack this problem is to exploit information and characteristics of specific application domains. Objectives. This project aims at building formal analysis tools that contribute to quality, in the sense of functional correctness, of specifications, models or code in the context of software development. More precisely, we seek, on the one hand, to identify specific settings in which certain automated analysis techniques, such as analysis based on SMT or SAT solving, or model checking, can be taken to levels of scalability beyond those known for these techniques in general settings. We will attempt to implement the adaptations of the chosen techniques in tools that can be used by developers familiar with the application context but not necessarily versed in the underlying methods or techniques. Materials and methods. The materials will be literature relevant to the area and computing equipment; the methods will be those of discrete mathematics, logic and software engineering. Expected results. One expected result of the project is the identification of specific domains for the application of formal analysis methods.
We also expect the project to yield analysis tools whose usability makes them suitable for developers without specific training in the formal methods employed. Importance of the project. The main impact of this project will be its contribution to the practical application of formal analysis techniques at different stages of software development, with the aim of increasing software quality and reliability. A crucial factor for software quality is correctness. Traditionally, formal approaches to software development concentrate on functional correctness, and tackle this problem by relying on well-defined notations founded on solid mathematical grounds. This makes formal methods well suited for analysis, thanks to their precise semantics, but they are usually more complex, and require familiarity and experience with the manipulation of mathematical definitions. As a result, their acceptance among software engineers is rather limited, and applications of formal methods have been confined to critical systems, even though the advantages formal methods provide apply to any kind of software system. It is widely accepted that appropriate tool support for formal analysis is essential if one seeks to support software development based on formal methods. Indeed, some of the relatively recent successes of formal methods are accompanied by good-quality tools that automate powerful analysis mechanisms and are even integrated into widely used development environments. Still, most of these tools concentrate on code analysis, and in many cases they remain far from simple enough to be employed by software engineers without experience in formal methods. Another important obstacle to the adoption of tool support for formal methods is scalability: automated software analysis is intrinsically complex, and thus techniques do not scale well in the general case. In this project, we will attempt to identify particular modelling, design, specification or coding activities in software development processes to which automated formal analysis techniques can be applied. By focusing on very specific application domains, we expect to find characteristics that can be exploited to increase the scalability of the corresponding analyses, compared to the general case.
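As a concrete illustration of the kind of automated analysis the project targets, the following is a minimal sketch using the Python bindings of the z3 SMT solver (the abstract names SMT/SAT solving and model checking generically; the choice of z3 and the toy verification condition are ours):

```python
# Minimal sketch: discharging a simple verification condition with an
# SMT solver (z3 is an assumption; the project names no specific tool).
from z3 import Int, Solver, Not, Implies, unsat

x = Int("x")
# Verification condition: if x >= 0 then x + 1 > 0.
vc = Implies(x >= 0, x + 1 > 0)

s = Solver()
s.add(Not(vc))           # ask the solver for a counterexample
if s.check() == unsat:   # none exists: the condition is valid
    print("verified")
else:
    print("counterexample:", s.model())
```

The point of such tooling is precisely what the abstract argues for: the developer states a property over program variables and receives either "verified" or a concrete counterexample, without having to handle the underlying logic.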
Abstract:
This project falls within the use of formal methods (more precisely, of type theory) to guarantee the absence of errors in programs. On the one hand, it addresses the design of new type-checking algorithms, proposing algorithms based on the idea of normalization by evaluation that are extensible to other type systems. In the near future we will extend results we obtained recently [16,17] in order to achieve: a simplification of the work done for systems without the eta rule (two systems will be studied here: à la Martin-Löf and à la PTS); the formulation of these checkers for systems with variables; a generalization of the notion of category with families used to give semantics to type theory; a categorical formulation of the notion of normalization by evaluation; and, finally, the application of these algorithms to systems with rewriting. For the first expected results mentioned, our method will be to adapt the proofs of [16,17] to the new systems. The importance lies in making proof assistants based on type theory more automatable and, therefore, easier to use. On the other hand, type theory will be used to certify compilers, pursuing the so far unexplored proposal of [22] of using an abstract approach based on functor categories. The method will consist of certifying the language "Peal" [29] and then successively adding functionality until Forsythe [23] is obtained. In this period we expect to add several extensions. The importance of this line of work lies in the fact that only a certified compiler guarantees that a correct source program is compiled into a correct object program; it is therefore crucial for any verification process based on verifying source code. Finally, we will address the formalization of systems with session types. Such systems have been shown to contain flaws in their formulations [30], which makes their formalization advisable. Over the course of this project, we expect to obtain a formalization that yields a type-checking algorithm, and to prove the usual properties of these systems. The contribution is to shed some light on formulations whose errors reveal that the topic has not yet reached sufficient maturity or understanding within the community. This project is about using type theory to guarantee program correctness. It follows three different directions: 1) Finding new type-checking algorithms based on normalization by evaluation. First, we will show that recent results such as [16,17] extend to other type systems: Martin-Löf's type theory without the eta rule, PTSs, type systems with variables (in addition to the systems in [16,17], which are à la de Bruijn), and systems with rewrite rules. This will be done by adjusting the proofs in [16,17] so that they apply to such systems as well. We will also try to obtain a more general definition of categories with families and of normalization by evaluation, formulated in categorical terms. We expect this may make proof assistants more automatic and useful. 2) Exploring the proposal in [22] of compiler construction for Algol-like languages using functor categories. According to [22], such an approach is suitable for verifying compiler correctness, a claim that has never been explored.
First, the language Peal [29] will be certified in type theory, and functionality will be added gradually until a correct compiler for the language Forsythe [23] is obtained. 3) Formalizing systems of session types. Several proposals have been shown to be faulty [30], so a formalization may contribute to the general understanding of session types.
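To make the central technique more tangible, here is a toy sketch of normalization by evaluation: terms are evaluated into a semantic domain (plain Python functions) and then read back into syntactic normal forms. This treats only the untyped lambda calculus with de Bruijn indices; the algorithms of [16,17] operate on typed systems and are considerably more involved:

```python
# Toy normalization by evaluation (NbE); illustrative only.
# Terms: ("var", index) | ("lam", body) | ("app", fun, arg)
# Values: Python functions, or neutrals ("nvar", level) | ("napp", n, v)

def evaluate(term, env):
    tag = term[0]
    if tag == "var":
        return env[term[1]]
    if tag == "lam":
        return lambda v, body=term[1], env=env: evaluate(body, [v] + env)
    f, a = evaluate(term[1], env), evaluate(term[2], env)
    return f(a) if callable(f) else ("napp", f, a)   # stuck application

def reify(value, depth):
    if callable(value):               # read back a semantic function by
        fresh = ("nvar", depth)       # applying it to a fresh variable
        return ("lam", reify(value(fresh), depth + 1))
    if value[0] == "nvar":
        return ("var", depth - value[1] - 1)         # level -> index
    return ("app", reify(value[1], depth), reify(value[2], depth))

def normalize(term):
    return reify(evaluate(term, []), 0)

identity = ("lam", ("var", 0))
print(normalize(("app", identity, identity)))  # -> ('lam', ('var', 0))
```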
Abstract:
Although the ASP model has been around for over a decade, it has not achieved the expected level of market uptake. This research project examines the past and present state of ASP adoption and identifies security as a primary factor influencing the uptake of the model. The early chapters of this document examine the ASP model and ASP security in particular. Specifically, the literature and technology review chapter analyses ASP literature, security technologies and best practices with respect to system security in general. Based on this investigation, a prototype illustrating the range and types of technologies that make up a security framework was developed and is described in detail. The later chapters of this document evaluate the practical implementation of system security in an ASP environment. Finally, this document outlines the research outputs, including the conclusions drawn and recommendations with respect to system security in an ASP environment. The primary research output is the recommendation that, by following security best practices, an ASP application can provide the same level of security one would expect from any other n-tier client-server application. In addition, the author developed a security evaluation matrix that can be used to evaluate the security not only of ASP applications but of any n-tier application. This thesis shows that fears of inadequate security of ASP solutions and their data are misguided. Finally, based on the research conducted, the author recommends that ASP solutions be developed and deployed on tried, tested and trusted infrastructure, that existing Application Programming Interfaces (APIs) be used where possible, and that security best practices be adhered to where feasible.
Abstract:
This project was funded under the Applied Research Grants Scheme administered by Enterprise Ireland. The project was a partnership between Galway-Mayo Institute of Technology and an industrial company, Tyco/Mallinckrodt Galway. The project aimed to develop a semi-automatic, self-learning pattern recognition system capable of detecting defects on printed circuit boards, such as component vacancy, component misalignment, component orientation, component error, and component weld defects. The research was conducted in three directions: image acquisition, image filtering/recognition and software development. Image acquisition covered the process of forming and digitizing images and some fundamental aspects of human visual perception. The importance of choosing the right camera and illumination system for a given type of problem was highlighted. Probably the most important step towards image recognition is image filtering. Filters are used to correct and enhance images in order to prepare them for recognition. Convolution, histogram equalisation, filters based on Boolean mathematics, noise reduction, edge detection, geometrical filters, cross-correlation filters and image compression are some examples of the filters that have been studied and successfully implemented in the software application. The software application developed during the research is customized to meet the requirements of the industrial partner. The application is able to analyse pictures, perform filtering, build libraries, process images and generate log files. It incorporates most of the filters studied and, together with the illumination system and the camera, provides a fully integrated framework able to analyse defects on printed circuit boards.
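Since the abstract names the filters without showing any, here is a minimal sketch of one of them, normalized cross-correlation, as it might be used to check whether a board region matches a reference ("golden") component template; the function and the placeholder images are illustrative, not the project's actual code:

```python
import numpy as np

def ncc(region, template):
    """Normalized cross-correlation of two equally sized gray images."""
    r = region - region.mean()
    t = template - template.mean()
    denom = np.sqrt((r * r).sum() * (t * t).sum())
    return float((r * t).sum() / denom) if denom else 0.0

# A score near 1.0 means the component is present and well aligned;
# a low score flags a possible vacancy or misalignment defect.
region = np.random.rand(32, 32)   # placeholder: cropped board image
template = region.copy()          # placeholder: golden template
print(ncc(region, template))      # -> 1.0 for a perfect match
```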
Abstract:
Magdeburg, Univ., Faculty of Electrical Engineering and Information Technology, dissertation, 2009
Abstract:
Magdeburg, Univ., Faculty of Electrical Engineering and Information Technology, dissertation, 2015
Abstract:
Synchronization of data coming from different sources is of high importance in biomechanics to ensure reliable analyses. This synchronization can either be performed in hardware, to obtain perfectly matched data, or digitally in post-processing. Hardware synchronization can often be achieved using trigger cables connecting the different devices; however, this is frequently impractical, and sometimes impossible, outdoors. The aim of this paper is to describe a wireless system for outdoor use that allows synchronization of different types of devices, including embedded and moving ones. In this system, each synchronization device is composed of (i) a GPS receiver (used as the time reference), (ii) a radio transmitter, and (iii) a microcontroller. These components are used to provide synchronized trigger signals, at the desired frequency, to the measurement device connected. The synchronization devices communicate wirelessly, are very lightweight and battery-operated, and are thus very easy to set up. They are adaptable to any measurement device equipped with either a trigger input or a recording channel. The accuracy of the system was validated using an oscilloscope. The mean synchronization error was found to be 0.39 μs, and pulses are generated with an accuracy of <2 μs. The system provides synchronization accuracy about two orders of magnitude better than commonly used post-processing methods, and does not suffer from any drift in trigger generation.
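The triggering principle lends itself to a small numerical sketch: each device phase-locks a trigger train to the GPS pulse-per-second (PPS) edge, so triggers on all devices coincide up to the GPS timing error. This is our illustration of the principle, not the authors' firmware, and all numbers are made up:

```python
def trigger_times(pps_edges, f_trig):
    """Trigger timestamps aligned to each GPS PPS edge (times in s)."""
    period = 1.0 / f_trig
    return [t + k * period for t in pps_edges for k in range(int(f_trig))]

# Two devices whose PPS edges carry slightly different GPS timing errors:
dev_a = trigger_times([0.0000004, 1.0000001], f_trig=100)
dev_b = trigger_times([-0.0000003, 0.9999998], f_trig=100)
worst = max(abs(a - b) for a, b in zip(dev_a, dev_b))
print(f"worst-case sync error: {worst * 1e6:.2f} us")  # sub-microsecond
```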
Abstract:
During the past few decades, numerous plasmid vectors have been developed for cloning, gene expression analysis, and genetic engineering. Cloning procedures typically rely on PCR amplification, DNA fragment restriction digestion, recovery, and ligation, but increasingly, procedures are being developed to assemble large synthetic DNAs. In this study, we developed a new gene delivery system using the integrase activity of an integrative and conjugative element (ICE). The advantage of the integrase-based delivery is that it can stably introduce a large DNA fragment (at least 75 kb) into one or more specific sites (the gene for glycine-accepting tRNA) on a target chromosome. Integrase recombination activity in Escherichia coli is kept low by using a synthetic hybrid promoter, which, however, is unleashed in the final target host, forcing the integration of the construct. Upon integration, the system is again silenced. Two variants with different genetic features were produced, one in the form of a cloning vector in E. coli and the other as a mini-transposable element by which large DNA constructs assembled in E. coli can be tagged with the integrase gene. We confirmed that the system could successfully introduce cosmid and bacterial artificial chromosome (BAC) DNAs from E. coli into the chromosome of Pseudomonas putida in a site-specific manner. The integrase delivery system works in concert with existing vector systems and could thus be a powerful tool for synthetic constructions of new metabolic pathways in a variety of host bacteria.
Abstract:
Cooperative communications are gaining considerable interest in modern communications because they improve the transmission of information between a sender and a receiver by using a set of terminals located between them. This project is a comprehensive study of cooperative systems, analysing their performance and comparing the use of a single such terminal with the use of the Alamouti code, which uses two terminals. First, there is an introduction to cooperative systems and to information theory. We then study a cooperative system on information-theoretic grounds, in terms of system outage probability, and subsequently adapt it to a real cooperative system using QPSK modulation, studying its packet error probability. Finally, several protocols are proposed that improve the performance of the cooperative system studied.
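For reference, the Alamouti scheme the project compares against can be sketched in a few lines: the two cooperating terminals send [s1, s2] in the first slot and [-s2*, s1*] in the second, and linear combining at the receiver recovers both symbols with diversity order two. The sketch below uses QPSK symbols and flat channels and omits noise for clarity; it is illustrative, not the project's simulator:

```python
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, (2, 2))
s = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)  # QPSK
h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)

# Two received slots (noise omitted for clarity):
r1 = h[0] * s[0] + h[1] * s[1]
r2 = -h[0] * np.conj(s[1]) + h[1] * np.conj(s[0])

# Linear combining restores each symbol scaled by the total channel gain:
g = np.abs(h[0]) ** 2 + np.abs(h[1]) ** 2
s1_hat = (np.conj(h[0]) * r1 + h[1] * np.conj(r2)) / g
s2_hat = (np.conj(h[1]) * r1 - h[0] * np.conj(r2)) / g
print(np.allclose([s1_hat, s2_hat], s))  # -> True
```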
Abstract:
Report for the scientific sojourn carried out at the Department of Freshwater Ecology, National Environmental Research Institute, Denmark, from 2006 to 2008. The main objective of the project was to reconstruct photosynthetic organism community composition using pigment-based methods and to study the response of these communities to natural (e.g. climate) or anthropogenic (e.g. eutrophication) perturbations that took place in the system over time. We performed studies at different locations and at different temporal scales. We analysed the pigment composition in a short sediment record (46 cm sediment depth) of a volcanic lake (Lake Furnas) in the Azores Archipelago (Portugal). The lake has been affected during the last century by successive fish introductions. The specific objective was to reconstruct the lake's trophic state history and to assess the role of land use, climate and fish introductions in structuring the lake community. The results suggested that whereas trophic cascades and changes in nutrient concentrations have clear effects on algal and microbial assemblages, interpreting the effects of changes in climate is not straightforward. This is probably related to the rather constant precipitation in the Azores Islands during the studied period. We also analysed the pigment composition in a long sediment record (1800 cm sediment depth) of Lake Aborre (Denmark) covering ca. 8 kyr of lake history. The specific objective was to describe changes in lake primary production and trophic state over the Holocene and to determine the photosynthetic organisms involved. The results suggested that external forcing (i.e. land-use changes) was responsible for erosion and nutrient runoff into the lake, which contributed to the reported changes in lake primary production throughout most of the Holocene.
Abstract:
Personalization in e-learning allows the adaptation of contents, learning strategies and educational resources to the competencies, previous knowledge or preferences of the student. This project takes a multidisciplinary perspective on building standards-based personalization capabilities into virtual e-learning environments, focusing on the concept of the adaptive learning itinerary, using reusable learning objects as the basis of the system, and using ontologies and semantic web technologies.
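As a minimal sketch of what an adaptive learning itinerary can mean in practice (a toy competency model of our own; real systems would derive these relations from the ontology): pick the next reusable learning object whose prerequisites the learner already masters.

```python
# Toy competency model; all identifiers are hypothetical.
learning_objects = [
    {"id": "LO-intro", "teaches": "variables", "requires": set()},
    {"id": "LO-loops", "teaches": "loops",     "requires": {"variables"}},
    {"id": "LO-recur", "teaches": "recursion", "requires": {"loops"}},
]

def next_objects(mastered):
    """Objects the learner is ready for but has not yet mastered."""
    return [lo["id"] for lo in learning_objects
            if lo["requires"] <= mastered and lo["teaches"] not in mastered]

print(next_objects({"variables"}))  # -> ['LO-loops']
```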
Abstract:
The idea underlying the COINE Research and Development Project is to enable people to tell their own stories. COINE aims to provide the tools needed to create, in a structured way, a World Wide Web-based environment for sharing content. The Project's results will support the development of standards for the structured deposit and retrieval of digital resources in distributed networked environments. The COINE Project started in March 2002 and ended in August 2004. We are currently in WorkPackage 5, building the System, the software and the interfaces. COINE aims to cover the widest possible range of potential users, from cultural heritage organizations and institutions of any size (mainly libraries, archives and museums) to individuals of any age without ICT skills, or small groups of citizens. Users will not only use COINE as a search tool, but will also contribute their own content.
Abstract:
OBJECTIVES: To test whether the Global Positioning System (GPS) could be potentially useful for assessing the velocity of walking and running in humans. SUBJECT: A young man was equipped with a GPS receiver while walking, running and cycling at various velocities on an athletics track. The speed of displacement assessed by GPS was compared with that directly measured by chronometry (76 tests). RESULTS: In walking and running conditions (from 2 to 20 km/h), as well as cycling conditions (from 20 to 40 km/h), there was a significant relationship between the speed assessed by GPS and that actually measured (r = 0.99, P < 0.0001), with little bias in the prediction of velocity. The overall error of prediction (s.d. of the difference) averaged ±0.8 km/h. CONCLUSION: The GPS technique appears very promising for speed assessment, although the relative accuracy at walking speed is still insufficient for research purposes. It may be improved by using differential GPS measurement.
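The speed computation itself is straightforward once GPS fixes are available: distance between successive fixes divided by the time between them. A minimal sketch using the stock haversine formula (the study does not describe its processing, and the coordinates below are illustrative):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    R = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Two fixes taken 1 s apart on a track (illustrative coordinates):
d = haversine_m(46.51900, 6.56660, 46.51903, 6.56664)
print(f"speed: {d * 3.6:.1f} km/h")  # m/s -> km/h
```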
Abstract:
This paper deals with the problem of navigation for an unmanned underwater vehicle (UUV) through image mosaicking. It represents a first step towards a real-time vision-based navigation system for a small-class, low-cost UUV. We propose a navigation system composed of (i) an image mosaicking module, which provides velocity estimates, and (ii) an extended Kalman filter based on the hydrodynamic equation of motion, previously identified for this particular UUV. The resulting system is able to estimate the position and velocity of the robot. Moreover, it is able to deal with the visual occlusions that usually appear when the sea bottom does not have enough visual features to solve the correspondence problem in a certain area of the trajectory.
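The fusion described can be sketched compactly: a Kalman filter whose state is [position, velocity], corrected by the velocity estimates coming from the mosaicking module. The paper's filter is an extended one built on the identified hydrodynamic model; in this sketch (ours, one axis only) a plain constant-velocity model stands in, which makes the filter linear:

```python
import numpy as np

dt = 0.1
F = np.array([[1, dt], [0, 1]])   # constant-velocity transition (one axis)
H = np.array([[0.0, 1.0]])        # the mosaic measures velocity only
Q = np.eye(2) * 1e-4              # process noise
R = np.array([[1e-2]])            # mosaic velocity noise

x, P = np.zeros(2), np.eye(2)     # state [position, velocity], covariance

def kf_step(x, P, v_meas):
    x, P = F @ x, F @ P @ F.T + Q             # predict
    y = v_meas - H @ x                        # innovation
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return x + (K @ y).ravel(), (np.eye(2) - K @ H) @ P

for v in [0.5, 0.52, 0.49, 0.51]:  # velocities from the mosaic (m/s)
    x, P = kf_step(x, P, np.array([v]))
print(x)  # position integrates the filtered velocity
```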
Abstract:
In a search for new sensor systems and new methods for underwater vehicle positioning based on visual observation, this paper presents a computer vision system based on coded light projection, with which 3D information is recovered from an underwater scene. This information is used to test obstacle avoidance behaviour. In addition, the main ideas for achieving stabilisation of the vehicle in front of an object are presented.
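The depth recovery behind coded light projection reduces, per decoded correspondence, to triangulation between the camera ray and the projected light plane. A minimal 2D sketch (our geometry and numbers, not the paper's rig): camera at the origin looking along z, projector offset by a baseline b along x, and a light plane leaving the projector at angle theta from the baseline:

```python
import math

def depth(x_px, f_px, b, theta):
    """Depth where the camera ray meets the decoded light plane."""
    tan_a = x_px / f_px  # camera ray slope from the decoded pixel column
    # Light plane: z = (b - x) * tan(theta); camera ray: x = z * tan_a.
    return b * math.tan(theta) / (1 + tan_a * math.tan(theta))

# One decoded pattern correspondence (hypothetical rig parameters):
print(f"{depth(x_px=120, f_px=800, b=0.15, theta=math.radians(70)):.3f} m")
```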