953 results for Reliable Computations
Abstract:
This dissertation presents an approach to obstacle detection through three-dimensional perception on all-terrain robots. Given the huge amount of acquired information, the adversities such environments present to an autonomous system, and the speed consequently required of each navigation decision, it is imperative that the 3-D perception system be able to map obstacles and passageways as quickly and in as much detail as possible. In this document, a hybrid approach is presented that brings the best of several methods together, combining the lightness of less meticulous analyses with the detail brought by more thorough ones. The former is realized by a terrain slope mapping system built upon a low-resolution volumetric representation of the surrounding occupancy. For the latter's detailed evaluation, two novel metrics were conceived to discriminate the small depth discrepancies found between the distance measurements of a range scanner's beams. The hybrid solution resulting from the conjunction of these two representations provides a reliable answer to traversability mapping and a robust discrimination of penetrable vegetation from real obstructions. Two distinct robotic platforms offered the possibility to test the hybrid approach in very different applications: a boat, under a European project, the ECHORD Riverwatch, and a terrestrial four-wheeled robot for a national project, the Introsys Robot.
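The abstract gives no implementation details; as a minimal sketch of the coarse analysis it describes, the per-cell terrain slope can be estimated from a low-resolution elevation grid derived from the occupancy volume. The function and grid layout below are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

def slope_map(elevation: np.ndarray, cell_size: float) -> np.ndarray:
    """Per-cell terrain slope (radians) from a coarse elevation grid.

    elevation: 2-D array of cell heights (m), e.g. the highest occupied
    voxel in each column of a low-resolution occupancy volume (an
    assumed reduction, not the dissertation's representation).
    cell_size: horizontal edge length of a cell (m).
    """
    # Finite-difference height gradients along both grid axes.
    dz_dy, dz_dx = np.gradient(elevation, cell_size)
    # Slope is the angle of the steepest-ascent direction.
    return np.arctan(np.hypot(dz_dx, dz_dy))

# Example: a 4 x 4 ramp rising 0.5 m per 1 m cell -> ~26.6 degree slope.
ramp = np.tile(np.arange(4) * 0.5, (4, 1))
print(np.degrees(slope_map(ramp, cell_size=1.0)))
```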
Abstract:
The Graphics Processing Unit (GPU) is present in almost every modern personal computer. Despite its special-purpose design, it has been increasingly used for general computations, with very good results. Hence, there is a growing effort from the community to seamlessly integrate this kind of device into everyday computing. However, to fully exploit the potential of a system comprising GPUs and CPUs, these devices should be presented to the programmer as a single platform. The efficient combination of the power of CPU and GPU devices is highly dependent on each device's characteristics, resulting in platform-specific applications that cannot be ported to different systems. Moreover, the most efficient work balance among devices is highly dependent on the computations to be performed and the respective data sizes. In this work, we propose a solution for heterogeneous environments based on the abstraction level provided by algorithmic skeletons. Our goal is to take full advantage of the power of all CPU and GPU devices present in a system, without the need for different kernel implementations or explicit work distribution. To that end, we extended Marrow, an algorithmic skeleton framework for multi-GPUs, to support CPU computations and efficiently balance the workload between devices. Our approach is based on an offline training execution that identifies the ideal work balance and platform configurations for a given application and input data size. The evaluation of this work shows that the combination of CPU and GPU devices can significantly boost the performance of our benchmarks in the tested environments, when compared to GPU-only executions.
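The abstract does not show the training procedure itself; the sketch below illustrates what such an offline pass could look like, where run_partitioned is a hypothetical callback that executes the skeleton with a given CPU share of the input. The candidate ratios and timing loop are illustrative assumptions, not Marrow's actual API.

```python
import time
from typing import Callable, Sequence

def timed(fn: Callable[[float], None], arg: float) -> float:
    """Wall-clock time of one invocation of fn(arg)."""
    start = time.perf_counter()
    fn(arg)
    return time.perf_counter() - start

def train_offline(run_partitioned: Callable[[float], None],
                  ratios: Sequence[float] = (0.0, 0.25, 0.5, 0.75, 1.0),
                  repeats: int = 3) -> float:
    """Pick the CPU share of the workload that minimizes run time.

    run_partitioned(cpu_share) is assumed (hypothetically) to run the
    kernel with `cpu_share` of the input on the CPU, the rest on GPUs.
    """
    best_ratio, best_time = ratios[0], float("inf")
    for r in ratios:
        # Take the best of several runs to damp timing noise.
        elapsed = min(timed(run_partitioned, r) for _ in range(repeats))
        if elapsed < best_time:
            best_ratio, best_time = r, elapsed
    return best_ratio
```

The chosen ratio would then be stored per application and input size, matching the abstract's idea that the ideal balance depends on both.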
Abstract:
Optimization is a very important field, concerned with finding the best possible value of an objective function. Continuous optimization is optimization over real intervals. There are many global and local search techniques. Global search techniques try to find the global optimum of the optimization problem; however, local search techniques are used more often, since they try to find a locally minimal solution within an area of the search space. In Continuous Constraint Satisfaction Problems (CCSPs), constraints are viewed as relations between variables, and the computations are supported by interval analysis. The continuous constraint programming framework provides branch-and-prune algorithms for covering sets of solutions of the constraints with sets of interval boxes, which are Cartesian products of intervals. These algorithms begin with an initial crude cover of the feasible space (the Cartesian product of the initial variable domains), which is recursively refined by interleaving pruning and branching steps until a stopping criterion is satisfied. In this work, we try to find a convenient way to combine the advantages of CCSP branch-and-prune with the local search of global optimization, applied locally over each pruned branch of the CCSP. We apply local search techniques from continuous optimization over the pruned boxes output by the CCSP techniques. We mainly use the steepest descent technique with different characteristics, such as penalty calculation and step length. We implement two main local search algorithms. We use "Procure", a constraint reasoning and global optimization framework, to implement our techniques, and then present our results over a set of benchmarks.
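As a minimal sketch of the combination described, assuming a differentiable objective: steepest descent can be run inside one interval box produced by branch-and-prune, clipping each iterate back into the box. The fixed step length and all names below are illustrative, not taken from the thesis or from Procure.

```python
import numpy as np

def descend_in_box(f_grad, x0, lo, hi, step=0.1, iters=100):
    """Steepest descent restricted to the interval box [lo, hi].

    f_grad(x) returns the gradient of the objective at x; the iterate
    is clipped to the box after each step, mimicking local search run
    inside one box output by a branch-and-prune covering.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = np.clip(x - step * f_grad(x), lo, hi)
    return x

# Example: minimize (x-3)^2 + (y+1)^2 inside the box [0,2] x [-2,0];
# the unconstrained minimum (3,-1) lies outside, so descent settles
# on the box boundary at (2,-1).
grad = lambda x: 2 * (x - np.array([3.0, -1.0]))
print(descend_in_box(grad, x0=[1.0, -0.5], lo=[0.0, -2.0], hi=[2.0, 0.0]))
```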
Abstract:
The formulation and use of lime mortars with ceramic particles was, in the past, a very common technique. Knowledge of the techniques and materials used is fundamental for the successful rehabilitation and conservation of the built heritage. The durability that these mortars have shown encourages the study of the mechanisms involved, so that they may be adapted to the current reality. The considerable amount of waste from old ceramics factories which is sent for disposal might present an opportunity for the production of reliable, improved lime mortars. In this paper a number of studies that characterize old building mortars containing ceramic fragments are reviewed. The most important research undertaken on laboratory-prepared mortars with several types of heat-treated clays is presented, specifically with incorporated ceramic waste. Some studies on the pozzolanicity of heat-treated clays are examined, and the heating temperatures that seem most likely to achieve pozzolanicity are presented. It was verified that some heating temperatures currently used by the ceramics industry might correspond to the temperatures that achieve pozzolanicity.
Abstract:
The impact of the pre-analytical phase on results is the objective of our work. This phase is considered the most important stage of laboratory testing, and the variables independent of the laboratory are difficult to control, because there are several constraints, from both the professionals and the patient, that sometimes make it impossible to give this phase the importance required for results to be reliable. How can the pre-analytical phase interfere with the results of a research study? To evaluate this process it is necessary to consider another constraint, namely the fact that the tests are performed in specific laboratories of certified quality which, however, are not involved in the collection, handling, storage and transport of the sample: important procedures that reflect good sample management in the pre-analytical phase, necessary for the reliability of the results of clinical research studies.
Abstract:
Continued economic and population growth puts additional pressure on already scarce energy sources. There is thus a growing urge to adopt a sustainable plan able to meet present and future energy demands. Over the last two decades, solar trough technology has proven to be a reliable alternative to fossil fuels. Currently, the trough industry seeks, by optimizing energy conversion, to drive down the cost of electricity and thereby place itself as a main player in the next energy age. One of the issues that has lately gained considerable relevance came from the observation of significant heat losses in a large number of receiver modules. These heat losses were attributed to the slow permeation of traces of hydrogen gas through the steel tube wall into the vacuum annulus. The presence of hydrogen gas in the absorber tube results from the decomposition of the heat transfer fluid due to long-term exposure to 400 °C. The permeated hydrogen acts as a heat conduction medium, leading to a decrease in the receiver's performance and thus its lifetime. In order to prevent hydrogen accumulation, it has been common practice to incorporate hydrogen getters in the vacuum annulus of the receivers. Nevertheless, these materials are not only expensive, but their gas-absorbing capacity can be insufficient to assure the level of vacuum required for the receivers to function. In this work, the construction of a permeation measurement device is described, along with the vulnerabilities detected in the construction process and how they were overcome. Furthermore, an experimental procedure was optimized and the permeability results obtained for the different samples were evaluated. The data were compared to measurements performed by an external entity, and the reliability of the comparative data was also addressed. In the end, conclusions are drawn on the permeability results for the different sample characteristics and on the feasibility of the measurement device, and recommendations on future lines of work are made.
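The abstract does not state the governing relation; for context, diffusion-limited hydrogen permeation through a metal wall is commonly modelled by Richardson's law, with the square-root pressure dependence of Sieverts' law and an Arrhenius temperature dependence of the permeability (a standard textbook relation, not quoted from the thesis):

$$J = \frac{\Phi(T)}{d}\left(\sqrt{p_{\mathrm{up}}} - \sqrt{p_{\mathrm{down}}}\right), \qquad \Phi(T) = \Phi_0\, e^{-E_a/(RT)}$$

where $J$ is the steady-state flux, $d$ the wall thickness, $p_{\mathrm{up}}$ and $p_{\mathrm{down}}$ the hydrogen partial pressures on the two sides, and $\Phi$ the permeability.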
Abstract:
Phage display technology is a powerful platform for the generation of highly specific human monoclonal antibodies (Abs) with potential use in clinical applications. Moreover, this technique has also proven to be a reliable approach for identifying and validating new cancer-related targets. For scientific or medical applications, different types of Ab libraries can be constructed. The use of Fab immune libraries allows the production of high-quality, high-affinity antigen-specific Abs. In this work, two immune human phage display IgG Fab libraries were generated from the Ab repertoire of 16 breast cancer patients, in order to obtain a tool for the development of new therapeutic Abs for breast cancer, a condition with great impact worldwide. The generated libraries are estimated to contain more than 10^8 independent clones and a diversity of over 90%. Library validation was pursued by selection against BSA, a foreign and highly immunogenic protein, and HER2, a well-established cancer target. Preliminary results suggested that phage pools with affinity for these antigens were selected and enriched. Individual clones were isolated; however, it was not possible to obtain enough data to characterize them further. Selection against the DLL1 protein was also performed, since it is a known ligand of the Notch pathway, whose deregulation is associated with breast cancer, making it an interesting target for the generation of function-blocking Abs. Selection resulted in the isolation of a clone with low affinity and low Fab expression levels. The validation process was not completed, and further effort will have to be put into this task in the future. Although the immune library concept implies limited applicability, the library reported here has a wide range of possible uses, since it was not restricted to a single antigen but was instead conceived to be used against any breast cancer-associated target, thus being a valuable tool.
Abstract:
Human Activity Recognition systems require objective and reliable methods that can be used in the daily routine and must offer consistent results according to the performed activities. These systems are under development and offer objective and personalized support for several applications, such as in healthcare. This thesis aims to create a framework for human activity recognition based on accelerometry signals. Some new features and techniques inspired by audio recognition methodology are introduced in this work, namely the Log Scale Power Bandwidth and the application of Markov Models. Forward Feature Selection was adopted as the feature selection algorithm, in order to improve clustering performance and limit the computational demands. This method selects the most suitable set of features for activity recognition in accelerometry from a 423-dimensional feature vector. Several machine learning algorithms were applied to the accelerometry databases used (the FCHA and PAMAP databases) and showed promising results in activity recognition. The developed set of algorithms constitutes a strong contribution to the development of reliable methods for evaluating movement disorders, for diagnosis and treatment applications.
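The thesis code is not reproduced in the abstract; the sketch below shows the generic greedy Forward Feature Selection scheme it names, here scored with the cross-validated accuracy of a k-NN classifier from scikit-learn. The classifier, scoring, and stopping rule are illustrative assumptions, not the thesis implementation.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def forward_feature_selection(X, y, max_features=10, cv=3):
    """Greedy forward selection over the columns of X.

    Grows the feature set one column at a time, keeping the candidate
    that most improves cross-validated accuracy, and stops when no
    remaining candidate helps.
    """
    selected, best_score = [], 0.0
    while len(selected) < max_features:
        scores = {}
        for j in range(X.shape[1]):
            if j in selected:
                continue
            cols = selected + [j]
            clf = KNeighborsClassifier(n_neighbors=3)  # assumed scorer
            scores[j] = cross_val_score(clf, X[:, cols], y, cv=cv).mean()
        if not scores:
            break  # all columns already selected
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best_score:
            break  # no remaining feature improves the score
        selected.append(j_best)
        best_score = scores[j_best]
    return selected, best_score
```

Starting from a 423-dimensional feature vector, such a loop keeps only the handful of columns that actually help the classifier, which is what limits the computational demands mentioned above.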