58 results for Autonomous study time
Abstract:
Reaching a collective decision is difficult when many users meet in the same place, and resolving a problem tends to consume considerable time because of the diversity of individual opinions. TAmI (Group Decision Making Toolkit) is a system for group decision making in Ambient Intelligence [1]. It is composed of IGATA [2], WebMeeting and the associated database system. However, because the IP address and password are transmitted without any encryption, they are exposed to attackers, who can misuse these credentials; as a result, an attacker may corrupt the decision outcome without the legitimate participants being aware of it. In this paper, we therefore study a method for applying user authentication to TAmI.
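The abstract does not detail a concrete authentication mechanism; as a hypothetical illustration of the kind of protection discussed, the Python sketch below shows a salted challenge-response login in which the password itself never crosses the network. All names (CREDENTIALS, verify_login) and the fixed salt are invented for this example.

```python
import hashlib
import hmac
import os

# Hypothetical server-side store: username -> salted password hash.
# A per-user random salt would be used in practice; b"salt" is a placeholder.
CREDENTIALS = {"alice": hashlib.sha256(b"salt" + b"secret").hexdigest()}

def make_challenge() -> bytes:
    """Fresh random nonce sent to the client for each login attempt."""
    return os.urandom(16)

def client_response(password: str, challenge: bytes) -> str:
    """Client proves knowledge of the password without transmitting it."""
    pw_hash = hashlib.sha256(b"salt" + password.encode()).hexdigest()
    return hmac.new(pw_hash.encode(), challenge, hashlib.sha256).hexdigest()

def verify_login(user: str, challenge: bytes, response: str) -> bool:
    """Server recomputes the expected response and compares in constant time."""
    expected = hmac.new(CREDENTIALS[user].encode(), challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
assert verify_login("alice", challenge, client_response("secret", challenge))
```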
Abstract:
Introduction: A major focus of the data mining process, especially in machine learning research, is to automatically learn to recognize complex patterns and to help take adequate decisions based strictly on the acquired data. Since imaging techniques such as MPI (Myocardial Perfusion Imaging) in Nuclear Cardiology can occupy a large part of the daily workflow and generate gigabytes of data, computerized analysis can offer advantages over human analysis: shorter time, homogeneity and consistency, automatic recording of analysis results, and relatively low cost. Objectives: This study evaluates the efficacy of this methodology in the assessment of MPI stress studies and in the decision of whether or not to continue the evaluation of each patient. The objective was to automatically classify a patient's test into one of three groups: "Positive", "Negative" and "Indeterminate". "Positive" patients would proceed directly to the rest part of the exam, "Negative" patients would be exempted from continuation, and only the "Indeterminate" group would require the clinician's analysis, thus saving the clinician's effort, increasing workflow fluidity at the technologist's level and probably saving patients' time. Methods: The WEKA v3.6.2 open-source software was used for a comparative analysis of three WEKA algorithms ("OneR", "J48" and "Naïve Bayes") in a retrospective study of the "SPECT Heart Dataset", available from the University of California, Irvine, Machine Learning Repository, taking the corresponding clinical results, signed off by expert nuclear cardiologists, as the reference. For evaluation purposes, criteria such as "Precision", "Incorrectly Classified Instances" and "Receiver Operating Characteristic (ROC) areas" were considered. Results: The interpretation of the data suggests that the Naïve Bayes algorithm has the best performance among the three selected algorithms. Conclusions: It is believed, and apparently supported by the findings, that machine learning algorithms could significantly assist, at an intermediary level, in the analysis of scintigraphic data obtained from MPI, namely after stress acquisition, eventually increasing the efficiency of the entire system and potentially easing the roles of both technologists and nuclear cardiologists. In the continuation of this study, it is planned to use more patient information and to significantly enlarge the study population in order to improve system accuracy.
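Since the study's WEKA workflow is not reproduced here, the sketch below illustrates the same kind of three-classifier comparison with scikit-learn analogues: GaussianNB for Naïve Bayes, a full decision tree standing in for J48 (C4.5), and a one-level stump approximating OneR. The file name SPECT.train follows the UCI repository naming, but the local path and the 10-fold ROC-area protocol are assumptions.

```python
# Sketch of a three-classifier comparison in the spirit of the study.
# scikit-learn stands in for WEKA; download the SPECT data separately
# from the UCI Machine Learning Repository.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

data = np.loadtxt("SPECT.train", delimiter=",")
X, y = data[:, 1:], data[:, 0]  # 22 binary features; first column is the label

models = {
    "OneR-like stump": DecisionTreeClassifier(max_depth=1),  # single-rule baseline
    "J48-like tree": DecisionTreeClassifier(),                # C4.5-style tree
    "Naive Bayes": GaussianNB(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
    print(f"{name}: mean ROC area = {scores.mean():.3f}")
```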
Abstract:
Introduction: Although relative uptake values are not the main objective of a 99mTc-DMSA scan, they are important quantitative information. In most dynamic renal scintigraphies, attenuation correction is essential for a reliable quantification result. In DMSA scans, however, the absence of significant background and the smaller attenuation in pediatric patients mean that attenuation correction techniques are usually not applied. The geometric mean is the most common method, but it requires the acquisition of an additional anterior projection, which a large number of Nuclear Medicine departments do not acquire. This method, and the attenuation factors proposed by Tonnesen, were correlated with the absence of any attenuation correction procedure. Material and Methods: Images from 20 individuals (aged 3 ± 2 years) were used and the two attenuation correction methods applied. The mean acquisition time (post DMSA administration) was 3.5 ± 0.8 h. Results: The absence of attenuation correction showed a good correlation with both attenuation correction methods (r = 0.73 ± 0.11), and the mean difference in the uptake values between the methods was 4 ± 3. The correlation was higher for younger patients. The two attenuation correction methods correlated better with each other than with the "no attenuation correction" method (r = 0.82 ± 0.8), with mean uptake-value differences of 2 ± 2. Conclusion: The decision not to apply any attenuation correction method can be justified by the minor differences observed in the relative kidney uptake values. Nevertheless, when an accurate value of the relative kidney uptake is required, an attenuation correction method should be used. The attenuation correction factors proposed by Tonnesen are easy to implement and thus constitute a practical alternative, namely when the anterior projection needed for the geometric mean methodology is not acquired.
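For context, the geometric-mean method referred to combines background-corrected anterior (A) and posterior (P) counts of each kidney as GM = sqrt(A x P) before computing relative uptake; the sketch below reproduces that standard calculation with invented counts.

```python
import math

def geometric_mean_uptake(left_ant, left_post, right_ant, right_post):
    """Relative kidney uptake (%) from anterior/posterior counts
    using geometric-mean attenuation correction: GM = sqrt(A * P)."""
    gm_left = math.sqrt(left_ant * left_post)
    gm_right = math.sqrt(right_ant * right_post)
    total = gm_left + gm_right
    return 100 * gm_left / total, 100 * gm_right / total

# Invented example counts (background already subtracted):
left_pct, right_pct = geometric_mean_uptake(10500, 12200, 9800, 11000)
print(f"left {left_pct:.1f}%  right {right_pct:.1f}%")
```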
Abstract:
Master's in Electrical and Computer Engineering. Area of Specialization in Autonomous Systems.
Abstract:
As the ceramics sector is an intensive energy consumer, the main objective of this work was the elaboration of a plan to optimize the energy performance of pottery unit number three of Fábrica Cerâmica de Valadares. To this end, an energy audit of this autonomous unit was carried out. The total value obtained for the thermal gains was 8.7×10⁷ kJ/day, 82% of which comes from the combustion of natural gas. The energy losses, in turn, are around 8.2×10⁷ kJ/day, with the exhaust air and the building envelope as the main contributors, weighing 42% and 38%, respectively. Taking these values into account, several insulation measures were studied for the roof, floor, walls and air leakage through cracks in the building. For the roof insulation, the replacement of the existing fibre-cement tiles and insulation by sandwich roof panels was suggested. This action allows a saving of €64,796/year, with an investment of €57,029 and a payback period of 0.9 years. The Net Present Value (NPV) in the 5th year was €184,069, with an Internal Rate of Return (IRR) of 92%. To insulate the floor, the use of 20 mm expanded polyurethane (PU) boards was suggested. This achieves a saving of €7,442/year, with an investment of €21,708 and a payback time of 2.9 years. At the end of the 5th year of the project's useful life, the NPV is €4,070 and the IRR 7%. Regarding the insulation of the walls and pillars, the use of PU boards (30 mm) covered with galvanized iron sheet was suggested. The payback time of the investment is 1.5 years, since the investment is €13,670 and the annual saving will be €9,183. This solution presents, in the last year, an NPV of €12,835 and an IRR of 22%. For the sealing of the building's cracks, a 20% reduction of their free area was suggested. This optimization measure implies an investment of €8,000 and proves sufficiently effective, as it presents a payback time of 0.67 years. The NPV and IRR of this solution in the last year of the investment project's useful life are €36,835 and 35%, respectively. Finally, the installation of a control system was also suggested, aimed at recovering hot air from the kiln installed on the floor below the pottery to preheat the air fed to the heat generators. This measure would imply an investment of €4,000, with a payback time of 2.4 years and an annual saving of €1,686. The investment is advisable since, in the 5th year, the NPV is €1,956 and the IRR is 17%.
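The payback, NPV and IRR figures quoted follow standard discounted-cash-flow arithmetic; the sketch below reproduces it for the roof-insulation measure over the stated five-year horizon. The 10% discount rate is an assumption, since the abstract does not state the rate used.

```python
def payback(investment, annual_saving):
    """Simple payback period in years."""
    return investment / annual_saving

def npv(investment, annual_saving, years, rate):
    """Net Present Value of a constant annual saving over `years`."""
    return -investment + sum(annual_saving / (1 + rate) ** t
                             for t in range(1, years + 1))

def irr(investment, annual_saving, years, lo=0.0, hi=10.0):
    """Internal Rate of Return found by bisection on the NPV."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(investment, annual_saving, years, mid) > 0:
            lo = mid
        else:
            hi = mid
    return mid

# Roof-insulation measure from the abstract: invest 57,029 EUR, save 64,796 EUR/year.
print(f"payback = {payback(57029, 64796):.1f} years")   # ~0.9 years, as quoted
print(f"IRR     = {irr(57029, 64796, 5):.0%}")           # on the order of 100%
print(f"NPV(5y) = {npv(57029, 64796, 5, 0.10):,.0f} EUR (assumed 10% rate)")
```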
Abstract:
Location-based services have given new impetus to the creativity of mobile application developers. The popularization of devices with built-in positioning capabilities gave rise to applications that manage and present information based on the user's position. Since then, the mobile market has witnessed the emergence of new categories of applications that take advantage of this capability. Among them, remote device monitoring stands out, having assumed growing importance both in the consumer and in the business sector. This dissertation begins by presenting the state of the art of the different positioning systems, categorized by their effectiveness in indoor or outdoor environments, as well as different near-real-time communication protocols. An analysis of the current state of the mobile market is also made. The market currently comprises different mobile platforms whose unique characteristics make them compete with one another to expand their market share, so a brief study of the most relevant current mobile operating systems is presented. A deeper look is also taken at the architecture of Apple's mobile platform, iOS, which served as the basis for the development of a solution optimized for locating and monitoring mobile devices. Monitoring implies an intensive use of energy and bandwidth resources that today's mobile devices are not able to sustain. Given the high energy consumption of GPS in the face of the limited battery life of these devices, a study is presented that sets out solutions for managing GPS usage in an optimized way. The high cost of the data plans offered by mobile operators is also considered, so solutions aimed at minimizing bandwidth usage are explored. From this work is born the EyeGotcha application, which, besides locating other mobile device users in an optimized way, also makes it possible to monitor their actions based on a set of predefined rules. These actions are reported to the monitoring entities automatically, in the form of alerts. With the commercialization of the application in view, a business model is presented that makes it possible to obtain revenues capable of covering the maintenance costs of the services on which the operation of the mobile application depends.
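The abstract does not spell out the GPS management strategy; as a hypothetical illustration of this kind of optimization, the sketch below adapts the interval between GPS fixes to the distance travelled since the last fix, letting the receiver stay off longer when the device is stationary. All thresholds and names are invented.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Approximate distance in metres between two WGS-84 coordinates."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def next_fix_interval(prev, curr, base=30.0, max_interval=600.0):
    """Duty-cycling rule: the less the device moved since the last fix,
    the longer the GPS can stay off (all thresholds are invented)."""
    moved = haversine_m(*prev, *curr)
    if moved < 10:       # essentially stationary
        return max_interval
    if moved < 200:      # walking pace
        return 4 * base
    return base          # moving fast: poll at the base rate

print(next_fix_interval((41.178, -8.608), (41.179, -8.607)), "s until next fix")
```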
Abstract:
The navigation of autonomous vehicles in unstructured environments remains an open problem, and the complexity of the real world is still a challenge. The difficult characterization of irregular terrain and of dynamic, poorly distinguishable objects (and the lack of localization references) has driven the study and development of several methods for modelling three-dimensional space efficiently and in real time. The work carried out in this dissertation is part of the strategy of the Laboratório de Sistemas Autónomos (LSA) in the research and development of sensory systems that increase the perception capabilities of robotic platforms. The development of a three-dimensional modelling system aims to give the LINCE (Land INtelligent Cooperative Explorer) and TIGRE (Terrestrial Intelligent General proposed Robot Explorer) projects greater autonomy and greater exploration and mapping capability. We present some sensors used for the acquisition of three-dimensional models, as well as some of the methods most used in the mapping process and their application on robotic platforms. Throughout this dissertation, techniques for obtaining three-dimensional models are presented and validated. The problem of analysing the colour and geometry of objects, and of creating realistic models that represent them, is addressed. We developed a system that obtains three-dimensional volumetric data from multiple readings of a medium-range two-dimensional Laser Range Finder, merging the resulting data sets into a coherent, referenced point cloud. Segmentation techniques were developed and implemented that make it possible to inspect a point cloud and classify it with respect to its geometric characteristics, as well as to the type of structures it represents. Some techniques for creating Digital Elevation Maps are presented, and a new method was developed that takes advantage of the segmentation performed.
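As an illustration of the acquisition principle described (sweeping a 2D laser range finder to build 3D data), the sketch below converts one scan's polar readings, together with a known tilt angle, into Cartesian points; the frame convention and all parameter values are assumptions.

```python
import math

def scan_to_points(ranges, angle_min, angle_step, tilt):
    """Convert one 2D laser scan into 3D points, assuming the scan plane
    is pitched by `tilt` radians about the sensor's y-axis (convention assumed)."""
    points = []
    for i, r in enumerate(ranges):
        theta = angle_min + i * angle_step      # in-plane beam angle
        x_s, y_s = r * math.cos(theta), r * math.sin(theta)
        # Rotate the scan plane by the tilt angle to lift it into 3D.
        x = x_s * math.cos(tilt)
        z = x_s * math.sin(tilt)
        points.append((x, y_s, z))
    return points

# One synthetic 181-beam scan over 180 degrees at a 10-degree tilt:
cloud = scan_to_points([2.0] * 181, -math.pi / 2, math.pi / 180, math.radians(10))
print(len(cloud), "points; first:", cloud[0])
```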
Abstract:
Solvent extraction is considered a multi-criteria optimization problem, since several chemical species with similar extraction kinetic properties are frequently present in the aqueous phase and selective extraction is not practicable. This optimization, applied to mixer–settler units, considers the best parameters and operating conditions, as well as the best structure or process flow-sheet. Global process optimization is performed for a specific flow-sheet, and a comparison of Pareto curves for different flow-sheets is made. The positive weight sum approach linked to the sequential quadratic programming method is used to obtain the Pareto set. In all investigated structures, recovery increases with hold-up, residence time and agitation speed, while purity shows the opposite behaviour. For the same treatment capacity, counter-current arrangements are shown to promote recovery without significant impairment of purity. Recycling the aqueous phase is shown to be irrelevant, but organic recycling with as many stages as economically feasible clearly improves the design criteria and reduces the most efficient organic flow-rate.
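The positive weight sum approach mentioned can be sketched generically: sweep the weight, solve each scalarized problem with an SQP solver, and collect the resulting trade-off points. The toy recovery and impurity models below are invented stand-ins, not the paper's mixer–settler model.

```python
# Sketch of the positive weight sum approach: sweep the weight w, solve each
# scalarized problem with SLSQP (an SQP method), and collect the Pareto set.
import numpy as np
from scipy.optimize import minimize

def recovery(x):   # invented toy model: recovery rises with residence time x
    return 1.0 - np.exp(-2.0 * x[0])

def impurity(x):   # invented toy model: impurity also rises with x
    return x[0] ** 2

pareto = []
for w in np.linspace(0.05, 0.95, 10):
    # Scalarized objective: maximize w*recovery - (1-w)*impurity.
    obj = lambda x: -(w * recovery(x) - (1 - w) * impurity(x))
    res = minimize(obj, x0=[0.5], method="SLSQP", bounds=[(0.0, 3.0)])
    pareto.append((recovery(res.x), impurity(res.x)))

for r, p in pareto:
    print(f"recovery={r:.3f}  impurity={p:.3f}")
```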
Abstract:
We report the results of a study of the effects of sulphurization time on Cu2ZnSnS4 absorbers and thin-film solar cells prepared from dc-sputtered stacked metallic precursors. Three different time intervals at the maximum sulphurization temperature were considered: 10 min, 30 min and 60 min. The effects of changing this parameter were studied both on the absorber layer properties and on the final solar cell performance. The composition, structure, morphology and thicknesses of the CZTS layers were analyzed. The electrical characterization of the absorber layer was carried out by measuring the transversal electrical resistance of the samples as a function of temperature. This study shows an increase of the conductivity activation energy from 10 meV to 54 meV as the sulphurization time increases from 10 min to 60 min. The solar cells were built with the following structure: SLG/Mo/CZTS/CdS/i-ZnO/ZnO:Al/Ni:Al grid. Several AC-response equivalent-circuit models were tested to fit the impedance measurements, and the best fits were used to extract the device series and shunt resistances and capacitances. The absorber layer's electronic properties were also determined using the Mott–Schottky method. The results show a decrease of the average acceptor doping density from 2.0×10¹⁷ cm⁻³ to 6.5×10¹⁵ cm⁻³ and of the built-in voltage from 0.71 V to 0.51 V with increasing sulphurization time. They also show an increase of the depletion region width from approximately 90 nm to 250 nm.
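The Mott–Schottky analysis quoted extracts the acceptor density and built-in voltage from the slope and intercept of 1/C² versus bias, via 1/C² = 2(V_bi − V)/(q ε ε₀ N_A A²); the sketch below performs that standard fit on synthetic data (the permittivity, device area and generated values are invented, not the paper's).

```python
import numpy as np

Q = 1.602e-19      # elementary charge [C]
EPS0 = 8.854e-12   # vacuum permittivity [F/m]
EPS_R = 10.0       # assumed relative permittivity of the absorber
AREA = 0.5e-4      # assumed device area: 0.5 cm^2, in m^2

def mott_schottky_fit(v, c):
    """Fit 1/C^2 = slope*V + intercept; slope = -2/(q*eps*N_A*A^2)
    gives the acceptor density, and V_bi = -intercept/slope."""
    inv_c2 = 1.0 / c ** 2
    slope, intercept = np.polyfit(v, inv_c2, 1)
    n_a = -2.0 / (Q * EPS_R * EPS0 * AREA ** 2 * slope)
    v_bi = -intercept / slope
    return n_a, v_bi

# Synthetic capacitance-voltage data generated from invented values:
v = np.linspace(-1.0, 0.3, 30)
true_na, true_vbi = 1e22, 0.7   # N_A in m^-3, V_bi in V
inv_c2 = 2.0 * (true_vbi - v) / (Q * EPS_R * EPS0 * true_na * AREA ** 2)
n_a, v_bi = mott_schottky_fit(v, 1.0 / np.sqrt(inv_c2))
print(f"N_A = {n_a / 1e6:.2e} cm^-3,  V_bi = {v_bi:.2f} V")
```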
Abstract:
The development of neonatal intensive care has led to an increase in the prevalence of children with low birth weight and associated morbidity. The objectives of this study were to verify: (1) Is there an association between birth weight (BW) and neuromotor performance? (2) Is the neuromotor performance of twins within the normal range? (3) Are intra-pair similarities in the neuromotor development of monozygotic (MZ) and dizygotic (DZ) twins of unequal magnitude? The sample consisted of 191 children (78 MZ and 113 DZ), aged 8.9 ± 3.1 years and with an average BW of 2246.3 ± 485.4 g. In addition to gestational characteristics, sports participation and the Zurich Neuromotor Assessment (ZNA) were observed at childhood age. The statistical analysis was carried out with SPSS 18.0 and STATA 10 software and the ZNA performance scores. The level of significance was 0.05. For the neuromotor items, high intra- and inter-investigator reliabilities were obtained (0.793
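Reliability figures of this kind are commonly reported as intraclass correlation coefficients; as a generic illustration rather than the study's SPSS/STATA pipeline, the sketch below computes a one-way ICC(1,1) from repeated ratings with invented data.

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an (n subjects x k raters) array."""
    n, k = ratings.shape
    subject_means = ratings.mean(axis=1)
    grand_mean = ratings.mean()
    msb = k * ((subject_means - grand_mean) ** 2).sum() / (n - 1)          # between subjects
    msw = ((ratings - subject_means[:, None]) ** 2).sum() / (n * (k - 1))  # within subjects
    return (msb - msw) / (msb + (k - 1) * msw)

# Invented example: 6 children scored twice by the same investigator.
scores = np.array([[12.0, 11.5], [9.0, 9.5], [15.0, 14.0],
                   [7.5, 8.0], [13.0, 13.5], [10.0, 10.5]])
print(f"intra-investigator ICC = {icc_oneway(scores):.3f}")
```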
Abstract:
Temporal lobe epilepsy (TLE) in infancy has been the subject of varied research. Topographical and structural evidence coincides with the neuronal systems responsible for the most specialized and complex auditory processing. Recent studies have been showing the need for hemispheric asymmetry for the optimization of central auditory processing (CAP) and for the acquisition and learning of a language system. A new functional research paradigm is required to study mental processes that demand methods of analysing cognitive-sensory information processed in very short periods of time (milliseconds), such as event-related potentials (ERPs). Thus, in this article, we hypothesize that TLE in infancy could be a good model for the topographic and functional study of CAP and its development process, contributing to a better understanding of the learning difficulties of children with this neurological disorder.
Abstract:
Advances in networking and information technologies are transforming factory-floor communication systems into a mainstream activity within industrial automation. It is now recognized that future industrial computer systems will be intimately tied to real-time computing and to communication technologies. For this vision to succeed, complex heterogeneous factory-floor communication networks (including mobile/wireless components) need to function in a predictable, flawless, efficient and interoperable way. In this paper we revisit the issue of supporting real-time communications in hybrid wired/wireless fieldbus-based networks, bringing to it some experimental results obtained in the framework of the RFieldbus ISEP pilot.
Abstract:
Robotics research in Portugal is increasing every year, but few students embrace it as one of their first choices of study. Until recently, job offers for engineers were plentiful, and those looking for a degree in science and technology would avoid areas considered to be demanding, like robotics. At the undergraduate level, robotics programs are still competing for a place in the classical engineering graduate curricula. Innovative and dynamic Master's programs may offer a solution to this gap. The Master's degree in autonomous systems at the Instituto Superior de Engenharia do Porto (ISEP), Porto, Portugal, was designed to provide solid training in robotics and has been showing interesting results, mainly due to differences in course structure and the context in which students are welcomed to study and work.
Abstract:
This paper provides a comprehensive study on how to use Profibus fieldbus networks to support real-time industrial communications, that is, on how to ensure the transmission of real-time messages within a maximum bounded time. Profibus is based on a simplified timed token (TT) protocol, which is a well-proven solution for real-time communication systems. However, Profibus differs from the TT protocol in ways that prevent the application of the usual TT real-time analysis. In fact, real-time solutions for networks based on the TT protocol rely on the possibility of allocating specific bandwidth to the real-time traffic, which means that a minimum amount of time is always available, at each token visit, to transmit real-time messages. Conversely, with the Profibus protocol, in the worst case only one real-time message is processed per token visit. The authors propose two approaches to guarantee the real-time behavior of the Profibus protocol: (1) an unconstrained low-priority traffic profile; and (2) a constrained low-priority traffic profile. The proposed analysis shows that the first profile is a suitable approach for more responsive systems (tighter deadlines), while the second allows for increased non-real-time traffic throughput.
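As a generic illustration of timed-token reasoning (not the paper's actual analysis), the sketch below bounds the token cycle when, as in Profibus, each master may still transmit one high-priority message cycle even if the token arrives late; the parameter values are invented.

```python
def worst_case_token_cycle(t_tr, c_max, n_masters):
    """Upper bound on the token cycle when each of the n masters may send
    one high-priority message cycle (length c_max) even if the token is late:
    T_cycle <= T_TR + n * c_max (a commonly cited timed-token bound)."""
    return t_tr + n_masters * c_max

def response_time_bound(t_tr, c_max, n_masters):
    """A message queued just after the token departs waits at most one full
    worst-case cycle before its station may transmit it."""
    return worst_case_token_cycle(t_tr, c_max, n_masters) + c_max

# Invented parameters: 10 ms target rotation, 0.5 ms message cycles, 4 masters.
print(f"worst-case cycle    = {worst_case_token_cycle(10.0, 0.5, 4):.1f} ms")
print(f"response-time bound = {response_time_bound(10.0, 0.5, 4):.1f} ms")
```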
Abstract:
This paper presents an architecture (Multi-μ) being implemented to study and develop software-based fault-tolerance mechanisms for real-time systems, using the Ada language (Ada 95) and Commercial Off-The-Shelf (COTS) components. Several issues regarding fault tolerance are presented, and mechanisms to achieve fault tolerance by software active replication in Ada 95 are discussed. The Multi-μ architecture, based on a specifically proposed Fault Tolerance Manager (FTManager), is then described. Finally, some considerations are made about the work being done and essential future developments.
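Active replication, as discussed, has every replica compute each request while a voter masks faulty results; although Multi-μ's mechanisms are in Ada 95, the Python sketch below illustrates the majority-voting idea with invented names and data.

```python
from collections import Counter

def vote(replica_outputs):
    """Majority voter: returns the value produced by most replicas,
    masking a minority of faulty results; raises if no majority exists."""
    value, count = Counter(replica_outputs).most_common(1)[0]
    if count <= len(replica_outputs) // 2:
        raise RuntimeError("no majority: too many replica failures")
    return value

# Three actively replicated computations of the same request; one replica faulty.
replicas = [lambda x: x * x, lambda x: x * x, lambda x: x * x + 1]
outputs = [r(7) for r in replicas]
print(vote(outputs))   # 49: the faulty replica's 50 is outvoted
```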