936 results for Data Acquisition Methods.
Abstract:
Meeting the world's ever more demanding energy needs, together with the urgent need to find ways of using energy that pollute as little as possible, calls for exploring approaches that can satisfy both requirements. Choosing renewable energies for power production is becoming increasingly attractive, from both an environmental and an economic point of view. Fuzzy logic is grounded in the capture of vague information, in essence the language spoken by human beings, and makes it possible to convert this kind of language into numerical form, thereby allowing computational manipulation. Climatic elements such as the sun and the wind can be described as linguistic variables, for example strong wind, low temperature or weak irradiation. This makes fuzzy inference systems a natural choice for implementing control based on these phenomena. To carry out the proposed work, studies on renewable energies were undertaken, with particular focus on solar and wind power. The concepts of fuzzy logic and fuzzy inference systems were also studied, in order to understand the various parameters that make up this subject. A data acquisition system was studied and developed, together with the fuzzy controller that is the crux of the work described in this report. The work was carried out in MATLAB, in which applications were developed to obtain climatic data for use in the Fuzzy Logic toolbox, which in turn was used to develop the entire control algorithm. With data acquisition in place and the required variables defined, the fuzzy controller was implemented and tuned throughout the work so as to guarantee the best possible results. Using MATLAB's Guide tool, the system's user interface was created, making it possible to inspect the energy to be produced as well as the contribution of each renewable source to that output. Finally, the results were analysed by comparing the expected real values with the values obtained by the fuzzy controller, and conclusions and possible future developments of this work were set out.
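Illustrative only: the thesis builds its controller with MATLAB's Fuzzy Logic toolbox, which is not shown in the abstract. The following is a minimal sketch of the same idea in Python with scikit-fuzzy, where the universes, membership shapes and the two rules are all assumptions rather than the thesis's actual design.

```python
# Minimal sketch, assuming illustrative universes and rules; the thesis
# itself uses MATLAB's Fuzzy Logic toolbox, not scikit-fuzzy.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

wind = ctrl.Antecedent(np.arange(0, 26, 1), 'wind_speed')           # m/s
irradiance = ctrl.Antecedent(np.arange(0, 1001, 1), 'irradiance')   # W/m^2
solar_share = ctrl.Consequent(np.arange(0, 101, 1), 'solar_share')  # % of output

# Linguistic terms like "strong wind" or "weak irradiation" as membership functions
wind['weak'] = fuzz.trimf(wind.universe, [0, 0, 10])
wind['strong'] = fuzz.trimf(wind.universe, [5, 25, 25])
irradiance['weak'] = fuzz.trimf(irradiance.universe, [0, 0, 450])
irradiance['strong'] = fuzz.trimf(irradiance.universe, [300, 1000, 1000])
solar_share['low'] = fuzz.trimf(solar_share.universe, [0, 0, 50])
solar_share['high'] = fuzz.trimf(solar_share.universe, [50, 100, 100])

rules = [
    ctrl.Rule(irradiance['strong'] | wind['weak'], solar_share['high']),
    ctrl.Rule(irradiance['weak'] | wind['strong'], solar_share['low']),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input['wind_speed'] = 12.0
sim.input['irradiance'] = 700.0
sim.compute()
print(f"solar contribution: {sim.output['solar_share']:.1f}%")
```

The point of the sketch is the pipeline the abstract describes: linguistic terms become membership functions, rules combine them, and defuzzification yields a crisp share for each energy source.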
Abstract:
Family foster care offers a living environment to children removed from their biological families, for an indeterminate period that may extend, at the limit, until adulthood or independence. A stable family context allows the child to develop feelings of security and permanence, together with the possibility of maintaining contact with the biological family. In many circumstances the child can and should remain with the foster carers, and recognition of this parental role is a step that can help dispel ambiguities and uncertainties that are harmful to the system and to the practices it shapes. In Portugal, however, family foster care is a temporary measure, whose application depends on the foreseeability of the child's or young person's return to the family of origin. The aim of this article is, after a brief characterization of the Portuguese child and youth protection system, to analyse permanence in foster care from 2006 to 2011, based on the characterization reports of children and young people in care. We then present and discuss data collected in a study carried out in the district of Porto, covering the 289 children in foster care in May 2011, who represented 52% of family foster placements of children in Portugal. The results were obtained through a data collection form filled in from the official records of each fostered child and through 52 interviews with foster carers. Among the main results are the long periods of stay, the children's permanence in the initial foster family, and the overall positive assessment of the outcomes, which allows us to identify a set of challenges facing Portuguese foster care in the immediate future.
Abstract:
The authors analyzed 704 transthoracic echocardiographic (TTE) examinations performed routinely on all patients admitted to a general 16-bed Intensive Care Unit (ICU) during an 18-month period. Data acquisition and the prevalence of abnormalities of cardiac structures and function were assessed, as well as new, previously unknown severe diagnoses. A TTE was performed within the first 24 h of admission on 704 consecutive patients, with a mean age of 61.5+/-17.5 years, ICU stay of 10.6+/-17.1 days, APACHE II 22.6+/-8.9, and SAPS II 52.7+/-20.4. In four patients, TTE could not be performed. Left ventricular (LV) dimensions were quantified in 689 (97.8%) patients, and LV function in 670 (95.2%) patients. Cardiac output (CO) was determined in 610 (86.7%), and mitral E/A in 399 (85.9% of patients in sinus rhythm). Echocardiographic abnormalities were detected in 234 (33%) patients, the most common being left atrial (LA) enlargement (n=163) and LV dysfunction (n=132). Patients with these alterations were older (66+/-16.5 vs 58.1+/-17.4, p<0.001), presented a higher APACHE II score (24.4+/-8.7 vs 21.1+/-8.9, p<0.001), and had a higher mortality rate (40.1% vs 25.4%, p<0.001). Severe, previously unknown echocardiographic diagnoses were detected in 53 (7.5%) patients; the most frequent was severe LV dysfunction. A multivariate logistic regression analysis showed that mortality was affected by tricuspid regurgitation (p=0.016, CI 1.007-1.016) and ICU stay (p<0.001, CI 1-1.019). We conclude that TTE can assess most cardiac structures in a general ICU. One-third of the patients studied presented cardiac structural or functional alterations, and 7.5% had severe, previously unknown diagnoses.
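The mortality analysis above is a standard multivariate logistic regression. As a hedged illustration of the computation only, here is a minimal sketch with statsmodels on synthetic data; the effect sizes, variable names and data are assumptions, not the study's registry.

```python
# Minimal sketch of a multivariate logistic regression like the mortality
# analysis reported above. Synthetic data; assumed (invented) effect sizes.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 704
tricuspid = rng.integers(0, 2, n)                  # regurgitation present?
icu_stay = rng.exponential(10.6, n)                # days (mean as reported)
true_logit = -1.5 + 0.8 * tricuspid + 0.02 * icu_stay
died = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

X = sm.add_constant(np.column_stack([tricuspid, icu_stay]))
fit = sm.Logit(died, X).fit(disp=0)
print(fit.summary(xname=['const', 'tricuspid', 'icu_stay']))
print('odds ratios:', np.exp(fit.params[1:]))      # cf. the reported CIs
```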
Abstract:
This dissertation was developed at the mass laboratory of the former Direção Regional da Economia do Norte (DRE-Norte), within the Dissertation/Project/Professional Internship (DPEPR) curricular unit of the Master's programme in Instrumentation and Metrology (MEIM) at the Instituto Superior de Engenharia do Porto (ISEP). The DRE-Norte mass laboratory was dedicated, among other activities, to the calibration of weights of accuracy classes F1 and below, as defined in recommendation R111-1:2004 of the International Organization of Legal Metrology (OIML), for nominal values from 1 mg up to 1 000 kg. A real weight calibration system was implemented using two mass comparators, with wired and wireless data communication interfaces to a PC (Personal Computer) and automatic data acquisition. An application was developed in Visual Basic to connect the two mass comparators to the PC and automate the weight calibration process based on the ABBA method. The software developed was applied in a real weight calibration case, in an interlaboratory comparison involving seven national laboratories, among them the DRE-Norte mass laboratory.
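The ABBA method that the Visual Basic application automates is the standard comparison scheme of OIML R111: each cycle reads reference (A1), test (B1), test (B2), reference (A2), so that linear drift of the comparator cancels in the average. A minimal sketch of the arithmetic follows, with made-up readings and an assumed certificate value; air buoyancy corrections are omitted.

```python
# Minimal sketch of the ABBA arithmetic (per OIML R111). Readings and the
# reference certificate value are hypothetical; buoyancy corrections omitted.
from statistics import mean

def abba_difference(a1: float, b1: float, b2: float, a2: float) -> float:
    """Indication difference (test minus reference) for one ABBA cycle, in mg."""
    return (b1 + b2) / 2 - (a1 + a2) / 2

cycles = [  # comparator readings in mg (hypothetical)
    (0.012, 0.145, 0.147, 0.015),
    (0.014, 0.149, 0.146, 0.013),
    (0.011, 0.144, 0.148, 0.014),
]
mean_diff = mean(abba_difference(*c) for c in cycles)

ref_mass_g = 100.00012          # reference weight's conventional mass (assumed certificate)
test_mass_g = ref_mass_g + mean_diff / 1000.0   # mg -> g
print(f"mean ABBA difference: {mean_diff:.4f} mg")
print(f"conventional mass of test weight: {test_mass_g:.5f} g")
```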
Abstract:
This dissertation arose from the need to optimize the technical and, above all, human resources allocated to the verification of measuring instruments under Legal Metrological Control. These verifications, carried out under the competences formerly assigned to the Quality Services Directorate of the then Direção Regional da Economia do Norte, were performed by the Quality and Licensing Division, at the time headed by the author of this thesis, namely with respect to laboratory tests of analogue pressure gauges. The main objective of the work was achieved through the development of an automated mechanism, materialized in the construction of a prototype which, applied to the multiple pressure comparator previously in use, would read the indication of each analogue pressure gauge using image processing techniques, a task traditionally performed manually by a specialized operator. The command, control and measurement methodologies of this automated system were implemented in an algorithm developed in National Instruments LabVIEW® software, particularly as regards the processing of the images acquired by a USB video camera. The hardware interface was implemented using a USB-6212 Multifunction Data Acquisition (DAQ) module from the same manufacturer. For the horizontal and vertical positioning of the USB video camera, linear guides driven by stepper motors were used, and these devices were also employed to actuate the pressure comparator. Finally, the reading of the reference standard was acquired digitally through its virtualization, together with an application developed in this project, named appMAN, intended for the overall management of the automated system, namely the calculation of the measurement result, error and associated uncertainty, and the issuing of the corresponding supporting documents.
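The thesis implements the needle reading in LabVIEW. Purely as an illustration of the image processing idea, here is a rough Python/OpenCV analogue that finds the needle as the dominant straight segment and maps its angle linearly to a pressure value; the file name, dial geometry and scale limits are all assumptions.

```python
# Rough sketch, assuming a 270-degree, 0-10 bar dial and a frame 'gauge.png';
# a real gauge reader also needs the dial centre to disambiguate direction.
import cv2
import numpy as np

img = cv2.imread('gauge.png')                      # hypothetical USB-camera frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# The needle appears as the longest straight segment on the dial
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=60, maxLineGap=5)
x1, y1, x2, y2 = max(lines[:, 0],
                     key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))   # needle angle in the image

# Linear map from needle angle to pressure (assumed scale)
ANGLE_MIN, ANGLE_MAX = -225.0, 45.0
P_MIN, P_MAX = 0.0, 10.0
pressure = P_MIN + (angle - ANGLE_MIN) * (P_MAX - P_MIN) / (ANGLE_MAX - ANGLE_MIN)
print(f"needle at {angle:.1f} deg -> {pressure:.2f} bar")
```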
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Oral busulfan is the historical backbone of the busulfan+cyclophosphamide regimen for autologous stem cell transplantation. However, intravenous busulfan has more predictable pharmacokinetics and less toxicity than oral busulfan; we therefore retrospectively analyzed data from 952 patients with acute myeloid leukemia who received intravenous busulfan for autologous stem cell transplantation. Most patients were male (n=531, 56%), and the median age at transplantation was 50.5 years. Two-year overall survival, leukemia-free survival, and relapse incidence were 67±2%, 53±2%, and 40±2%, respectively. The non-relapse mortality rate at 2 years was 7±1%. Five patients died from veno-occlusive disease. Overall leukemia-free survival and relapse incidence at 2 years did not differ significantly between the 815 patients transplanted in first complete remission (52±2% and 40±2%, respectively) and the 137 patients transplanted in second complete remission (58±5% and 35±5%, respectively). Cytogenetic risk classification and age were significant prognostic factors: the 2-year leukemia-free survival was 63±4% in patients with good risk cytogenetics, 52±3% in those with intermediate risk cytogenetics, and 37±10% in those with poor risk cytogenetics (P=0.01); patients ≤50 years old had better overall survival (77±2% versus 56±3%; P<0.001), leukemia-free survival (61±3% versus 45±3%; P<0.001), relapse incidence (35±2% versus 45±3%; P<0.005), and non-relapse mortality (4±1% versus 10±2%; P<0.001) than older patients. The combination of intravenous busulfan and high-dose melphalan was associated with the best overall survival (75±4%). Our results suggest that the use of intravenous busulfan simplifies the autograft procedure and confirm the usefulness of autologous stem cell transplantation in acute myeloid leukemia. As in allogeneic transplantation, veno-occlusive disease is an uncommon complication after an autograft using intravenous busulfan.
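The 2-year survival figures quoted above are the kind of estimate produced by a Kaplan-Meier analysis. A minimal sketch with the lifelines package follows, on invented follow-up data, not the study's registry.

```python
# Minimal Kaplan-Meier sketch on synthetic follow-up data (assumed, not the
# study's); 'months' is time to event or censoring, 'death' flags the event.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(1)
months = rng.exponential(40, 300).clip(max=60)   # follow-up time in months
death = rng.binomial(1, 0.4, 300)                # 1 = event observed, 0 = censored

kmf = KaplanMeierFitter()
kmf.fit(months, event_observed=death, label='overall survival')
print(kmf.survival_function_at_times(24))        # estimate at 2 years
```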
Abstract:
Thesis submitted in fulfilment of the requirements for the Degree of Master of Science in Computer Science.
Abstract:
Nowadays, the 3D scanning cameras and microscopes on the market use digital or discrete sensors, such as CCDs or CMOS, for object detection applications. However, these combined systems are not fast enough for some application scenarios, since they require large data processing resources and can be cumbersome. There is therefore a clear interest in exploring the possibilities and performance of analogue sensors, such as arrays of position sensitive detectors, with the final goal of integrating them in 3D scanning cameras or microscopes for object detection purposes. The work performed in this thesis deals with the implementation of prototype systems to explore object detection using amorphous silicon position sensors of 32 and 128 lines, produced in the clean room at CENIMAT-CEMOP. The first phase of this work took as its starting point the fabrication of the sensors and the study of their static and dynamic specifications and signal conditioning, in light of existing scientific and technological knowledge. Subsequently, suitable data acquisition and signal processing electronics were assembled. Various prototypes were developed for the 32 and 128 line PSD array sensors, and appropriate optical solutions were integrated to work with them, allowing the required experiments to be carried out and the results presented in this thesis to be obtained. All control, data acquisition and 3D rendering platform software was implemented for the existing systems, and all these components were combined into several integrated systems for the 32 and 128 line PSD 3D sensors. The performance of the 32 line PSD array sensor and system was evaluated for machine vision applications, such as 3D object rendering, as well as for microscopy applications, such as micro-object movement detection. Trials were also performed with the 128 line PSD sensor systems. Sensor channel non-linearities of approximately 4 to 7% were obtained. Overall, the results show that a linear array of 32/128 1D line sensors based on amorphous silicon technology can be used to render 3D profiles of objects. The system and setup presented allow 3D rendering at high speeds and high frame rates. The minimum detail or gap that the sensor system can detect is approximately 350 μm with the current setup. It is also possible to render an object in 3D within a scanning angle range of 15º to 85º and to identify its real height as a function of the scanning angle and the image displacement distance on the sensor. Both simple and less simple objects, such as a rubber and a plastic fork, can be rendered in 3D properly, accurately and at high resolution using this sensor and system platform. The n-i-p structure sensor system can detect primary and even derived colors of objects through a proper adjustment of the system's integration time and by combining white, red, green and blue (RGB) light sources. A mean colorimetric error of 25.7 was obtained. It is also possible to detect the movement of micrometer-scale objects using the 32 line PSD sensor system. This kind of setup makes it possible to detect whether a micro-object is moving, what its dimensions are and what its position is in two dimensions, even at high speeds. Results show a non-linearity of about 3% and a spatial resolution of < 2 µm.
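The height recovery described above follows the usual triangulation relation between scanning angle and image displacement on the sensor. The thesis's exact optical model is not reproduced here; this is a small sketch of the textbook form, with an assumed unit optical magnification.

```python
# Sketch of the triangulation relation, assuming unit magnification; the
# thesis's actual optics are not reproduced.
import math

def object_height_um(displacement_um: float, scan_angle_deg: float,
                     magnification: float = 1.0) -> float:
    """Object height from laser-spot displacement on the PSD and scan angle."""
    d_object = displacement_um / magnification   # back-project to object space
    return d_object / math.tan(math.radians(scan_angle_deg))

# The reported minimum detectable detail (350 um) across the 15-85 deg range
for angle in (15, 45, 85):
    print(f"{angle:2d} deg -> height {object_height_um(350, angle):7.1f} um")
```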
Abstract:
Until recently, hardly anyone could have predicted this course of GIS development. GIS is moving from the desktop to the cloud. Web 2.0 enabled people to feed data into the web, and these data are increasingly geolocated. The sheer volume of data formed what is called "Big Data", which science still does not fully know how to handle. Different data mining tools are used to try to extract useful information from this Big Data. In our study we deal with one part of these data: User Generated Geographic Content (UGGC). The Panoramio initiative allows people to upload photos and describe them with tags. These photos are geolocated, meaning that they have an exact location on the Earth's surface according to a certain spatial reference system. Using data mining tools, we try to answer whether it is possible to extract land use information from Panoramio photo tags, and to what extent this information can be accurate. Finally, we compared different data mining methods to determine which performs best for this kind of data, which is text. Our answers are quite encouraging: with more than 70% accuracy, we showed that extracting land use information is possible to some extent. We also found the Memory Based Reasoning (MBR) method to be the most suitable for this kind of data in all cases.
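Memory Based Reasoning is essentially k-nearest-neighbour classification over stored examples. A minimal scikit-learn analogue of classifying tag text into land-use classes follows; the tags and labels are invented, not the study's data.

```python
# Minimal MBR-style (k-NN) text classification sketch; tags and land-use
# labels below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

tags = ['beach sand sea', 'office towers downtown', 'wheat field tractor',
        'harbour cranes containers', 'pine forest trail']
land_use = ['coastal', 'urban', 'agricultural', 'industrial', 'forest']

clf = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=1))
clf.fit(tags, land_use)
print(clf.predict(['sea beach sunset']))   # -> ['coastal']
```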
Abstract:
In the last few years we have seen exponential growth in information systems, and parking information is one more example. Obtaining reliable, up-to-date information on parking slot availability is very important for the goal of traffic reduction, and parking slot prediction is a new topic that has already started to be applied: San Francisco in the United States and Santander in Spain are examples of projects carried out to obtain this kind of information. The aim of this thesis is the study and evaluation of methodologies for parking slot prediction and their integration in a web application, where all kinds of users will be able to see the current parking status as well as future status according to the parking model's predictions. The source of the data is ancillary to this work, but it still needs to be understood in order to understand parking behaviour. There are many modelling techniques used for this purpose, such as time series analysis, decision trees, neural networks and clustering. In this work, the author describes the most suitable techniques, analyses the results and points out the advantages and disadvantages of each one. The model learns the periodic and seasonal patterns of the parking status behaviour and, with this knowledge, can predict future status values given a date. The data come from Smart Park Ontinyent and consist of parking occupancy status together with timestamps, stored in a database. After data acquisition, data analysis and pre-processing were needed for the model implementations. The first test used a boosting ensemble classifier, employed over a set of decision trees created with the C5.0 algorithm from a set of training samples, to assign a prediction value to each object. In addition to the predictions, this work provides error measurements that indicate how reliable the predictions are. The second test used the TBATS function-fitting seasonal exponential smoothing model. Finally, as a last test, a model combining the previous two was tried, to see the result of the combination. The results were quite good for all of them, with average errors of 6.2, 6.6 and 5.4 vacancies for the three models respectively, i.e. roughly a 10% average error in parking slot predictions for a car park of 47 spaces. This result could be even better with more data available. In order to make this kind of information visible and reachable by anyone with an internet-connected device, a web application was built. Besides displaying the data, this application also offers different functions to improve the task of searching for parking. The new functions, apart from parking prediction, were:
- Park distances from user location: the distances from the user's current location to the different car parks in the city.
- Geocoding: the service for matching a literal description or an address to a concrete location.
- Geolocation: the service for positioning the user.
- Parking list panel: neither a service nor a function, just a better visualization and handling of the information.
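C5.0 boosting and TBATS are the tools actually used in the thesis. As a rough Python analogue of learning periodic and seasonal occupancy patterns from timestamps, here is a gradient-boosted model over calendar features of a synthetic series; all numbers, including the occupancy pattern for the 47-space car park, are invented.

```python
# Rough sketch, assuming a synthetic occupancy series; the thesis itself
# uses a C5.0 boosted ensemble and a TBATS model, not this exact setup.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

idx = pd.date_range('2014-01-01', periods=24 * 90, freq='h')
hour = idx.hour.to_numpy()
dow = idx.dayofweek.to_numpy()
rng = np.random.default_rng(2)
occupied = (25 + 12 * np.sin(2 * np.pi * hour / 24)      # daily cycle
            + 5 * (dow < 5)                              # weekday bump
            + rng.normal(0, 3, len(idx))).clip(0, 47)

X = pd.DataFrame({'hour': hour, 'dow': dow, 'month': idx.month.to_numpy()})
model = GradientBoostingRegressor().fit(X, occupied)

query = pd.DataFrame({'hour': [18], 'dow': [4], 'month': [6]})  # a Friday, 18:00, June
pred = model.predict(query)[0]
print(f"predicted occupancy: {pred:.1f} of 47 spaces ({47 - pred:.1f} vacancies)")
```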
Abstract:
The "CMS Safety Closing Sensors System" (SCSS, or CSS for brevity) is a remote monitoring system designed to control safety clearances and tight mechanical movements of parts of the CMS detector, especially during CMS assembly phases. We present the different systems that make up the SCSS: its sensor technologies, the readout system, and the data acquisition and control software. We also report calibration and installation details, which determine the resolution and limits of the system, and present our experience from operating the system and analysing the data collected since 2008. Special emphasis is given to studying positioning reproducibility during detector assembly and to understanding how magnetic fields influence the detector structure.
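The abstract does not disclose the SCSS internals. Purely to illustrate the clearance-monitoring idea, here is a toy check of sensor readings against safety limits; all names and values are invented.

```python
# Toy sketch only; sensor names, readings and limits are invented and do
# not reflect the actual SCSS implementation.
from dataclasses import dataclass

@dataclass
class ClearanceSensor:
    name: str
    reading_mm: float   # measured gap between detector parts
    limit_mm: float     # minimum safe clearance

def check(sensors: list[ClearanceSensor]) -> None:
    for s in sensors:
        status = 'OK' if s.reading_mm >= s.limit_mm else 'ALARM: stop movement'
        print(f"{s.name}: {s.reading_mm:.1f} mm (limit {s.limit_mm:.1f} mm) {status}")

check([ClearanceSensor('endcap/barrel gap', 82.4, 60.0),
       ClearanceSensor('platform clearance', 55.1, 60.0)])
```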
Abstract:
The purpose of this study is to explore the humorous side of television advertising and its impact on Portuguese consumers' hearts, minds and wallets. Both qualitative (in-depth interviews) and quantitative (an online survey and subsequent statistical data analysis) methods were used, guaranteeing more consistent, robust and valid research. Twenty-five face-to-face interviews with randomly chosen consumers, and three e-mail interviews with marketers and television advertisers, were conducted in order to explore the subject in depth. In addition, 360 people answered the online survey. Through the analysis of the data collected, humor perception was found to be positively correlated with persuasion and intention to purchase the product; intention to share the advert; message comprehension; product liking; and the development of positive feelings towards the brand and brand credibility. The main implication of these findings lies in the fact that humor in advertising can boost its effectiveness.
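The correlations reported above are the kind computed with a standard Pearson test. A minimal sketch follows, on invented 1-7 Likert survey scores, not the study's data.

```python
# Minimal Pearson correlation sketch on invented Likert-scale scores.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
humor_perception = rng.integers(1, 8, 360).astype(float)           # 360 respondents
purchase_intent = (humor_perception + rng.normal(0, 1.5, 360)).clip(1, 7)

r, p = pearsonr(humor_perception, purchase_intent)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```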
Abstract:
In the following text I will develop three major aspects. The first is to draw attention to what seem to have been the disciplinary fields where, despite everything, the Digital Humanities (in the broad sense adopted here) have asserted themselves most comprehensively. I think it is here that I run the greatest risks, not only for the reasons mentioned above, but certainly because a significant part of the achievements and of the researchers may have escaped the look I sought to cast over the past few decades, always influenced by my own experience and by work carried out in the field of History. But this can be considered a work in progress, open to criticism and suggestions. The second point to note is that emphasis will be given to the main lines of development in the relationship between historical research and digital methodologies, resources and tools. Finally, I will attempt a brief analysis of how the Digital Humanities discourse has been appropriated in recent years, admittedly with very debatable data and methods, because studies are still scarce and little systematic information is available that would allow one to go beyond an introductory reflection.