921 results for automated perimetry
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Ion Mobility Spectrometry coupled with Multi-Capillary Columns (MCC-IMS) is a fast analytical technique operating at atmospheric pressure with high sensitivity and selectivity, making it suitable for the analysis of complex biological matrices. MCC-IMS analysis yields a 3D spectrum with peaks corresponding to each of the detected substances, providing both quantitative and qualitative information. Sometimes the peaks of different substances overlap, making the quantification of the substances present in the biological matrices difficult. In the present work we use the peaks of isoprene and acetone as a model for this problem: these two volatile organic compounds (VOCs) produce two overlapping peaks when detected by MCC-IMS. We propose an algorithm to identify and quantify these two peaks. The algorithm uses image processing techniques to pre-process the spectra and to detect the position of the peaks, and then fits the data to a custom model in order to separate the peaks. Once the peaks are separated, it calculates the contribution of each peak to the data.
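A minimal sketch of the peak-separation step described above, assuming the two overlapping peaks can be modelled as 2D Gaussians (the abstract's custom model is not specified, and all numeric values are illustrative): local maxima from the image-processing stage seed a joint fit, and the fitted volumes give each substance's contribution.

```python
# Sketch: separate two overlapping spectrum peaks by jointly fitting a
# sum of two 2D Gaussians; the Gaussian model and parameters are
# illustrative assumptions, not the paper's actual custom model.
import numpy as np
from scipy.optimize import curve_fit

def two_peaks(coords, a1, x1, y1, s1, a2, x2, y2, s2):
    x, y = coords
    g1 = a1 * np.exp(-((x - x1) ** 2 + (y - y1) ** 2) / (2 * s1 ** 2))
    g2 = a2 * np.exp(-((x - x2) ** 2 + (y - y2) ** 2) / (2 * s2 ** 2))
    return g1 + g2

# Synthetic spectrum region containing two overlapping peaks plus noise.
x, y = np.meshgrid(np.linspace(0, 10, 80), np.linspace(0, 10, 80))
true = two_peaks((x, y), 1.0, 4.0, 5.0, 1.0, 0.6, 6.0, 5.0, 1.2)
data = true + np.random.normal(0, 0.02, true.shape)

# Initial guesses would come from the image-processing peak-detection
# step (e.g. local maxima of the smoothed spectrum).
p0 = [1.0, 4.5, 5.0, 1.0, 0.5, 6.5, 5.0, 1.0]
popt, _ = curve_fit(two_peaks, (x.ravel(), y.ravel()), data.ravel(), p0=p0)

# Contribution of each separated peak = volume of its fitted Gaussian.
vol1 = 2 * np.pi * popt[0] * popt[3] ** 2
vol2 = 2 * np.pi * popt[4] * popt[7] ** 2
print("peak 1 share of signal:", vol1 / (vol1 + vol2))
```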
Abstract:
INTRODUCTION: Enterobacteriaceae strains are a leading cause of bloodstream infections (BSI). The aim of this study was to assess differences in clinical outcomes of patients with BSI caused by Enterobacteriaceae strains before and after the introduction of an automated microbiologic system by the microbiology laboratory. METHODS: We conducted a retrospective cohort study to evaluate the impact of the introduction of an automated microbiologic system (Phoenix(tm) automated microbiology system, Becton, Dickinson and Company (BD) - Diagnostic Systems, Sparks, MD, USA) on the outcomes of BSIs caused by Enterobacteriaceae strains. The study was undertaken at Hospital São Paulo, a 750-bed teaching hospital in São Paulo, Brazil. Patients with BSI caused by Enterobacteriaceae strains before the introduction of the automated system were compared with patients with BSI caused by the same pathogens after its introduction with regard to treatment adequacy, clinical cure/improvement, and 14- and 28-day mortality rates. RESULTS: We evaluated 90 and 106 patients in the non-automated and automated testing periods, respectively. The most prevalent species in both periods were Klebsiella spp. and Proteus spp. Clinical cure/improvement occurred in 70% and 67.9% of patients in the non-automated and automated periods, respectively (p=0.75). The 14-day mortality rates were 22.2% and 30% (p=0.94), and the 28-day mortality rates were 24.5% and 40.5% (p=0.12). There were no significant differences between the two testing periods with regard to treatment adequacy, clinical cure/improvement, or 14- and 28-day mortality rates. CONCLUSIONS: The introduction of the BD Phoenix(tm) automated microbiology system did not impact the clinical outcomes of BSIs caused by Enterobacteriaceae strains in our setting.
Abstract:
The aim of this study was to evaluate the efficacy of the Old Way/New Way methodology (Lyndon, 1989/2000) with regard to the permanent correction of a consolidated and automated technical error in the serve of a tennis athlete (18 years old, practicing for about 6 years). Additionally, the study assessed the impact of the intervention on the athlete's psychological skills. An individualized intervention was designed using strategies aimed at producing a) a detailed analysis of the error using video images; b) an increased kinaesthetic awareness; c) a reactivation of the memory of the error; d) the discrimination and generalization of the correct motor action. The athlete's psychological skills were measured with a Portuguese version of the Psychological Skills Inventory for Sports (Cruz & Viana, 1993). After the intervention, the technical error was corrected with great efficacy, and an increase in the athlete's psychological skills was verified. This study demonstrates the methodology's efficacy, which is consistent with the effects of this type of intervention in other contexts.
Abstract:
Wireless Sensor Networks (WSN) are networks of sensing and actuating devices that communicate through wireless radios. To achieve a successful implementation of a wireless device, it is necessary to take into consideration the wide variety of radios available, the large number of communication parameters (payload, duty cycle, etc.), and the environmental conditions that may affect the device's behaviour. Evaluating a specific radio for a particular application may require trial experiments, and with such a vast number of devices, communication parameters, and environmental conditions to consider, the number of trial cases generated can be surprisingly high. Manual validation of wireless communication technologies through trial experiments therefore becomes unsuitable, due to the high number of trial cases in the field. To overcome this issue, an automated test methodology was introduced, making it possible to acquire data on a device's behaviour when testing the several technologies and parameters relevant to a specific analysis. This method thus speeds up the validation and analysis of wireless radios and allows the validation to be done without specific, in-depth knowledge of the wireless devices.
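A minimal sketch of why the trial-case count explodes and how an automated harness can enumerate it: the cases are the Cartesian product of radios, communication parameters, and environments. All concrete values below are illustrative assumptions.

```python
# Sketch: enumerate trial cases for an automated radio test campaign.
# Radios, parameter values and environments are illustrative only.
from itertools import product

radios = ["radio_A", "radio_B"]
payload_bytes = [16, 64, 128]
duty_cycles = [0.01, 0.1, 1.0]
environments = ["indoor", "outdoor"]

trial_cases = list(product(radios, payload_bytes, duty_cycles, environments))
print(len(trial_cases), "trial cases")  # 2 * 3 * 3 * 2 = 36 already

for radio, payload, duty, env in trial_cases:
    # An automated harness would configure the device under test here,
    # run the trial and log packet-delivery metrics for later analysis.
    pass
```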
Abstract:
IP networks are currently the major communication infrastructure used by an increasing number of applications and heterogeneous services, including voice services. In this context, the Session Initiation Protocol (SIP) is a signaling protocol widely used for controlling multimedia communication sessions, such as voice or video calls over IP networks, thus performing vital functions in an extensive set of public and enterprise solutions. However, the dissemination of the SIP protocol also entails some challenges, such as the complexity associated with the testing/validation processes of IMS/SIP networks. As a consequence, manual IMS/SIP testing solutions are inherently costly and time-consuming, and it is crucial to develop automated approaches in this specific area. In this perspective, this article presents an experimental approach for the automated testing/validation of SIP scenarios in IMS networks. For that purpose, an automation framework is proposed that replicates the configuration of SIP equipment from the production network and submits such equipment to a battery of tests in the testing network. The proposed solution drastically reduces test and validation times when compared with traditional manual approaches, while also enhancing testing reliability and coverage. The automation framework comprises some freely available tools, conveniently integrated with other specific modules implemented within the context of this work. In order to illustrate the advantages of the proposed automated framework, a real case study taken from a PT Inovação customer is presented, comparing the time required to perform a manual SIP testing approach with the time required when using the proposed automated framework. The presented results clearly corroborate the advantages of using the framework.
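A minimal sketch of one building block such a framework could use: driving SIPp, a freely available SIP traffic generator, from a test script to run a battery of call scenarios against equipment in the testing network. The target address, call counts, and pass criterion are illustrative assumptions; the article's framework and its configuration-replication modules are not reproduced here.

```python
# Sketch: run batches of SIPp's built-in UAC scenario against a SIP
# device under test. The target and test matrix are hypothetical.
import subprocess

TARGET = "10.0.0.42:5060"  # hypothetical SIP equipment in the test network

def run_uac_scenario(calls: int, rate: int) -> bool:
    """Run SIPp's built-in UAC scenario; True if all calls succeeded."""
    result = subprocess.run(
        ["sipp", "-sn", "uac", TARGET, "-m", str(calls), "-r", str(rate)],
        capture_output=True, text=True, timeout=300,
    )
    return result.returncode == 0  # SIPp exits non-zero on failed calls

if __name__ == "__main__":
    for calls, rate in [(10, 1), (100, 10)]:
        ok = run_uac_scenario(calls, rate)
        print(f"{calls} calls at {rate} cps:", "PASS" if ok else "FAIL")
```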
Abstract:
Peer-to-Peer (P2P) is nowadays a widely used paradigm underpinning the deployment of several Internet services and applications. However, the management of P2P traffic aggregates is not an easy task for Internet Service Providers (ISPs). In this perspective, and considering the expected proliferation in the use of such applications, future networks require the development of smart mechanisms fostering an easier coexistence between P2P applications and ISP infrastructures. This paper aims to contribute to such research efforts by presenting a framework incorporating useful mechanisms that can be activated by network administrators and that is also able to operate as an automated management tool dealing with P2P traffic aggregates.
Abstract:
This paper presents an automated optimization framework able to provide network administrators with resilient routing configurations for link-state protocols, such as OSPF or IS-IS. In order to deal with the formulated NP-hard optimization problems, the devised framework is underpinned by computational intelligence optimization engines, such as Multi-objective Evolutionary Algorithms (MOEAs). To demonstrate the framework's capabilities, two illustrative Traffic Engineering methods are described, which attain routing configurations that are robust to changes in the traffic demands and keep the network stable even in the presence of link failure events. The presented illustrative results clearly corroborate the usefulness of the proposed automated framework along with the devised optimization methods.
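A minimal sketch of the kind of search such a framework automates: evolving OSPF link weights so that shortest-path routing keeps the maximum link utilization low. A real MOEA (e.g. NSGA-II) would maintain a population and several objectives; this single-objective (1+1) evolutionary loop, the toy topology, and the single-path routing (ignoring ECMP) are illustrative simplifications.

```python
# Sketch: (1+1) evolutionary search over OSPF link weights minimizing
# the maximum link utilization. Topology and demands are toy values.
import random
import networkx as nx

links = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"), ("B", "D")]
capacity = 10.0
demands = {("A", "D"): 8.0, ("A", "C"): 4.0}

def max_utilization(weights):
    G = nx.Graph()
    for (u, v), w in zip(links, weights):
        G.add_edge(u, v, weight=w)
    load = {l: 0.0 for l in links}
    for (s, t), vol in demands.items():
        path = nx.shortest_path(G, s, t, weight="weight")
        for u, v in zip(path, path[1:]):
            load[tuple(sorted((u, v)))] += vol
    return max(l / capacity for l in load.values())

weights = [random.randint(1, 20) for _ in links]
best = max_utilization(weights)
for _ in range(500):  # mutate one weight at a time, keep improvements
    cand = weights[:]
    cand[random.randrange(len(cand))] = random.randint(1, 20)
    fit = max_utilization(cand)
    if fit <= best:
        weights, best = cand, fit
print("best max utilization:", best, "weights:", weights)
```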
Abstract:
OBJECTIVE: To make individual assessments using an automated quantification methodology in order to screen for perfusion abnormalities in cerebral SPECT examinations among a sample of subjects with OCD. METHODS: Statistical parametric mapping (SPM) was used to compare 26 brain SPECT images from patients with OCD, individually, with an image bank of 32 normal subjects, using a statistical threshold of p < 0.05 (corrected for multiple comparisons at the level of individual voxels or clusters). The maps were analyzed, and regions presenting voxels that remained above this threshold were sought. RESULTS: Six of the 26 OCD images (23.07%) showed abnormalities at the cluster or voxel level according to the criteria described above. However, seven of the 32 images from the normal group (21.8%) were also flagged as cases of perfusional abnormality. CONCLUSION: The automated quantification method was not considered a useful tool for clinical practice as an analysis complementary to visual inspection.
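A minimal sketch of the underlying idea: comparing one patient's image voxel-by-voxel against a bank of normal images and flagging voxels beyond a statistical threshold. SPM additionally applies spatial normalization, smoothing, and correction for multiple comparisons, as in the study; this plain z-score version and its threshold are illustrative assumptions.

```python
# Sketch: voxel-wise screening of one image against a normative bank.
# Simulated data; the z-threshold stands in for a corrected p < 0.05.
import numpy as np

rng = np.random.default_rng(0)
normal_bank = rng.normal(100, 10, size=(32, 64, 64, 64))  # 32 normals
patient = rng.normal(100, 10, size=(64, 64, 64))
patient[30:34, 30:34, 30:34] -= 40  # simulated hypoperfused region

mean = normal_bank.mean(axis=0)
std = normal_bank.std(axis=0, ddof=1)
z = (patient - mean) / std

threshold = 4.0  # illustrative stand-in for a corrected threshold
abnormal = np.abs(z) > threshold
print("suspicious voxels:", int(abnormal.sum()))
```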
Abstract:
After the adoption of automated external defibrillators by other airlines, and with the support of the Brazilian Society of Cardiology, Varig Airlines began its onboard defibrillation program, with the initial purpose of equipping the wide-body aircraft frequently used on international flights and the airplanes used on the Rio-São Paulo route. With all flight attendants trained, automated external defibrillation devices were installed in 34 airplanes of a total fleet of 80 aircraft. The devices were installed in the baggage compartments, secured with velcro straps, together with 2 pairs of electrodes, one of which pre-connected to the device to minimize application time. Later, a portable monitor was added to the resuscitation kit for long flights. The dissemination of the fundamentals of basic life support and the correct implementation of the chain of survival and of automated external defibrillators will increase the chances of recovery of victims of cardiorespiratory arrest in aircraft.
Abstract:
OBJECTIVE - The aim of our study was to assess the performance of a wrist monitor, the Omron Model HEM-608, compared with the indirect method for blood pressure measurement. METHODS - Our study population consisted of 100 subjects, 29 normotensive and 71 hypertensive. Participants had their blood pressure checked 8 times with alternating techniques, 4 by the indirect method and 4 with the Omron wrist monitor. The validation criteria used to test this device were based on internationally recognized protocols. RESULTS - Our data showed that the Omron HEM-608 reached classification B for systolic and A for diastolic blood pressure according to one of these protocols. The mean differences between the blood pressure values obtained with each of the methods were -2.3 ± 7.9 mmHg for systolic and 0.97 ± 5.5 mmHg for diastolic blood pressure. Therefore, we considered this type of device approved according to the criteria selected. CONCLUSION - Our study leads us to conclude that this wrist monitor is not only easy to use, but also produces results very similar to those obtained by the standard indirect method.
Abstract:
OBJECTIVE: To assess the Dixtal DX2710 automated oscillometric device for blood pressure measurement according to the protocols of the British Hypertension Society (BHS) and the Association for the Advancement of Medical Instrumentation (AAMI). METHODS: Three blood pressure measurements were taken in 94 patients (53 females; ages 15 to 80 years). The measurements were taken randomly by 2 observers trained to measure blood pressure with a mercury column device connected to the automated device. The device was classified according to the BHS and AAMI protocols. RESULTS: The mean blood pressure obtained by the observers was 148±38/93±25 mmHg, and that obtained with the device was 148±37/89±26 mmHg. Considering the differences between the measurements obtained by the observers and those obtained with the automated device according to the BHS criteria, the following classification was reached: "A" for systolic pressure (69% of the differences < 5, 90% < 10, and 97% < 15 mmHg) and "B" for diastolic pressure (63% of the differences < 5, 83% < 10, and 93% < 15 mmHg). The mean and standard deviation of the differences were 0±6.27 mmHg for systolic pressure and 3.82±6.21 mmHg for diastolic pressure. CONCLUSION: The Dixtal DX2710 device was approved according to the international recommendations.
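A minimal sketch of the BHS grading logic used above: the cumulative percentages of device-observer differences within 5, 10, and 15 mmHg are compared against the protocol's grade thresholds (A: 60/85/95%, B: 50/75/90%, C: 40/65/85%, as recalled from the BHS protocol); the sample differences below are simulated, not the study's data.

```python
# Sketch: compute a BHS grade from device-observer differences.
# Thresholds follow the BHS protocol; input data is simulated.
import numpy as np

GRADES = [("A", (60, 85, 95)), ("B", (50, 75, 90)), ("C", (40, 65, 85))]

def bhs_grade(differences_mmhg):
    d = np.abs(np.asarray(differences_mmhg))
    pct = [100.0 * np.mean(d <= lim) for lim in (5, 10, 15)]
    for grade, mins in GRADES:
        if all(p >= m for p, m in zip(pct, mins)):
            return grade, pct
    return "D", pct

# Simulated diastolic-like differences (mean ~3.8, sd ~6.2 mmHg).
diffs = np.random.default_rng(1).normal(3.82, 6.21, size=282)
print(bhs_grade(diffs))
```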
Abstract:
Identification and characterization of the problem. One of the most important problems associated with the construction of software is its correctness. In order to provide guarantees of correct software behavior, a variety of development techniques with solid mathematical and logical foundations, known as formal methods, have emerged. Due to their nature, the application of formal methods requires considerable experience and knowledge, above all concerning mathematics and logic, which makes their application costly in practice. This has restricted their main application to critical systems, that is, systems whose malfunction can cause damage of great magnitude, even though the benefits these techniques provide are relevant to all kinds of software. Carrying the benefits of formal methods over to software development contexts broader than critical systems would have a high impact on productivity in those contexts. Hypothesis. Having automatic analysis tools is an element of great importance. Examples of this are several powerful formal-methods-based analysis tools that target source code directly. In the vast majority of these tools, the gap between the notions developers are accustomed to and those needed to apply these formal analysis tools remains too wide. Many tools use assertion languages that fall outside the usual knowledge and habits of developers. Moreover, in many cases the output produced by the analysis tool requires some command of the underlying formal method. This problem can be alleviated by producing suitable tools. Another problem intrinsic to automatic analysis techniques is how they behave as the size and complexity of the elements under analysis grow (scalability). This limitation is widely known and is considered critical for the practical applicability of formal analysis methods. One way to attack this problem is to exploit information and characteristics of specific application domains. Objectives. This project aims at building formal analysis tools that contribute to the quality, in terms of functional correctness, of specifications, models, or code in the context of software development. More precisely, we seek to identify specific settings in which certain automatic analysis techniques, such as analysis based on SMT or SAT solving, or model checking, can be brought to scalability levels above those known for these techniques in general settings. We will attempt to implement the adaptations of the chosen techniques in tools that allow their use by developers familiar with the application context, but not necessarily knowledgeable about the underlying methods or techniques. Materials and methods. The materials to be employed will be bibliography relevant to the area and computing equipment. Methods: the methods of discrete mathematics, logic, and software engineering will be employed. Expected results. One of the expected results of the project is the identification of specific application domains for formal analysis methods. The project is also expected to produce analysis tools whose usability is adequate for developers without specific training in the formal methods employed. Importance of the project. The main impact of this project will be its contribution to the practical application of formal analysis techniques at different stages of software development, with the aim of increasing software quality and reliability.
A crucial factor for software quality is correctness. Traditionally, formal approaches to software development concentrate on functional correctness and tackle this problem essentially by building on well-defined notations founded on solid mathematical grounds. This makes formal methods better suited for analysis, due to their precise semantics, but they are usually more complex and require familiarity and experience with the manipulation of mathematical definitions. Consequently, their acceptance by software engineers is rather limited, and formal methods applications have been confined to critical systems, even though the advantages that formal methods provide apply to any kind of software system. It is accepted that appropriate software tool support for formal analysis is essential if one seeks to support software development based on formal methods. Indeed, some of the relatively recent successes of formal methods are accompanied by good-quality tools that automate powerful analysis mechanisms and are even integrated into widely used development environments. Still, most of these tools concentrate on code analysis, and in many cases they are far from being simple enough to be employed by software engineers without experience in formal methods. Another important problem for the adoption of tool support for formal methods is scalability. Automated software analysis is intrinsically complex, and thus techniques do not scale well in the general case. In this project, we will attempt to identify particular modelling, design, specification, or coding activities in software development processes where automated formal analysis techniques can be applied. By focusing on very specific application domains, we expect to find characteristics that can be exploited to increase the scalability of the corresponding analyses, compared with the general case.
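A minimal sketch of the SMT-solving-based analysis the project mentions, using the Z3 solver's Python API (pip install z3-solver): a program fragment and a developer-level assertion are encoded as constraints, and the solver searches for a counterexample. The fragment and assertion are illustrative assumptions, not examples from the project.

```python
# Sketch: SMT-based assertion checking with Z3. If the negated
# assertion is satisfiable together with the program semantics, the
# model returned by the solver is a concrete counterexample.
from z3 import Int, Solver, Not, And, sat

x, y = Int("x"), Int("y")

program = (y == 2 * x)     # program fragment modelled as a constraint
assertion = (y >= x)       # developer-level assertion to verify

s = Solver()
s.add(And(program, Not(assertion)))  # look for a violating execution

if s.check() == sat:
    print("assertion can fail, counterexample:", s.model())  # e.g. x = -1
else:
    print("assertion holds for all integer inputs")
```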
Abstract:
Seismic analysis, horizon matching, fault tracking, marked point process, stochastic annealing