14 results for Science Ability testing
at Universidad Politécnica de Madrid
Abstract:
Objectives The study sought to evaluate the ability of cardiac magnetic resonance (CMR) to monitor acute and long-term changes in pulmonary vascular resistance (PVR) noninvasively. Background PVR monitoring during the follow-up of patients with pulmonary hypertension (PH) and the response to vasodilator testing require invasive right heart catheterization. Methods An experimental study in pigs was designed to evaluate the ability of CMR to monitor: 1) an acute increase in PVR generated by acute pulmonary embolization (n = 10); 2) serial changes in PVR in chronic PH (n = 22); and 3) changes in PVR during vasodilator testing in chronic PH (n = 10). CMR studies were performed with simultaneous hemodynamic assessment using a CMR-compatible Swan-Ganz catheter. Average flow velocity in the main pulmonary artery (PA) was quantified with phase contrast imaging. Pearson correlation and mixed model analysis were used to correlate changes in PVR with changes in CMR-quantified PA velocity. Additionally, PVR was estimated from CMR data (PA velocity and right ventricular ejection fraction) using a previously validated formula. Results Changes in PA velocity strongly and inversely correlated with acute increases in PVR induced by pulmonary embolization (r = –0.92), serial PVR fluctuations in chronic PH (r = –0.89), and acute reductions during vasodilator testing (r = –0.89, p ≤ 0.01 for all). CMR-estimated PVR showed adequate agreement with invasive PVR (mean bias –1.1 Wood units; 95% confidence interval: –5.9 to 3.7), and changes in both indices correlated strongly (r = 0.86, p < 0.01). Conclusions CMR allows for noninvasive monitoring of acute and chronic changes in PVR in PH. This capability may be valuable in the evaluation and follow-up of patients with PH.
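The abstract cites a previously validated formula for estimating PVR from CMR data but does not reproduce it. As a hedged sketch (function names are illustrative, not from the paper), the standard invasive definition of PVR in Wood units and the Bland-Altman bias and 95% limits-of-agreement computation used to compare two measurement methods can be written as:

```python
import statistics

def pvr_wood_units(mpap_mmhg, pawp_mmhg, cardiac_output_l_min):
    # Invasive PVR: transpulmonary pressure gradient divided by cardiac output.
    # (mean PA pressure - wedge pressure, mmHg) / (cardiac output, L/min)
    return (mpap_mmhg - pawp_mmhg) / cardiac_output_l_min

def bland_altman(estimates, references):
    # Mean bias and 95% limits of agreement between two measurement methods,
    # as used to compare CMR-estimated PVR against invasive PVR.
    diffs = [e - r for e, r in zip(estimates, references)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

print(pvr_wood_units(25, 10, 5))  # → 3.0 Wood units
```

The reported "mean bias –1.1 Wood units (–5.9 to 3.7)" is exactly the output shape of such an agreement analysis.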
Abstract:
With the ever-growing adoption of smartphones and tablets, Android is becoming more popular every day. With more than one billion active users to date, Android is the leading technology in the smartphone arena. In addition, Android also runs on Android TV, Android smart watches and cars. Therefore, in recent years, Android applications have become one of the major development sectors in the software industry. As of mid-2013, the number of published applications on Google Play had exceeded one million, and the cumulative number of downloads was more than 50 billion. A 2013 survey also revealed that 71% of mobile application developers work on developing Android applications. Given this volume of Android applications, it is quite evident that people rely on these applications on a daily basis, from simple tasks like keeping track of the weather to rather complex tasks like managing one's bank accounts. Hence, like every other kind of code, Android code also needs to be verified in order to work properly and achieve a certain confidence level. Because of the sheer number of applications, it becomes very hard to test Android applications manually, especially when they have to be verified across various versions of the OS and various device configurations, such as different screen sizes and different hardware availability. Hence, there has recently been a lot of work in the computer science community on developing testing methods for Android applications. The Android model attracts researchers because of its open-source nature: the whole research process is more streamlined when the code for both the application and the platform is readily available to analyze. Consequently, there has been a great deal of research on testing and static analysis of Android applications, much of it focused on input test generation for Android applications.
Hence, several testing tools are now available that focus on automatic generation of test cases for Android applications. These tools differ from one another in the strategies and heuristics they use to generate test cases. However, there is still very little work comparing these testing tools and the strategies they use. Recently, some research was carried out in this regard that compared the performance of various available tools with respect to code coverage, fault detection, ability to work on multiple platforms, and ease of use. This was done by running the tools on a total of 60 real-world Android applications. The results showed that, although effective, the strategies used by these tools also face limitations and hence have room for improvement. The purpose of this thesis is to extend this research in a more specific, attribute-oriented way. Attributes refer to the tasks that can be completed using the Android platform, ranging from a basic system call for receiving an SMS to more complex tasks like sending the user from the current application to another one. The idea is to develop a benchmark for Android testing tools based on their performance on these attributes, which allows the tools to be compared attribute by attribute. For example, if an application plays an audio file, will the testing tool be able to generate a test input that triggers the playback of that audio file? By using multiple applications exercising different attributes, it can be seen which testing tool is more useful for which kinds of attributes. In this thesis, 9 attributes covering the basic nature of tasks were targeted for the assessment of three testing tools. Later, this can be extended to many more attributes to compare even more testing tools.
The aim of this work is to show that this approach is effective and can be used on a much larger scale. One of the flagship features of this work, which also differentiates it from the previous work, is that the applications used were all made specifically for this research. The reason is to analyze just the specific attribute each application focuses on, in isolation, without allowing the tool to get bottlenecked by something trivial that is not the attribute under test. This means 9 applications, each focused on one specific attribute. The main contributions of this thesis are: • A summary of the three existing testing tools and their respective techniques for automatic test input generation for Android applications. • A detailed study of the usage of these testing tools on the 9 applications specially designed and developed for this study. • An analysis of the obtained results and a comparison of the performance of the selected tools.
Abstract:
The number of online real-time streaming services deployed over network topologies such as P2P or centralized ones has increased remarkably in recent years. This has revealed the lack of networks that are well prepared to respond to this kind of traffic. A hybrid distribution network can be an efficient solution for real-time streaming services. This paper contains the experimental results of streaming distribution in a hybrid architecture consisting of mixed connections among P2P and Cloud nodes that can interoperate. We chose to represent the P2P nodes as PlanetLab machines around the world and the cloud nodes using a Cloud provider's network. First, we present an experimental validation of the Cloud infrastructure's ability to distribute streaming sessions with respect to some key streaming QoS parameters: jitter, throughput and packet losses. Next, we show the results obtained from different test scenarios in which a hybrid distribution network is used. The scenarios measure the improvement of the multimedia QoS parameters as nodes in the streaming distribution network (located in different continents) are gradually moved into the Cloud provider infrastructure. The overall conclusion is that, unlike in traditional P2P systems and CDNs, the QoS of a streaming service can be efficiently improved by deploying a hybrid streaming architecture. This enhancement can be obtained by strategically placing certain distribution network nodes in the Cloud provider infrastructure, taking advantage of the reduced packet loss and low latency among its datacenters.
Abstract:
Software testing is a key aspect of software reliability and quality assurance in a context where software development constantly has to overcome mammoth challenges in a continuously changing environment. One characteristic of software testing is that it has a large intellectual capital component and can thus benefit from the experience gained in past projects. Software testing can therefore potentially benefit from solutions provided by the knowledge management discipline. There are in fact a number of proposals concerning effective knowledge management for several software engineering processes. Objective: We defend the use of a lessons learned system for software testing. The reason is that such a system is an effective knowledge management resource enabling testers and managers to take advantage of the experience locked away in the brains of the testers. To do this, the experience has to be gathered, disseminated and reused. Method: After analyzing the proposals for managing software testing experience, significant weaknesses were detected in the current systems of this type. The architectural model proposed here for lessons learned systems is designed to avoid these weaknesses. This model (i) defines the structure of the software testing lessons learned; (ii) sets up procedures for lessons learned management; and (iii) supports the design of software tools to manage the lessons learned. Results: A different approach, based on the management of the lessons learned that software testing engineers gather from everyday experience, with two basic goals: usefulness and applicability. Conclusion: The architectural model proposed here lays the groundwork for overcoming the obstacles to sharing and reusing experience gained in software testing and test management. As such, it provides guidance for developing software testing lessons learned systems.
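The model's three elements (lesson structure, management procedures, supporting tools) can be pictured with a minimal data-model sketch. All field and class names below are illustrative assumptions, not the structure actually defined by the proposed architectural model:

```python
from dataclasses import dataclass, field

@dataclass
class Lesson:
    # One structured software-testing lesson learned (fields are hypothetical).
    title: str
    context: str           # project or test phase where the lesson arose
    recommendation: str    # what future testers should do differently
    tags: list = field(default_factory=list)

class LessonRepository:
    # Gather, disseminate and reuse: store lessons and retrieve them by tag.
    def __init__(self):
        self._lessons = []

    def add(self, lesson):
        self._lessons.append(lesson)

    def find_by_tag(self, tag):
        return [l for l in self._lessons if tag in l.tags]
```

A real system following the model would add the management procedures (review, approval, retirement of lessons) on top of such a store.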
Abstract:
Due to the particular characteristics of the fusion products, i.e. very short pulses (less than a few μs long for ions when arriving at the walls; less than 1 ns long for X-rays), very high fluences (~10^13 particles/cm² for both ions and X-ray photons) and broad particle energy spectra (up to 10 MeV ions and 100 keV photons), the laser fusion community lacks facilities to accurately test plasma-facing materials under those conditions. In the present work, the ability of ultraintense lasers to create short pulses of energetic particles at high fluences is addressed as a solution to reproduce those ion and X-ray bursts. Based on those parameters, a comparison between fusion ions and laser-driven ion beams is presented and discussed, describing a possible experimental set-up to generate the appropriate ion pulses with lasers. At the same time, the possibility of generating X-ray or neutron beams that simulate those of laser fusion environments is also indicated and assessed at current laser intensities. It is concluded that ultraintense lasers should play a relevant role in the validation of materials for laser fusion facilities.
Abstract:
The ability of ultraintense lasers to create short pulses of energetic particles at high fluences is addressed as a solution to reproduce ion and X-ray ICF bursts for the characterization and validation of plasma-facing components. The possibility of using a laser-driven neutron source for material testing will also be discussed.
Abstract:
Based on laser beam intensities above 10^9 W/cm², with pulse energies of several joules and durations of nanoseconds, Laser Shock Processing (LSP) is capable of inducing a surface compressive residual stress field. The paper presents experimental results showing the ability of LSP to improve the mechanical strength and cracking resistance of AA2024-T351 friction stir welded (FSW) joints. After introducing the FSW and LSP procedures, the results of microstructural and micro-hardness analyses are discussed. Video Image Correlation was used to measure the displacement and strain fields produced during tensile testing of flat specimens; the local and overall tensile behavior of native vs. LSP-treated FSW joints was analyzed. Further, results of slow strain rate tensile testing of the FSW joints, native and LSP-treated, performed in 3.5% NaCl solution are presented. The ability of LSP to improve the structural behavior of the FSW joints is underscored.
Abstract:
The aim of this work is to test the present status of the Evaluated Nuclear Decay and Fission Yield Data Libraries in predicting decay heat, delayed neutron emission rate, average neutron energy and delayed neutron spectra after a neutron fission pulse. Calculations are performed with JEFF-3.1.1 and ENDF/B-VII.1 and compared with experimental values. An uncertainty propagation assessment of the current nuclear data uncertainties is performed.
Abstract:
Thin film photovoltaic (TF) modules have gained importance in the photovoltaic (PV) market, and new PV plants increasingly use TF technologies. In order to have a reliable sample of a PV module population, a huge number of modules must be measured. A wide variety of materials is used in TF technology: some modules are made of amorphous or microcrystalline silicon, others of CIS or CdTe. Not all these materials respond the same under standard test conditions (STC) of power measurement. Power ratings of the modules may vary depending on both the extent and the history of sunlight exposure. Thus, a testing method adapted to each TF technology is necessary, and it must guarantee repeatability of measurements of generated power. This paper shows the responses of different commercial TF PV modules to sunlight exposure. Several test procedures were performed in order to find the best methodology to obtain measurements of TF PV modules at STC in the easiest way. A methodology for indoor measurements adapted to these technologies is described.
Abstract:
The definition of technical specifications and the corresponding laboratory procedures are necessary steps to assure the quality of devices prior to their installation in Solar Home Systems (SHS). To clarify and unify criteria, a European project supported the development of the Universal Technical Standard for Solar Home Systems (UTSfSHS). Its principle was to generate simple and affordable technical requirements that facilitate the implementation of tests with basic laboratory tools, even in the countries running SHS electrification programs. These requirements cover the main aspects of this type of installation, and the lighting chapter was developed based on the technologies most used at the time: fluorescent tubes and CFLs. However, with the consolidation of new LED solid-state lighting devices, particular attention is being given to this matter and new procedures are required. In this work we develop a complete set of technical specifications and test procedures designed within the frame of the UTSfSHS, based on an intense review of the scientific and technical publications related to LED lighting and their practical application. They apply to lamp reliability, performance and safety under normal, extreme and abnormal operating conditions, as a simple but complete quality assessment tool for any LED bulb.
Abstract:
This work aims to contribute to a further understanding of the fundamentals of crystallographic slip and grain boundary sliding in the γ-TiAl Ti–45Al–2Nb–2Mn (at%)–0.8 vol% TiB2 intermetallic alloy, by means of in situ high-temperature tensile testing combined with electron backscatter diffraction (EBSD). Several microstructures, containing different fractions and sizes of lamellar colonies and equiaxed γ-grains, were fabricated by either centrifugal casting or powder metallurgy, followed by heat treatment at 1300 °C and furnace cooling. In situ tensile and tensile-creep experiments were performed in a scanning electron microscope (SEM) at temperatures ranging from 580 °C to 700 °C. EBSD was carried out in selected regions before and after straining. Our results suggest that, during constant strain rate tests, true-twin γ/γ interfaces are the weakest barriers to dislocations and, thus, that the relevant length scale might be governed by the distance between non-true-twin boundaries. Under creep conditions, both grain/colony boundary sliding (G/CBS) and crystallographic slip are observed to contribute to deformation. The incidence of boundary sliding is particularly high in γ grains of duplex microstructures. The slip activity during creep deformation in different microstructures was evaluated by trace analysis. Special emphasis was placed on distinguishing the compliance of different slip events with the Schmid law with respect to the applied stress.
Abstract:
This research gathers a cluster of interests around a very specific way of generating architecture: the production of objects without an a priori underlying form. The knowledge presented builds on currents of recent thought that promote feeding the creative source of architecture with other fields of knowledge. Sensible animist knowledge and objective scientific knowledge have been correlative in history but have rarely been synchronous. This research is also an attempt to combine both types of knowledge, regaining the inertia already sensed in the early twentieth century. It is therefore an essay on annulling the opposition between these two worlds in order to move towards a complementarity of the two in a single shared vision. The ultimate goal of this research is the development of a critical system of analysis for architectural objects that allows a differentiation between those that respond to problems completely and sincerely and those that hide, under an agreed surface, the lack of a method for resolving the complexity of the creative present. The research observes three distinct groups of knowledge, covered in their respective chapters. The first chapter deals with the Creative Impulse. It defines the need to create a framework for the creative individual who, regardless of the social forces of the moment, senses that there is something beyond that remains unresolved. We call "rebel creator" a type of figure recognizable throughout history: one able to recognize the changes operating in his present and to use them to discover the new and come closer to the creative origin. 
At present, this type of figure is the one who intuits, or has long intuited, the existence of a growing complexity in contemporary thought that cannot be ignored. The second chapter develops some properties of systems of creative action. It presents a framework of scientific knowledge very specific to our time that architecture, so far, has not absorbed or reflected directly in its way of creating. These are issues of almost mundane presence in society that nonetheless resist inclusion in creative processes as part of consciousness. Most of them speak of precision, invisible orders, and properties of matter or energy, treated objectively and apolitically. The final goal is the approach and incorporation of these concepts and properties into our sensible world, unifying them inseparably under a single point of view. The last chapter deals with Complexity and its capacity for reduction to the essential. Here, by way of conclusions, several concepts are introduced for the development of a critical system for the architecture of our time. Among them is Essential Complexity, defined as the complexity that inevitably arises when architecture responds to the growing problems and demands it faces in the present. The thesis maintains the importance of reporting the impossibility, in the current state of things, of responding sincerely with simplistic solutions and, therefore, the need for solutions of a complex character. In this sense, the concept of Underlying Form is also defined as a critical tool to evaluate the response of each architecture and to provide a critical system and vision of what a consistent object is in the face of the situation it confronts. 
This underlying form is defined as a way of understanding jointly and synchronously what we perceive sensibly, inseparable from the hidden creative, technological, material and energetic forces that sustain the definition and understanding of any built object.
Abstract:
This work is an outreach approach to a ubiquitous recent problem in secondary-school education: how to counter the decreasing interest in natural sciences shown by students under the 'pressure' of convenient resources in digital devices and applications. The approach rests on two features. First, empowering teenage students to understand the regular natural events around them, as very few of the educated people they meet can do. Second, an understanding that rests on the personal capability to test and verify experimental results from the oldest science, astronomy, with simple instruments as used from antiquity down to the Renaissance (a capability restricted to just solar and lunar motions). Because lengths in astronomy and daily life are so disparate, astronomy basically involved observing and registering values of angles (along with times); the measurements are of two types: angles on the ground, and angles in space measured from the ground. First, the gnomon, a simple vertical stick introduced in Babylonia and Egypt, and then in Greece, is used to understand solar motion. The gnomon shadow turns around during any given day, varying in length, and thus in the angle between the solar ray and the vertical, as it turns, passing through a minimum (at noon, in the meridian direction) while sweeping an angular range from sunrise to sunset. Further, the minimum shadow length varies through the year: it is shortest, with the sun closest to the vertical, at the summer solstice, and longest at the winter solstice six months later. The extreme directions at sunset and sunrise correspond to the solstices, the swept angular range being greatest in summer, over 180 degrees, and smallest in winter, with fewer daytime hours; in between, the spring and fall equinoxes occur, marked by collinear shadow directions at sunrise and sunset. 
The gnomon allows students to determine, in addition to the latitude (about 40.4° North at Madrid, say), the inclination of the earth's equator to the plane of its orbit around the sun (the ecliptic), this fundamental quantity being given by half the difference between the solar distances to the vertical at the winter and summer solstices, with a value of about 23.5°. The day and year periods differing greatly, by about 2.5 orders of magnitude (1 day against 365 days), helps students to correctly visualize and interpret the experimental measurements. Since the gnomon also serves to observe the moon's shadow at night, students can likewise determine the inclination of the lunar orbital plane, about 5 degrees away from the ecliptic, thus explaining why eclipses are infrequent. Independently, the earth taking longer between the spring and fall equinoxes than from fall to spring (the solar anomaly), again verified by the students, was explained in ancient Greek science, which posited orbits universally as circles or combinations of circles, by introducing the eccentric circle: the earth is placed some distance away from the orbital centre when considering the relative motion of the sun, which would then be closer to the earth in winter. In a sense, this can be seen as a hint of, and an approximation to, the elliptic orbit proposed by Kepler many centuries later.
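The two determinations described above reduce to simple arithmetic on the noon shadow: latitude is half the sum, and the obliquity of the ecliptic half the difference, of the solar zenith distances at the two solstices. A small sketch of that computation (the shadow lengths below are illustrative values consistent with Madrid's latitude, not measured data):

```python
import math

def zenith_angle(gnomon_height_m, shadow_length_m):
    # Angle between the solar ray and the vertical, from the noon shadow:
    # tan(z) = shadow / height.
    return math.degrees(math.atan(shadow_length_m / gnomon_height_m))

def latitude_and_obliquity(z_winter_deg, z_summer_deg):
    # At noon on the solstices (northern mid-latitudes):
    #   z_winter = latitude + obliquity,  z_summer = latitude - obliquity,
    # so latitude = (z_w + z_s)/2 and obliquity = (z_w - z_s)/2.
    lat = (z_winter_deg + z_summer_deg) / 2.0
    eps = (z_winter_deg - z_summer_deg) / 2.0
    return lat, eps

# Hypothetical noon shadows of a 1 m gnomon at Madrid:
z_summer = zenith_angle(1.0, math.tan(math.radians(16.9)))  # summer solstice
z_winter = zenith_angle(1.0, math.tan(math.radians(63.9)))  # winter solstice
lat, eps = latitude_and_obliquity(z_winter, z_summer)
print(round(lat, 1), round(eps, 1))  # → 40.4 23.5
```

The same half-sum/half-difference reasoning is what lets a single instrument yield both the local latitude and a property of the earth's orbit.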
Abstract:
Autoaggregation in bacteria is the phenomenon of aggregation between cells of the same strain, whereas coaggregation is aggregation occurring among different species. The aggregation ability of probiotic bacteria is related to their adhesion ability, which is a prerequisite for the colonization and protection of the gastrointestinal tract in all animal species; in turn, the coaggregation ability of probiotic bacteria offers the possibility of close interaction with pathogenic bacteria.