962 results for "Testing of embedded cores"
Abstract:
The actual performance of a PV system can differ from its expected behaviour. This is the main reason why the performance of PV systems should be monitored, analyzed and, if needed, improved. Some of the current testing procedures relating to the electrical behaviour of PV systems are appropriate for detecting electrical performance losses, but they are not well suited to revealing hidden defects in the modules of PV plants and BIPV installations, defects which can lead to future losses. This paper reports on the tests and procedures used to evaluate the performance of PV systems, and especially on a novel procedure for quick on-site measurement and recognition of defects caused by overheating in PV modules located in operating PV installations.
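The abstract does not name a specific monitoring metric, but a common starting point for detecting electrical performance losses is the performance ratio defined in IEC 61724. Below is a minimal sketch of that calculation; the function name and the sample values are illustrative, not taken from the paper.

```python
def performance_ratio(e_ac_kwh, g_poa_kwh_m2, p_rated_kw, g_stc_kw_m2=1.0):
    """Performance ratio (PR) per IEC 61724: measured AC yield divided by
    the yield expected from in-plane irradiation at the array's STC rating."""
    expected_kwh = p_rated_kw * (g_poa_kwh_m2 / g_stc_kw_m2)
    return e_ac_kwh / expected_kwh

# Hypothetical monthly values: 1150 kWh produced, 160 kWh/m2 in-plane
# irradiation, 9.0 kW rated array -> PR ~ 0.80, a typical healthy value.
print(f"PR = {performance_ratio(1150, 160, 9.0):.2f}")
```

A sustained drop in PR flags an electrical loss, but, as the abstract notes, it cannot by itself localize hidden module defects such as overheating.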
Abstract:
With the ever-growing adoption of smartphones and tablets, Android is becoming more popular every day. With more than one billion active users to date, Android is the leading technology in the smartphone arena; it also runs on Android TV, Android smartwatches, and cars. Consequently, in recent years Android applications have become one of the major development sectors in the software industry. As of mid-2013, the number of published applications on Google Play exceeded one million and the cumulative number of downloads was more than 50 billion. A 2013 survey also revealed that 71% of mobile application developers work on Android applications. Given this scale, people evidently rely on these applications daily, from simple tasks such as keeping track of the weather to complex tasks such as managing one's bank accounts. Hence, like every other kind of code, Android code needs to be verified in order to work properly and reach a certain level of confidence. Because of the enormous number of applications, manually testing them is very hard, especially when they must be verified across OS versions and device configurations such as different screen sizes and hardware availability. The computer science community has therefore recently produced a great deal of work on testing methods for Android applications. The Android model attracts researchers because of its open-source nature: research is more streamlined when the code for both the application and the platform is readily available to analyze. Much of the resulting research on testing and static analysis of Android applications has focused on test-input generation, and several testing tools are now available that automatically generate test cases for Android applications. These tools differ from one another in the strategies and heuristics they use to generate test cases, but there is still very little work comparing the tools and their strategies. Recent research in this regard compared the performance of various available tools with respect to code coverage, fault detection, ability to work on multiple platforms, and ease of use, by running the tools on 60 real-world Android applications. The results showed that, although effective, the strategies used by the tools face limitations and have room for improvement. The purpose of this thesis is to extend that research in a more specific, attribute-oriented way. Attributes refer to tasks that can be completed using the Android platform, ranging from a basic system call for receiving an SMS to more complex tasks such as sending the user from the current application to another. The idea is to develop a benchmark for Android testing tools based on performance with respect to these attributes, which allows the tools to be compared attribute by attribute. For example, if an application plays an audio file, will the testing tool generate a test input that triggers playback of that audio file?
By using multiple applications exercising different attributes, one can see which testing tool is more useful for which kinds of attributes. In this thesis, 9 attributes covering basic kinds of tasks were targeted for the assessment of three testing tools; the same approach can later be extended to more attributes and more tools. The aim of this work is to show that the approach is effective and can be used on a much larger scale. One of the flagship features of this work, which also differentiates it from previous work, is that the applications used were all purpose-built for this research. This makes it possible to analyze each specific attribute in isolation, without the tool getting bottlenecked by something trivial that is not the attribute under test. This means 9 applications, each focused on one specific attribute. The main contributions of this thesis are:
• A summary of the three existing testing tools and their respective techniques for automatic test-input generation for Android applications.
• A detailed study of the usage of these testing tools on the 9 applications specially designed and developed for this study.
• An analysis of the results of the study, and a comparison of the performance of the selected tools.
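As a concrete illustration of how one attribute check might be automated (not taken from the thesis), the sketch below drives Android's stock Monkey fuzzer against a single-attribute app and scans logcat for a marker the app is assumed to emit when the attribute (here, audio playback) is exercised. The package name and log tag are hypothetical.

```python
import subprocess

PACKAGE = "com.example.audioattr"   # hypothetical single-attribute app
MARKER_TAG = "ATTR_AUDIO"           # hypothetical tag the app logs on playback

def run_monkey(events: int = 500) -> None:
    # Clear the device log, then fire pseudo-random UI events at the app.
    subprocess.run(["adb", "logcat", "-c"], check=True)
    subprocess.run(
        ["adb", "shell", "monkey", "-p", PACKAGE, "-v", str(events)],
        check=True,
    )

def attribute_triggered() -> bool:
    # Dump the log filtered to the marker tag; any hit means the generated
    # inputs reached the audio-playback attribute.
    out = subprocess.run(
        ["adb", "logcat", "-d", "-s", f"{MARKER_TAG}:I"],
        capture_output=True, text=True, check=True,
    ).stdout
    return MARKER_TAG in out

if __name__ == "__main__":
    run_monkey()
    print("audio attribute exercised:", attribute_triggered())
```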
Abstract:
All crop models, whether site-specific or global-gridded and regardless of crop, simulate daily crop transpiration and soil evaporation during the crop life cycle, resulting in seasonal crop water use. Modelers use several methods for predicting daily potential evapotranspiration (ET), including FAO-56 Penman-Monteith, Priestley-Taylor, Hargreaves, full energy balance, and transpiration water efficiency, and use extinction equations to partition energy to soil evaporation or transpiration depending on leaf area index. Most models simulate the soil water balance and soil-root water supply for transpiration, limit transpiration if water uptake is insufficient, and thereafter reduce dry matter production. Comparisons among multiple crop and global-gridded models in the Agricultural Model Intercomparison and Improvement Project (AgMIP) show surprisingly large differences in simulated ET and crop water use for the same climatic conditions. Model intercomparisons alone are not enough to determine which approaches are correct; there is an urgent need to test these models against field-observed data on ET and crop water use. It is important to test the various ET modules/equations in a model platform where other aspects, such as soil water balance and rooting, are held constant, to avoid compensation by other parts of the models. The CSM-CROPGRO model in DSSAT already has ET equations for Priestley-Taylor, Penman-FAO-24, Penman-Monteith-FAO-56, and an hourly energy balance approach. In this work, we added transpiration-efficiency modules to the DSSAT and AgMaize models and tested the various ET equations against available data on ET, soil water balance, and season-long crop water use for soybean, faba bean, maize, and other crops where runoff and deep percolation were known or zero. The different ET modules produced considerable differences in predicted ET, growth, and yield.
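For reference, here is a minimal sketch of one of the ET options named above, the Priestley-Taylor equation, using the standard FAO-56 formulation for the saturation vapour-pressure slope; the constants and sample inputs are illustrative, not taken from DSSAT.

```python
import math

def priestley_taylor_et(t_mean_c: float, rn: float, g: float = 0.0,
                        alpha: float = 1.26) -> float:
    """Daily potential ET (mm/day) from the Priestley-Taylor equation.

    t_mean_c : mean air temperature (deg C)
    rn, g    : net radiation and soil heat flux (MJ m-2 day-1)
    """
    # Slope of the saturation vapour-pressure curve (kPa/degC), FAO-56 Eq. 13.
    es = 0.6108 * math.exp(17.27 * t_mean_c / (t_mean_c + 237.3))
    delta = 4098.0 * es / (t_mean_c + 237.3) ** 2
    gamma = 0.0665   # psychrometric constant (kPa/degC) at ~101.3 kPa
    lam = 2.45       # latent heat of vaporization (MJ/kg)
    return alpha * (delta / (delta + gamma)) * (rn - g) / lam

# Example: 25 degC, Rn = 15 MJ m-2 day-1 -> about 5.7 mm/day.
print(f"{priestley_taylor_et(25.0, 15.0):.2f} mm/day")
```

Holding the rest of the model constant while swapping such a module for, say, Penman-Monteith is exactly the kind of controlled comparison the abstract calls for.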
Abstract:
PAMELA (Phased Array Monitoring for Enhanced Life Assessment) SHM™ System is an integrated embedded system based on ultrasonic guided waves, consisting of several electronic devices and one system-manager controller. The data collected by all PAMELA devices in the system must be transmitted to the controller, which is responsible for the advanced signal processing that produces SHM maps. PAMELA devices are built around a Virtex 5 FPGA with a PowerPC 440 running an embedded Linux distribution. In addition to performing tests and transmitting the collected data to the controller, the devices can therefore perform local data processing or pre-processing (reduction, normalization, pattern recognition, feature extraction, etc.). Local data processing decreases data traffic over the network and reduces the CPU load of the external computer. PAMELA devices can even run autonomously, performing scheduled tests and communicating with the controller only when structural damage is detected or when programmed to do so. Each PAMELA device integrates a software management application (SMA) that allows the developer to download his or her own algorithm code and add new data processing algorithms to the device. The SMA is developed in a virtual machine with an Ubuntu Linux distribution that includes all the software tools needed for the entire development cycle; the Eclipse IDE (Integrated Development Environment) is used to develop the SMA project and to write the code of each data processing algorithm. This paper presents the software architecture and describes the steps necessary to add new data processing algorithms to the SMA in order to increase the processing capabilities of PAMELA devices. An example of basic damage index estimation using the delay-and-sum algorithm is provided.
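The example algorithm named above, delay-and-sum, has a standard textbook form, sketched below (this is not PAMELA's implementation): for each image pixel, each transmitter-receiver pair's residual signal is sampled at the guided-wave time of flight through that pixel, and the contributions are summed into a damage-index map. Array geometry, wave speed, and signals are placeholders supplied by the caller.

```python
import numpy as np

def delay_and_sum(signals, tx_pos, rx_pos, grid, velocity, fs):
    """Damage-index map via delay-and-sum imaging.

    signals : (n_pairs, n_samples) residual (damaged - baseline) envelopes
    tx_pos, rx_pos : (n_pairs, 2) transducer coordinates per pair (m)
    grid    : (n_pixels, 2) pixel coordinates (m)
    velocity: guided-wave group velocity (m/s); fs: sampling rate (Hz)
    """
    image = np.zeros(len(grid))
    for sig, tx, rx in zip(signals, tx_pos, rx_pos):
        # Time of flight: transmitter -> pixel -> receiver.
        tof = (np.linalg.norm(grid - tx, axis=1) +
               np.linalg.norm(grid - rx, axis=1)) / velocity
        idx = np.clip((tof * fs).astype(int), 0, len(sig) - 1)
        image += sig[idx]   # accumulate each pair's contribution per pixel
    return image / len(signals)
```

Because each pair contributes one array lookup per pixel, the computation maps naturally onto the kind of local pre-processing the PAMELA devices are described as performing.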
Abstract:
Mapping aboveground carbon density in tropical forests can support CO2 emission monitoring and provide benefits for national resource management. Although LiDAR technology has been shown to be useful for assessing carbon density patterns, the accuracy and generality of calibrations between LiDAR-based aboveground carbon density (ACD) predictions and field inventory estimates must be improved in order to advance tropical forest carbon mapping. Here we present results from a general ACD estimation model applied with small-footprint LiDAR data and field-based estimates from a 50-ha forest plot in Ecuador's Yasuní National Park. Subplots used for calibration and validation of the general LiDAR equation were selected based on analysis of topographic position and the spatial distribution of aboveground carbon stocks. The results showed that stratifying plot locations by topography can improve the calibration and application of ACD estimation using airborne LiDAR (R² = 0.94, RMSE = 5.81 Mg C ha⁻¹, bias = 0.59). These results strongly suggest that a general LiDAR-based approach can be used for mapping aboveground carbon stocks in western lowland Amazonian forests.
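The abstract does not give the form of the general LiDAR equation; a common choice in this literature is a power law relating ACD to LiDAR top-of-canopy height (TCH), calibrated by log-log regression. The sketch below assumes that form and uses synthetic data, so it is purely illustrative of the calibration step.

```python
import numpy as np

# Synthetic calibration subplots: top-of-canopy height (m) vs field ACD (Mg C/ha).
rng = np.random.default_rng(0)
tch = rng.uniform(10, 35, 40)
acd_field = 2.0 * tch ** 1.3 * rng.lognormal(0.0, 0.05, 40)

# Fit ACD = a * TCH^b by ordinary least squares in log-log space.
b, log_a = np.polyfit(np.log(tch), np.log(acd_field), 1)
a = np.exp(log_a)

acd_pred = a * tch ** b
rmse = np.sqrt(np.mean((acd_pred - acd_field) ** 2))
print(f"ACD ~ {a:.2f} * TCH^{b:.2f}, RMSE = {rmse:.1f} Mg C/ha")
```

Stratifying the calibration subplots by topographic position, as the paper does, amounts to fitting or validating such a model within strata rather than pooling all subplots.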
Abstract:
This work is an outreach approach to a ubiquitous recent problem in secondary-school education: how to counter students' decreasing interest in the natural sciences under the 'pressure' of convenient digital devices and applications. The approach rests on two features. First, empowering teenage students to understand the regular natural events around them, as very few of the educated people they meet can do. Second, an understanding that rests on each student's capability to test and verify experimental results from the oldest science, astronomy, with simple instruments as used from antiquity down to the Renaissance (a capability restricted to just solar and lunar motions). Because lengths in astronomy and in daily life are so disparate, astronomy basically involved observing and registering values of angles (along with times); the measurements were of two types, angles on the ground and angles in space as seen from the ground. First, the gnomon, a simple vertical stick introduced in Babylonia and Egypt, and then in Greece, is used to understand solar motion. The gnomon's shadow turns around during any given day, varying in length, and thus in the angle between the solar ray and the vertical, passing through a minimum (noon, at the meridian direction) while sweeping an angular range from sunrise to sunset. Furthermore, the minimum shadow length varies through the year: it is shortest, with the sun closest to the vertical, at the summer solstice, and longest at the winter solstice six months later. The extreme shadow directions at sunset and sunrise correspond to the solstices, the swept angular range being greatest in summer, over 180 degrees, and smallest in winter, with fewer daytime hours; in between, the spring and fall equinoxes occur, marked by collinear shadow directions at sunrise and sunset. The gnomon allows students to determine, in addition to latitude (about 40.4° North at Madrid, say), the inclination of the earth's equator to the plane of its orbit around the sun (the ecliptic); this fundamental quantity is given by half the difference between the angular distances of the sun from the vertical at the winter and summer solstices, with a value of about 23.5°. The day and year periods differing greatly, by about 2.5 orders of magnitude (1 day against 365 days), helps students correctly visualize and interpret the experimental measurements. Since the gnomon also serves to observe the moon's shadow at night, students can likewise determine the inclination of the lunar orbital plane, about 5 degrees away from the ecliptic, thus explaining why eclipses are infrequent. Independently, the earth taking longer between the spring and fall equinoxes than from fall to spring (the solar anomaly), again verified by the students, was explained in ancient Greek science, which posited orbits universally as circles or combinations of circles, by introducing the eccentric circle: the earth is placed some distance away from the orbital centre when considering the relative motion of the sun, which would then be closer to the earth in winter. In a sense, this can be seen as a hint of, and approximation to, the elliptic orbit proposed by Kepler many centuries later.
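The latitude and obliquity determination described above reduces to simple arithmetic on the two noon shadow angles. Below is a minimal sketch; the shadow lengths are hypothetical sample inputs chosen to match a Madrid-like latitude.

```python
import math

def noon_angle(shadow_len: float, gnomon_height: float) -> float:
    """Angle between solar ray and vertical at noon, in degrees."""
    return math.degrees(math.atan(shadow_len / gnomon_height))

# Hypothetical noon shadow lengths for a 1 m gnomon near Madrid.
theta_summer = noon_angle(0.305, 1.0)   # summer solstice: sun nearest vertical
theta_winter = noon_angle(2.036, 1.0)   # winter solstice: sun farthest

latitude  = (theta_winter + theta_summer) / 2   # ~40.4 deg North
obliquity = (theta_winter - theta_summer) / 2   # ~23.5 deg
print(f"latitude = {latitude:.1f} deg, obliquity = {obliquity:.1f} deg")
```

The sum of the two angles gives twice the latitude because the noon sun stands at latitude minus obliquity from the vertical in summer and latitude plus obliquity in winter.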
Abstract:
Optical fiber sensors are a technology that has matured in recent years; however, further development is needed for applications to natural materials such as rocks, which, being complex aggregates, can contain mineral particles and fractures much larger than the electrical strain gauges traditionally used to measure deformation in laboratory tests, so that the results obtained may not be representative. In this work, large-area, curved strain sensors based on fiber Bragg gratings (FBG) were designed, manufactured, and tested, with the aim of obtaining representative measurements on rocks containing minerals and structures of diverse compositions, sizes, and orientations. The manufacture of the transducer, its mechanical characterization, its calibration, and its evaluation in uniaxial compression tests on rock samples are presented. To verify the efficiency with which the rock's deformation is transmitted to the bonded sensor, an analysis of the strain transfer was also performed, including the effects of the adhesive, the sample, and the transducer. The experimental results indicate that the developed sensor provides reliable strain measurement and transfer, an advance necessary for its use on rocks and other heterogeneous materials. This points to an interesting perspective for applications on irregular surfaces, since the size and shape of the measurement area can be increased at will; it also makes more reliable results possible on small samples, and suggests the sensor's suitability for full-scale works, where traditional electrical systems have shown limitations.
Abstract:
To initiate homologous recombination, sequence similarity between two DNA molecules must be searched for and homology recognized. How the search for and recognition of homology occur remains unproven. We have examined the influences of DNA topology and the polarity of RecA–single-stranded (ss)DNA filaments on the formation of synaptic complexes promoted by RecA. Using two complementary methods and various ssDNA and duplex DNA molecules as substrates, we demonstrate that topological constraints on a small circular RecA–ssDNA filament prevent it from interwinding with its duplex DNA target at the homologous region. We were unable to detect homologous pairing between a circular RecA–ssDNA filament and its relaxed or supercoiled circular duplex DNA targets. However, the formation of synaptic complexes between an invading linear RecA–ssDNA filament and covalently closed circular duplex DNAs is promoted by supercoiling of the duplex DNA. The results imply that a triplex structure formed by non-Watson–Crick hydrogen bonding is unlikely to be an intermediate in homology searching promoted by RecA. Rather, the likely mechanism is a model in which RecA-mediated homology searching requires unwinding of the duplex DNA coupled with local strand exchange. Furthermore, we show that the polarity of the invading RecA–ssDNA does not affect its ability to pair and interwind with its circular target duplex DNA.
Abstract:
The acyclic nucleoside phosphonate analog 9-(2-phosphonylmethoxyethyl)adenine (PMEA) was recently found to be effective as an inhibitor of visna virus replication and cytopathic effect in sheep choroid plexus cultures. To study whether PMEA also affects visna virus infection in sheep, two groups of four lambs each were inoculated intracerebrally with 10^6.3 TCID50 of visna virus strain KV1772 and treated subcutaneously three times a week with PMEA at 10 and 25 mg/kg, respectively. The treatment was begun on the day of virus inoculation and continued for 6 weeks. A group of four lambs was infected in the same way but not treated. The lambs were bled weekly or biweekly and the leukocytes were tested for virus. At 7 weeks after infection, the animals were sacrificed, and cerebrospinal fluid (CSF) and samples of tissue from various areas of the brain and from lungs, spleen, and lymph nodes were collected for virus isolation and histopathologic examination. The PMEA treatment had a striking effect on visna virus infection, which was similar for both doses of the drug. The frequency of virus isolations was much lower in PMEA-treated than in untreated lambs, and the difference was particularly pronounced in the blood, CSF, and brain tissue. Furthermore, CSF cell counts were much lower and inflammatory lesions in the brain were much less severe in the treated lambs than in the untreated controls. The results indicate that PMEA inhibits the propagation and spread of visna virus in infected lambs and prevents brain lesions, at least during early infection. The drug caused no noticeable side effects during the 6 weeks of treatment.
Abstract:
This dissertation introduces an approach to generating tests of fail-safe behavior for web applications, and applies it to a commercial web application. We build models for both behavioral and mitigation requirements. We create mitigation tests from an existing functional black-box test suite by determining the failure types and points of failure to exercise, then weaving the required mitigation into the behavioral tests, according to failure-specific weaving rules, to generate a suite that tests proper mitigation of failures. A genetic algorithm (GA) determines the points of failure and failure types that need to be tested; mitigation test paths are then woven into the behavioral test at each point of failure. A simulator was developed to evaluate parameter choices for the GA. We show how to tune the fitness function, performing tuning experiments to determine what values to use for the exploration weight and the prospecting weight. We found that higher defect densities make prospecting and mining more successful, while lower mitigation-defect densities need more exploration. We compare the efficiency and effectiveness of the approach. First, the GA is compared to random selection; the results show that the GA performed better and that the approach remained robust as the search space grew. Second, we compare the GA against four coverage criteria. Test requirements generated by the GA are more efficient than three of the four criteria for large search spaces, while equally effective; for small search spaces, the GA is less effective than those three criteria. The fourth criterion is too weak and unable to find all defects in almost all cases. We also present a large case study of a mortgage system at one of our industrial partners and show how we formalize the approach, evaluating the use of a GA to create test requirements, including the choice of initial population, the multiplicity of runs, and the cost of evaluating fitness. Finally, we build a selective regression-testing approach based on the types of changes (add, delete, or modify) that can occur in the behavioral model, the fault model, the mitigation models, the weaving rules, and the state-event matrix, providing a systematic method with formalization steps for each type of change to the various models.
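The dissertation's fitness function and encoding are not given in the abstract; the sketch below is only a generic illustration of the kind of GA search involved, evolving (test index, failure point, failure type) triples toward a toy fitness. All weights, ranges, and the seeded defects are hypothetical.

```python
import random

N_TESTS, N_POINTS, N_FAIL_TYPES = 50, 20, 4   # hypothetical search space

def fitness(ind):
    # Toy stand-in: reward triples near seeded "defective" combinations.
    test, point, ftype = ind
    seeded = {(3, 7, 1), (41, 2, 0), (17, 15, 3)}
    return max(1.0 / (1 + abs(test - t) + abs(point - p) + abs(ftype - f))
               for t, p, f in seeded)

def mutate(ind):
    i = random.randrange(3)
    hi = (N_TESTS, N_POINTS, N_FAIL_TYPES)[i]
    ind = list(ind)
    ind[i] = random.randrange(hi)   # re-draw one gene uniformly
    return tuple(ind)

def ga(pop_size=30, generations=40):
    pop = [(random.randrange(N_TESTS), random.randrange(N_POINTS),
            random.randrange(N_FAIL_TYPES)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                      # selection
        pop = elite + [mutate(random.choice(elite)) for _ in elite]
    return max(pop, key=fitness)

print("best (test, point, failure type):", ga())
```

The exploration/prospecting trade-off the abstract describes would enter through the fitness function and mutation rate: more mutation favors exploration of new failure points, while selection pressure concentrates the search near previously rewarding ones.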
Abstract:
CuO/ceria-zirconia catalysts were prepared, characterized in depth (N2 adsorption-desorption isotherms at −196 °C, XRD, Raman spectroscopy, XPS, TEM and H2-TPR) and tested for NO oxidation to NO2 under TPR conditions, and for soot combustion at mild temperature (400 °C) in a NOx/O2 stream. Their behaviour was compared to that of a reference commercial Pt/alumina catalyst. The ceria-zirconia support was prepared by co-precipitation, and different amounts of copper (0.5, 1, 2, 4 and 6 wt%) were loaded by incipient-wetness impregnation. The results revealed that copper is well dispersed on the ceria-zirconia support at low copper loadings; CuO particles were identified by XRD only in the samples with 4 and 6% copper. A very low copper loading significantly increases the activity for NO oxidation to NO2 relative to the bare ceria-zirconia support, and an optimum was found at the 4% CuO/ceria-zirconia composition, which showed very high activity (54% at 348 °C). The soot combustion rate at 400 °C obtained with the 2% CuO/ceria-zirconia catalyst is slightly lower than that of 1% Pt/alumina per mass of catalyst, but higher per unit catalyst cost.
Abstract:
Background: The Clinical Learning Environment, Supervision and Nurse Teacher scale is a reliable and valid instrument for evaluating the quality of the clinical learning process in international nursing education contexts. Objectives: This paper reports the development and psychometric testing of the Spanish version of the Clinical Learning Environment, Supervision and Nurse Teacher scale. Design: Cross-sectional validation study of the scale. Setting: 10 public and private hospitals in the Alicante area, and the Faculty of Health Sciences (University of Alicante, Spain). Participants: 370 student nurses on clinical placement (January 2011 to March 2012). Methods: The scale was translated using the modified direct translation method. Statistical analyses were performed using PASW Statistics 18 and AMOS 18.0.0 software. A multivariate analysis was conducted to assess construct validity, and Cronbach's alpha coefficient was used to evaluate instrument reliability. Results: An exploratory factor analysis identified the five dimensions of the original version and explained 66.4% of the variance. Confirmatory factor analysis supported the factor structure of the Spanish version of the instrument. Cronbach's alpha for the full scale was .95, ranging from .80 to .97 for the subscales. Conclusion: This version of the Clinical Learning Environment, Supervision and Nurse Teacher scale showed acceptable psychometric properties for use as an assessment scale in Spanish-speaking countries.
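Cronbach's alpha, the reliability coefficient reported above, has a simple closed form; below is a minimal sketch computing it from a respondents-by-items score matrix, with made-up Likert data for illustration.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents, items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Made-up 5-point Likert responses: 6 respondents x 4 items.
data = np.array([[4, 5, 4, 5],
                 [3, 3, 4, 3],
                 [5, 5, 5, 4],
                 [2, 2, 3, 2],
                 [4, 4, 4, 5],
                 [3, 4, 3, 3]])
print(f"alpha = {cronbach_alpha(data):.2f}")
```

Values of .80 to .97, as reported for the subscales, indicate that the items within each subscale covary strongly relative to the spread of total scores.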