902 results for Density-based Scanning Algorithm


Relevance:

30.00%

Publisher:

Abstract:

The development of correct programs is a core problem in computer science. Although formal verification methods for establishing correctness with mathematical rigor are available, programmers often find them difficult to put into practice. One hurdle is deriving the loop invariants and proving that the code maintains them. So-called correct-by-construction methods aim to alleviate this issue by integrating verification into the programming workflow. Invariant-based programming is a practical correct-by-construction method in which the programmer first establishes the invariant structure and then incrementally extends the program, adding code and proving after each addition that the code is consistent with the invariants. In this way, the program is kept internally consistent throughout its development, and the construction of the correctness arguments (proofs) becomes an integral part of the programming workflow. A characteristic of the approach is that programs are described as invariant diagrams, a graphical notation similar to the state charts familiar to programmers.

Invariant-based programming is a new method that has not yet been evaluated in large-scale studies. The most important prerequisite for feasibility on a larger scale is a high degree of automation. The goal of the Socos project has been to build tools that assist the construction and verification of programs using the method. This thesis describes the implementation and evaluation of a prototype tool in the context of the Socos project. The tool supports the drawing of the diagrams, automatic derivation and discharging of verification conditions, and interactive proofs, and it is used to develop programs that are correct by construction. The tool consists of a diagrammatic environment connected to a verification condition generator and an existing state-of-the-art theorem prover. Its core is a semantics for translating diagrams into verification conditions, which are sent to the underlying theorem prover. We describe a concrete method for 1) deriving sufficient conditions for total correctness of an invariant diagram; 2) sending the conditions to the theorem prover for simplification; and 3) reporting the results of the simplification to the programmer in a way that is consistent with the invariant-based programming workflow and that allows errors in the program specification to be detected efficiently. The tool uses an efficient automatic proof strategy to prove as many conditions as possible automatically and lets the remaining conditions be proved interactively. It is based on the verification system PVS and uses the SMT (Satisfiability Modulo Theories) solver Yices as a catch-all decision procedure; conditions that were not discharged automatically may be proved interactively using the PVS proof assistant. The programming workflow is very similar to the process by which a mathematical theory is developed inside a computer-supported theorem proving environment such as PVS. The programmer reduces a large verification problem, with the aid of the tool, into a set of smaller problems (lemmas), and can substantially improve the degree of proof automation by developing specialized background theories and proof strategies to support the specification and verification of a specific class of programs. We demonstrate this workflow by describing in detail the construction of a verified sorting algorithm.

Tool-supported verification often has little to no presence in computer science (CS) curricula. Furthermore, program verification is frequently introduced as an advanced and purely theoretical topic that is not connected to the workflow taught in the early, practically oriented programming courses. Our hypothesis is that verification could be introduced early in the CS education, and that verification tools could be used in the classroom to support the teaching of formal methods. A prototype of Socos has been used in a course at Åbo Akademi University targeted at first- and second-year undergraduate students. We evaluate the use of Socos in the course as part of a case study carried out in 2007.
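To give a flavor of the workflow, here is a minimal sketch in Python rather than Socos's diagrammatic notation; the invariant and the pre- and postconditions are merely asserted at run time, standing in for the proof obligations that the tool would derive and discharge statically:

```python
def sum_to(n: int) -> int:
    """Sum 0..n-1, developed around an explicitly stated loop invariant.

    In invariant-based programming the invariant (total == 0 + 1 + ... + (i-1),
    i.e. total == i*(i-1)//2) is written down first; each added transition
    must then be shown to preserve it. Here the asserts only approximate
    those proof obligations at run time.
    """
    assert n >= 0                               # precondition
    i, total = 0, 0
    while i < n:
        assert total == i * (i - 1) // 2        # invariant holds before the step
        total += i
        i += 1
        assert total == i * (i - 1) // 2        # ... and is preserved by it
    assert total == n * (n - 1) // 2            # postcondition
    return total
```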

Relevance:

30.00%

Publisher:

Abstract:

In field experiments, the density of Macrophomina phaseolina microsclerotia in root tissues of naturally colonized soybean cultivars was quantified. The density of free sclerotia in the soil was determined for plots under crop rotation (soybean-corn) and soybean monoculture soon after the soybean harvest. Natural infection by M. phaseolina was also determined for the roots of weeds grown in the experimental area. To verify the ability of M. phaseolina to colonize dead substrates, senesced stem segments from the main plant species of the agricultural system of southern Brazil were exposed on naturally infested soil for 30 and 60 days. To quantify the sclerotia, the methodology of Cloud and Rupe (1991) and Mengistu et al. (2007) was employed. Sclerotium density, assessed as colony-forming units (CFU), ranged from 156 to 1,108/g root tissue. Sclerotium longevity, also assessed by CFU, was 157 days for the rotation system and 163 days for the monoculture system. M. phaseolina did not saprophytically colonize any dead stem segments of Avena strigosa, Avena sativa, Hordeum vulgare, Brassica napus, Gossypium hirsutum, Secale cereale, Helianthus annuus, Triticosecale rimpaui, or Triticum aestivum. M. phaseolina was isolated from infected root tissues of Amaranthus viridis, Bidens pilosa, Cardiospermum halicacabum, Euphorbia heterophylla, Ipomoea sp., and Richardia brasiliensis. The survival mechanisms of M. phaseolina studied in this paper comprise microsclerotium longevity in soybean root tissues and free in the soil, as well as asymptomatic colonization of weeds.

Relevance:

30.00%

Publisher:

Abstract:

ABSTRACT In the present study, onion plants were tested under controlled conditions for the development of a climate model based on the influence of temperature (10, 15, 20 and 25°C) and leaf wetness duration (6, 12, 24 and 48 hours) on the severity of Botrytis leaf blight of onion, caused by Botrytis squamosa. The relative lesion density was influenced by both temperature and leaf wetness duration (P < 0.05). The disease was most severe at 20°C. Data were subjected to nonlinear regression analysis. A generalized beta function was used to fit the severity and temperature data, while a logistic function was chosen to represent the effect of leaf wetness on the severity of Botrytis leaf blight. The response surface obtained as the product of the two functions was expressed as

ES = 0.008192 * ((x - 5)^1.01089) * ((30 - x)^1.19052) * (0.33859 / (1 + 3.77989 * exp(-0.10923 * y)))

where ES represents the estimated severity value (0-1); x, the temperature (°C); and y, the leaf wetness duration (in hours). This climate model should be validated under field conditions to verify its use in a computational system for forecasting Botrytis leaf blight in onion.
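The fitted surface is straightforward to evaluate; a minimal transcription in Python (the function name is mine, and the 5 °C and 30 °C biological limits are read off the equation above):

```python
import math

def estimated_severity(temp_c: float, wetness_h: float) -> float:
    """Estimated Botrytis leaf blight severity from the fitted response surface.

    temp_c:    temperature in °C (model fitted for 10-25 °C, with limits at 5 and 30 °C)
    wetness_h: leaf wetness duration in hours
    """
    if not 5.0 < temp_c < 30.0:
        return 0.0  # outside the temperature limits of the beta function
    beta = 0.008192 * (temp_c - 5.0) ** 1.01089 * (30.0 - temp_c) ** 1.19052
    logistic = 0.33859 / (1.0 + 3.77989 * math.exp(-0.10923 * wetness_h))
    return beta * logistic

# Example: severity peaks near 20 °C with long wetness periods.
print(estimated_severity(20.0, 24.0))
```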

Relevance:

30.00%

Publisher:

Abstract:

As technology geometries have shrunk to the deep submicron regime, the communication delay and power consumption of global interconnects in high-performance Multi-Processor Systems-on-Chip (MPSoCs) are becoming a major bottleneck. The Network-on-Chip (NoC) architecture paradigm, based on a modular packet-switched mechanism, can address many of the on-chip communication issues, such as the performance limitations of long interconnects and the integration of a large number of Processing Elements (PEs) on a chip. The choice of routing protocol and NoC structure can have a significant impact on performance and power consumption in on-chip networks. In addition, building a high-performance, area- and energy-efficient on-chip network for multicore architectures requires a novel on-chip router allowing a larger network to be integrated on a single die with reduced power consumption. On top of that, network interfaces are employed to decouple computation resources from communication resources, to provide synchronization between them, and to achieve backward compatibility with existing IP cores.

Three adaptive routing algorithms are presented as part of this thesis. The first routing protocol is a congestion-aware adaptive routing algorithm for 2D mesh NoCs which does not support multicast (one-to-many) traffic, while the other two are adaptive routing models supporting both unicast (one-to-one) and multicast traffic. A streamlined on-chip router architecture is also presented for avoiding congested areas in 2D mesh NoCs by employing efficient input and output selection. The output selection utilizes an adaptive routing algorithm based on the congestion condition of neighboring routers, while the input selection allows packets to be serviced from each input port according to its congestion level. Moreover, in order to increase memory parallelism and provide compatibility with existing IP cores in network-based multiprocessor architectures, adaptive network interface architectures are presented that use multiple SDRAMs which can be accessed simultaneously. In addition, a smart memory controller is integrated into the adaptive network interface to improve memory utilization and reduce both memory and network latencies.

Three-Dimensional Integrated Circuits (3D ICs) have been emerging as a viable candidate for achieving better performance and package density than traditional 2D ICs, and combining the benefits of the 3D IC and NoC schemes provides a significant performance gain for 3D architectures. In recent years, inter-layer communication across multiple stacked layers (the vertical channel) has attracted a lot of interest. In this thesis, a novel adaptive pipeline bus structure is proposed for inter-layer communication to improve performance by reducing the delay and complexity of traditional bus arbitration. In addition, two mesh-based topologies for 3D architectures are introduced to mitigate the inter-layer footprint and power dissipation on each layer with a small performance penalty.
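To illustrate the idea behind congestion-aware output selection, here is a minimal sketch (not the router proposed in the thesis; the direction names and the buffer-occupancy metric are assumptions). Among the minimal-path directions toward the destination, the least congested neighbor is chosen:

```python
def select_output(dx: int, dy: int, occupancy: dict) -> str:
    """Minimal adaptive output selection for a 2D mesh router.

    dx, dy:    hop offsets to the destination (destination - current position)
    occupancy: buffer occupancy reported by each neighboring router,
               keyed by direction "N"/"S"/"E"/"W"
    A real router must additionally guarantee deadlock freedom,
    e.g. via a turn model or virtual channels, which this sketch ignores.
    """
    candidates = []
    if dx > 0: candidates.append("E")
    if dx < 0: candidates.append("W")
    if dy > 0: candidates.append("N")
    if dy < 0: candidates.append("S")
    if not candidates:
        return "LOCAL"  # the packet has reached its destination
    return min(candidates, key=occupancy.get)  # least congested minimal direction

# Example: destination 2 hops east, 1 hop north; the east port is congested.
print(select_output(2, 1, {"N": 1, "S": 0, "E": 7, "W": 2}))  # -> "N"
```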

Relevance:

30.00%

Publisher:

Abstract:

Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages the methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have received the majority of attention in the field. In this thesis we focus on another type of learning problem: learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction, and automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven to be challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, or how these techniques can be implemented efficiently.

The contributions of this thesis are as follows. First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning, and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, one of the most well-established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions to cross-validation when using the approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts: Part I provides the background for the research work and summarizes the most central results, while Part II consists of the five original research articles that are the main contribution of this thesis.
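A naive but direct rendering of leave-pair-out cross-validation for AUC (a sketch: `train_predict` is a placeholder for any learner, and the thesis derives matrix-algebra shortcuts that avoid the per-pair retraining done here):

```python
import itertools
import numpy as np

def leave_pair_out_auc(X, y, train_predict):
    """Leave-pair-out cross-validation estimate of AUC.

    For every (positive, negative) pair, the pair is held out, a model is
    trained on the remaining examples, and we record whether the positive
    example receives the higher predicted score (ties count as 0.5).

    X, y:          numpy arrays (features, binary labels in {0, 1})
    train_predict: callable (X_train, y_train, X_test) -> test scores
    """
    pos = np.flatnonzero(y == 1)
    neg = np.flatnonzero(y == 0)
    wins = 0.0
    for i, j in itertools.product(pos, neg):
        mask = np.ones(len(y), dtype=bool)
        mask[[i, j]] = False                       # hold out the pair
        s_pos, s_neg = train_predict(X[mask], y[mask], X[[i, j]])
        wins += 1.0 if s_pos > s_neg else 0.5 if s_pos == s_neg else 0.0
    return wins / (len(pos) * len(neg))
```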

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a new approach to tomographic instrumentation for agriculture based on Compton scattering, which allows simultaneous measurement of the density and moisture of soil samples. Compton tomography is a technique that can be used to obtain a spatial map of the electronic density of samples. Quantitative results can be obtained by using a reconstruction algorithm that takes into account the absorption of both the incident and the scattered radiation. Results show a coefficient of linear correlation better than 0.81 when soil density measurements based on this method are compared with direct transmission tomography. For soil water content, a coefficient of linear correlation better than 0.79 was found when compared with measurements obtained by time domain reflectometry (TDR). In addition, a set of Compton scatter images is presented to illustrate the efficacy of this imaging technique, which enables improved spatial variability analysis of pre-established planes.

Relevance:

30.00%

Publisher:

Abstract:

Deep bedding is an alternative swine production system, used especially in the finishing phase, whose byproduct can be recycled, reducing the environmental impact. The objectives of this study were to characterize the ash resulting from the controlled burning of rice-husk-based swine deep bedding (SDBA) and to evaluate its performance in composites as a partial substitute for Portland cement (PC). To measure the differences between SDBA and rice husk ash (RHA), used as a reference, we characterized: particle size distribution, real specific density, X-ray diffraction, electrical conductivity, scanning electron microscopy, chemical analysis, and loss on ignition. Samples were prepared for two experimental series: a control, and one in which 30% of the Portland cement mass was replaced with SDBA. According to the results of the physical and mechanical characterization, composites with SDBA can be used as a constructive element in rural construction.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents software, developed in the Delphi programming language, to compute a reservoir's annual regulated active storage based on the sequent-peak algorithm. Mathematical models used for that purpose generally require extended hydrological series, and the analysis of those series is usually performed with spreadsheets or graphical representations; this motivated the development of software for calculating reservoir active capacity. An example calculation is shown using 30 years (1977 to 2009) of monthly mean flow data from the Corrente River, located in the São Francisco River Basin, Brazil. As an additional tool, an interface was developed to support water resources management, helping to manipulate data and to highlight information of interest to the user. Moreover, with that interface, irrigation districts where water consumption is higher can be analyzed as a function of specific seasonal water demand situations. A practical application shows that the program performs the calculation originally proposed. It was designed to keep information organized and retrievable at any time, and to simulate seasonal water demands throughout the year, contributing to studies concerning reservoir projects. With this functionality, the program is an important tool for decision making in water resources management.
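The core of the sequent-peak algorithm is compact. A minimal sketch follows, in Python rather than the paper's Delphi, and assuming a constant monthly demand (the paper's interface also handles seasonal demands):

```python
def sequent_peak_storage(inflows, demand):
    """Required active storage by the sequent-peak algorithm.

    inflows: sequence of monthly inflow volumes
    demand:  constant monthly demand (release) volume
    The cumulative deficit K_t = max(0, K_{t-1} + demand - inflow_t)
    is tracked; the required capacity is its maximum over the series.
    The record is cycled twice so that a critical period wrapping
    around the end of the record is still captured.
    """
    deficit, capacity = 0.0, 0.0
    for q in list(inflows) * 2:          # two passes over the record
        deficit = max(0.0, deficit + demand - q)
        capacity = max(capacity, deficit)
    return capacity

# Example: constant demand of 40 volume units against a varying inflow record.
print(sequent_peak_storage([50, 10, 20, 80, 60, 30], demand=40))  # -> 50.0
```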

Relevance:

30.00%

Publisher:

Abstract:

Sisal fiber is an important agricultural product used in the manufacture of ropes and rugs and as reinforcement of polymeric or cement-based composites. However, the fiber production process generates a large amount of residues, which currently have low potential for commercial use. The aim of this study is to characterize the agricultural residues from the production and processing of sisal fiber, called field bush and refugo, and to verify the potential of their use as reinforcement of cement-based composites. The residues were treated with wet-dry cycles and evaluated using tensile testing of fibers, scanning electron microscopy (SEM), and Fourier transform infrared (FTIR) spectroscopy. Compatibility with the cement-based matrix was evaluated through fiber pull-out tests and flexural tests on composites reinforced with 2% of sisal residues. The results indicate that the use of treated residue allows the production of composites with good mechanical properties, superior to those of traditional composites reinforced with natural sisal fibers.

Relevance:

30.00%

Publisher:

Abstract:

Clustering soil and crop data can be used as a basis for the definition of management zones because the data are grouped into clusters based on the similar interaction of these variables. Therefore, the objective of this study was to identify management zones using fuzzy c-means clustering analysis based on the spatial and temporal variability of soil attributes and corn yield. The study site (18 m by 250 m) was located in Jaboticabal, São Paulo, Brazil. Corn yield was measured in one hundred 4.5 m by 10 m cells along four parallel transects (25 observations per transect) over five growing seasons between 2001 and 2010. Soil chemical and physical attributes were measured. The SAS procedure MIXED was used to identify which variable(s) most influenced the spatial variability of corn yield over the five study years. Base saturation (BS) was the variable that best related to corn yield; thus, semivariogram models were fitted for BS and corn yield, and the data were then kriged. The Management Zone Analyst software was used to carry out the fuzzy c-means clustering algorithm. The optimum number of management zones can change over time, as can the degree of agreement between the BS and corn yield management zone maps. Thus, it is very important to take the temporal variability of crop yield and soil attributes into account to delineate management zones accurately.
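For reference, a minimal sketch of the fuzzy c-means update rules used for such zoning (illustrative only; Management Zone Analyst additionally computes cluster-validity indices to choose the number of zones):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means clustering.

    X: (n_samples, n_features) array, e.g. kriged BS and yield values
    c: number of clusters (candidate management zones)
    m: fuzziness exponent (> 1)
    Returns (membership matrix U of shape (n, c), cluster centers).
    """
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))        # random initial memberships
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # membership-weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))            # closer center -> higher membership
        U /= U.sum(axis=1, keepdims=True)             # each row sums to 1
    return U, centers
```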

Relevance:

30.00%

Publisher:

Abstract:

The maintenance of electric distribution networks is a topical question for distribution system operators because of the increasing significance of failure costs. In this dissertation, the maintenance practices of distribution system operators are analyzed and a theory for scheduling maintenance activities and reinvestment in distribution components is created. The scheduling is based on the deterioration of components and the failure rates that increase with aging. A dynamic programming algorithm is used to solve the maintenance problem posed by the increasing failure rates of the network. Other impacts of network maintenance, such as environmental and regulatory reasons, are outside the scope of this thesis; likewise, tree trimming of line corridors and major network disturbances are not included in the problem optimized here. Four dynamic programming models are presented and tested, implemented in VBA. Two different kinds of test networks are used for testing. Because electric distribution system operators prefer to operate on bigger component groups, the optimal timing for component groups is also analyzed. A maintenance software package is created to apply the presented theories in practice, and an overview of the program is given.
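As a minimal sketch of the kind of dynamic program involved, consider a single component whose failure rate grows with age; each year the operator either keeps it (paying expected failure costs) or replaces it. The cost figures and hazard function below are illustrative assumptions, not the thesis models:

```python
def optimal_replacement_policy(horizon, replace_cost, failure_cost, hazard):
    """Backward-induction DP for timing the replacement of one aging component.

    horizon:       planning horizon in years
    replace_cost:  cost of replacing the component
    failure_cost:  expected cost of one failure (repair + outage)
    hazard(age):   expected failures per year at a given component age
    V[t][age] is the minimal expected cost from year t onward.
    Returns (expected total cost, policy) with policy[t][age] in {"keep", "replace"}.
    """
    V = [[0.0] * (horizon + 2) for _ in range(horizon + 1)]   # V[horizon][*] = 0
    policy = [[None] * (horizon + 1) for _ in range(horizon)]
    for t in range(horizon - 1, -1, -1):
        for age in range(horizon + 1):
            keep = failure_cost * hazard(age) + V[t + 1][age + 1]
            replace = replace_cost + failure_cost * hazard(0) + V[t + 1][1]
            V[t][age] = min(keep, replace)
            policy[t][age] = "keep" if keep <= replace else "replace"
    return V[0][0], policy

# Example: a component whose failure rate grows quadratically with age.
cost, policy = optimal_replacement_policy(
    horizon=20, replace_cost=10.0, failure_cost=50.0,
    hazard=lambda age: 0.001 * age ** 2)
print(cost)
```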

Relevance:

30.00%

Publisher:

Abstract:

Ion exchange membranes are indispensable for the separation of ionic species. They can discriminate between anions and cations depending on the type of fixed ionic group present in the membrane. These conventional ion exchange (CIX) membranes have exceptional ionic conductivity, which is advantageous in various electromembrane separation processes such as electrodialysis, electrodeionisation and electrochemical ion exchange. The main disadvantage of CIX membranes is their high electrical resistance, owing to the fact that the membranes are electronically non-conductive. An alternative is electroactive ion exchange membranes, which are both ionically and electronically conducting. Polypyrrole (PPy) is a type of electroactive ion exchange material as well as a commonly known conducting polymer. When PPy membranes are repeatedly reduced and oxidised, ions are pumped through the membrane. The main aim of this thesis was to develop electroactive cation transport membranes based on PPy for the selective transport of divalent cations. The membranes developed were composed of PPy films deposited on commercially available support materials. To carry out this study, cation exchange membranes based on PPy doped with immobile anions were prepared. Two types of dopant anions known to interact with divalent metal ions were considered, namely 4-sulphonic calix[6]arene (C6S) and carboxylated multiwalled carbon nanotubes (CNT). The transport of ions across membranes containing PPy doped with polystyrene sulphonate (PSS) and PPy doped with para-toluene sulphonate (pTS) was also studied in order to understand the nature of ion transport and permeability across PPy(CNT) and PPy(C6S) membranes. In the course of these studies, membrane characterisation was performed using electrochemical quartz crystal microbalance (EQCM) and scanning electron microscopy (SEM). The permeability of the membranes towards divalent cations was explored using a two-compartment transport cell. EQCM results demonstrated that the ion exchange behaviour of polypyrrole depends on a number of factors, including the type of dopant anion present, the type of ions present in the surrounding medium, the scan rate used during the experiment, and the previous history of the polymer film. The morphology of PPy films was found to change when the dopant anion was varied, and in some cases even when the thickness of the film was altered. In nearly all cases the permeability of the membranes towards metal ions followed the order K+ > Ca2+ > Mn2+. The one exception was PPy(C6S), for which the permeability followed the order Ca2+ ≥ K+ > Mn2+ > Co2+ > Cr3+. These permeability sequences show a strong dependence on the size of the metal ions, with the metal ions having the smallest hydrated radii exhibiting the highest flux. Another factor that affected the permeability towards metal ions was the thickness of the PPy films: the thinnest films showed the highest metal ion fluxes. Electrochemical control over ion transport across the PPy(CNT) membrane was obtained when the films were deposited on track-etched Nucleopore® membranes as the support material. In contrast, the flux of ions across the same film was concentration-gradient dependent when the polymer was deposited on polyvinylidene difluoride membranes as the support material. However, electrochemical control over metal ion transport was achieved with a bilayer type of PPy film consisting of PPy(pTS)/PPy(CNT), irrespective of the type of support material.

In the course of studying macroscopic charge balance during transport experiments performed in a two-compartment transport cell, it was observed that PPy films were non-permselective. A clear correlation was observed between the change in pH of the receiving solution and the ions transported across the membrane: a decrease in solution pH was detected when the polymer membrane acted primarily as an anion exchanger, while an increase in pH occurred when it functioned as a cation exchanger. When there was an approximately equal flux of anions and cations across the polymer membrane, the pH of the receiving solution was in the range 6-8. These observations suggest that macroscopic charge balance during the transport of cations and anions across polypyrrole membranes was maintained by the introduction of anions (OH-) and cations (H+) produced via the electrolysis of water.

Relevance:

30.00%

Publisher:

Abstract:

In this doctoral thesis, methods are developed to estimate the expected power cycling life of power semiconductor modules based on chip temperature modeling. Frequency converters operate under dynamic loads in most electric drives. The varying loads cause thermal expansion and contraction, which stresses the internal boundaries between the material layers in the power module and eventually wears out the semiconductor modules. The wear-out cannot be detected by traditional temperature or current measurements inside the frequency converter. Therefore, it is important to develop a method to predict the end of the converter's lifetime. The thesis concentrates on power-cycling-related failures of insulated gate bipolar transistors (IGBTs). Two types of power modules are discussed: a direct bonded copper (DBC) sandwich structure with and without a baseplate. The most common failure mechanisms are reviewed, and methods to improve the power cycling lifetime of power modules are presented. Power cycling curves are determined for a module with a lead-free solder by accelerated power cycling tests. A lifetime model is selected and its parameters are updated based on the power cycling test results. According to the measurements, the power cycling lifetime of modern IGBT power modules has improved by a factor of more than 10 during the last decade. It is also observed that a 10 °C increase in the chip temperature cycle amplitude decreases the lifetime by 40%. A thermal model for chip temperature estimation is developed, based on estimating the power losses of the chip from the output current of the frequency converter. The model is verified with purpose-built test equipment, which allows simultaneous measurement and simulation of the chip temperature with an arbitrary load waveform. The measurement system is shown to be convenient for studying the thermal behavior of the chip, and the thermal model is found to estimate the temperature to within 5 °C. The temperature cycles that the power semiconductor chip has experienced are counted by the rainflow algorithm, and the counted cycles are compared with the experimentally verified power cycling curves to estimate the life consumption for the mission profile of the drive. The methods are validated by the lifetime estimation of a power module in a direct-driven wind turbine. The estimated lifetime of the IGBT power module in a direct-driven wind turbine is 15 000 years, if the turbine is located in south-eastern Finland.
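A sketch of the final life-consumption step, combining rainflow-counted temperature cycles with a power cycling curve through Miner's rule. The Coffin-Manson-type constants below are illustrative assumptions, not the fitted values from the thesis:

```python
def cycles_to_failure(delta_T, A=3.0e14, n=5.0):
    """Coffin-Manson-type power cycling curve N_f = A * dT^(-n).

    A and n are illustrative placeholders; in the thesis the lifetime
    model parameters are fitted to accelerated power cycling tests.
    """
    return A * delta_T ** (-n)

def life_consumption(counted_cycles):
    """Miner's rule: accumulated damage over rainflow-counted cycles.

    counted_cycles: list of (delta_T, count) pairs, e.g. produced by a
    rainflow algorithm from the simulated chip temperature waveform.
    End of life is reached when the accumulated damage equals 1.0.
    """
    return sum(count / cycles_to_failure(dT) for dT, count in counted_cycles)

# Example mission profile per year: many small cycles, a few large ones.
annual = [(10.0, 200_000), (30.0, 5_000), (60.0, 50)]
damage_per_year = life_consumption(annual)
print(f"estimated lifetime: {1.0 / damage_per_year:.0f} years")
```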

Relevance:

30.00%

Publisher:

Abstract:

Communication, the flow of ideas and information between individuals in a social context, is the heart of educational experience. Constructivism and constructivist theories form the foundation for the collaborative learning processes of creating and sharing meaning in online educational contexts. The Learning and Collaboration in Technology-enhanced Contexts (LeCoTec) course comprised of 66 participants drawn from four European universities (Oulu, Turku, Ghent and Ramon Llull). These participants were split into 15 groups with the express aim of learning about computer-supported collaborative learning (CSCL). The Community of Inquiry model (social, cognitive and teaching presences) provided the content and tools for learning and researching the collaborative interactions in this environment. The sampled comments from the collaborative phase were collected and analyzed at chain-level and group-level, with the aim of identifying the various message types that sustained high learning outcomes. Furthermore, the Social Network Analysis helped to view the density of whole group interactions, as well as the popular and active members within the highly collaborating groups. It was observed that long chains occur in groups having high quality outcomes. These chains were also characterized by Social, Interactivity, Administrative and Content comment-types. In addition, high outcomes were realized from the high interactive cases and high-density groups. In low interactive groups, commenting patterned around the one or two central group members. In conclusion, future online environments should support high-order learning and develop greater metacognition and self-regulation. Moreover, such an environment, with a wide variety of problem solving tools, would enhance interactivity.
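As a small illustration of the density measure involved (a sketch under the usual undirected-network definition; the member names and reply pairs are invented):

```python
from collections import Counter
from itertools import combinations

def group_density_and_activity(members, replies):
    """Whole-group interaction density and per-member activity.

    members: list of member identifiers
    replies: list of (author, addressee) comment pairs
    Density = distinct interacting pairs / all possible pairs;
    the most active commenters approximate the 'popular and active
    members' examined in the Social Network Analysis.
    """
    pairs = {frozenset(p) for p in replies if p[0] != p[1]}
    possible = len(list(combinations(members, 2)))
    density = len(pairs) / possible if possible else 0.0
    activity = Counter(author for author, _ in replies)
    return density, activity.most_common()

density, ranking = group_density_and_activity(
    ["ana", "ben", "cai", "dee"],
    [("ana", "ben"), ("ben", "ana"), ("cai", "ana"), ("ana", "dee")])
print(density, ranking)  # 0.5, ana is the most active member
```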