973 results for PER method
Abstract:
Today's programming languages are supported by powerful third-party APIs. For a given application domain, it is common to have many competing APIs that provide similar functionality. Programmer productivity therefore depends heavily on the programmer's ability to discover suitable APIs, both during an initial coding phase and during software maintenance. The aim of this work is to support the discovery and migration of math APIs. Math APIs are at the heart of many application domains, ranging from machine learning to scientific computation. Our approach, called MATHFINDER, combines executable specifications of mathematical computations with unit tests (operational specifications) of API methods. Given a math expression, MATHFINDER synthesizes pseudo-code composed of API methods to compute the expression by mining unit tests of the API methods. We present a sequential version of our unit test mining algorithm and also design a more scalable data-parallel version. We perform an extensive evaluation of MATHFINDER (1) for API discovery, where math algorithms are to be implemented from scratch, and (2) for API migration, where client programs utilizing a math API are to be migrated to another API. We evaluated the precision and recall of MATHFINDER on a diverse collection of math expressions, culled from algorithms used in a wide range of application areas such as control systems and structural dynamics. In a user study evaluating the productivity gains obtained by using MATHFINDER for API discovery, the programmers who used MATHFINDER finished their programming tasks twice as fast as their counterparts who used the usual techniques such as web and code search, IDE code completion, and manual inspection of library documentation. For the problem of API migration, as a case study, we used MATHFINDER to migrate Weka, a popular machine learning library. Overall, our evaluation shows that MATHFINDER is easy to use, provides highly precise results across several math APIs and application domains even with a small number of unit tests per method, and scales to large collections of unit tests.
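The mining step can be pictured as a filter over candidate methods: a method matches an expression when its recorded test inputs reproduce the expression's outputs. The Python sketch below illustrates this idea with invented method names and test data; it is a minimal sketch of the concept, not MATHFINDER's actual algorithm.

```python
import math

def target_expression(x):
    """The math expression to implement, e.g. sqrt(x) + 1."""
    return math.sqrt(x) + 1

# Each candidate API method comes with (input, output) pairs mined from
# its unit tests. Names and data here are invented for illustration.
candidates = {
    "LibA.sqrtPlusOne": [(4.0, 3.0), (9.0, 4.0)],
    "LibB.square":      [(4.0, 16.0), (9.0, 81.0)],
}

def discover(expr, candidates, tol=1e-9):
    """Return names of methods whose test I/O pairs agree with expr."""
    return [name for name, pairs in candidates.items()
            if all(abs(expr(x) - y) <= tol for x, y in pairs)]

print(discover(target_expression, candidates))  # ['LibA.sqrtPlusOne']
```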
Abstract:
The objective of this work is to propose a methodology for assessing the condition of the green areas of the Brazilian Navy (Marinha do Brasil, MB), to be used in future management plans for these areas. The methodological procedure consisted of administering a questionnaire to all naval organizations in Brazil, covering a total of 55 areas distributed across the national territory. For the study, 14 MB areas located in the state of Rio de Janeiro were selected. The research, documentary and exploratory, was applied to a case study of the Navy Hydrography Base in Niterói (BHMN) and Cabo Frio Island (ICBFR). The questionnaire responses resulted in the construction of a matrix proposing environmental indicators to support a management plan for the green areas. Two methods were employed to classify and evaluate the proposed indicators: the Social Carbon Method (MCS) and the Pressure-State-Response (PER) method. The MCS was modified to develop an initial diagnosis of the areas and subsequent monitoring of the implemented management plan. The PER method was used to classify and evaluate the indicators with a view to detailing the management plan and proposing recommendations for its implementation. The study led to the conclusion that the methodology can be applied to the properties under the MB's naval administration as well as to other regions. It is hoped that the results of this work will contribute to improving the management of these areas, some of which are already impacted by anthropogenic actions in their surroundings. In many cases, these areas are unique and important for maintaining the remaining biodiversity of the Atlantic Forest (Mata Atlântica) biome.
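As a minimal illustration of the Pressure-State-Response (PER/PSR) logic, the sketch below tags indicators with the role they play and groups them, which is the kind of classification such an indicator matrix supports. The indicators listed are invented examples, not the thesis's actual matrix.

```python
from collections import defaultdict

# Invented example indicators, each tagged with its PSR role.
indicators = {
    "vegetation cover loss in the surroundings": "Pressure",
    "remaining native forest area (ha)":         "State",
    "existence of a green-area management plan": "Response",
}

by_category = defaultdict(list)
for indicator, category in indicators.items():
    by_category[category].append(indicator)

for category in ("Pressure", "State", "Response"):
    print(category, "->", by_category[category])
```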
Abstract:
This thesis describes Optimist, an optimizing compiler for the Concurrent Smalltalk language developed by the Concurrent VLSI Architecture Group. Optimist compiles Concurrent Smalltalk to the assembly language of the Message-Driven Processor (MDP). The compiler includes numerous optimization techniques such as dead code elimination, dataflow analysis, constant folding, move elimination, concurrency analysis, duplicate code merging, tail forwarding, and use of register variables, as well as various MDP-specific optimizations in the code generator. The MDP presents some unique challenges and opportunities for compilation. Due to the MDP's small memory size, it is critical that the size of the generated code be as small as possible. The MDP is an inherently concurrent processor with efficient mechanisms for sending and receiving messages; the compiler takes advantage of these mechanisms. The MDP's tagged architecture allows very efficient support of object-oriented languages such as Concurrent Smalltalk. The initial goals for the MDP were to have the MDP execute about twenty instructions per method and contain 4096 words of memory. This compiler shows that these goals are too optimistic: most methods are longer, both in terms of code size and running time. Thus, the memory size of the MDP should be increased.
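As a flavour of one listed optimization, the sketch below implements constant folding over a toy expression tree: subtrees whose operands are all constants are evaluated at compile time. This is a generic illustration, not Optimist's internal representation.

```python
from dataclasses import dataclass

@dataclass
class Const:
    value: int

@dataclass
class Add:
    left: object
    right: object

def fold(node):
    """Recursively replace Add(Const, Const) with a single Const."""
    if isinstance(node, Add):
        left, right = fold(node.left), fold(node.right)
        if isinstance(left, Const) and isinstance(right, Const):
            return Const(left.value + right.value)
        return Add(left, right)
    return node

print(fold(Add(Const(2), Add(Const(3), Const(4)))))  # Const(value=9)
```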
Abstract:
Several intervals have been proposed to quantify the agreement of two methods intended to measure the same quantity in the situation where only one measurement per method and subject is available. The limits of agreement are probably the best known among these intervals, which are all based on the differences between the two measurement methods. The different meanings of the intervals are not always properly recognized in applications, yet, at least for small-to-moderate sample sizes, the differences between the intervals are substantial. This is illustrated both using the widths of the intervals and on probabilistic scales related to the definitions of the intervals. In particular, for small-to-moderate sample sizes, it is shown that limits of agreement and prediction intervals should not be used to make statements about the distribution of the differences between the two measurement methods or about a plausible range for all future differences. Care should therefore be taken to ensure the correct choice of interval for the intended interpretation.
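To make the contrast concrete, the sketch below computes the classical limits of agreement (mean difference ± 1.96 SD) alongside a t-based prediction interval for a single future difference on invented data; for small n the prediction interval is noticeably wider, which is the kind of discrepancy the abstract warns about.

```python
import numpy as np
from scipy import stats

# Invented paired measurements from two methods on the same subjects.
method_a = np.array([10.1, 9.8, 10.5, 9.9, 10.3, 10.0, 9.7, 10.4])
method_b = np.array([10.4, 9.9, 10.9, 10.1, 10.6, 10.2, 10.0, 10.8])

d = method_a - method_b
n, mean_d, sd_d = len(d), d.mean(), d.std(ddof=1)

# Classical limits of agreement: mean difference +/- 1.96 SD.
loa = (mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d)

# Prediction interval for a single future difference (wider for small n).
t = stats.t.ppf(0.975, df=n - 1)
half = t * sd_d * np.sqrt(1 + 1 / n)
pi = (mean_d - half, mean_d + half)

print("limits of agreement:", loa)
print("prediction interval:", pi)
```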
Abstract:
In this work we analyze an optimal control problem for a system of two hydroelectric power stations in cascade with reversible turbines. The objective is to maximize the profit of power production while respecting the system's restrictions. Some of these restrictions translate into state constraints, and the cost function is nonconvex, which increases the complexity of the optimal control problem. The problem is solved numerically, and two different approaches are adopted: a global optimization technique (the Chen-Burer algorithm) and a projection estimation refinement method (the PER method), which is used as a technique to reduce the dimension of the problem. The results and execution times of the two procedures are compared.
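To show where the state constraints and nonconvexity enter, the toy sketch below discretizes a two-reservoir cascade in time and maximizes a price-weighted production term under volume bounds, using a generic SciPy solver. Every quantity (prices, inflow, dynamics, bounds) is invented, and neither the Chen-Burer algorithm nor the PER method is reproduced.

```python
import numpy as np
from scipy.optimize import minimize

T = 24                                                   # hourly discretization
price = 1 + 0.5 * np.sin(np.linspace(0, 2 * np.pi, T))  # electricity price
inflow = 0.3                                             # inflow, upper reservoir

def volumes(u):
    """Integrate reservoir volumes; station 1 discharges into reservoir 2."""
    u1, u2 = u[:T], u[T:]
    v1 = 5 + np.cumsum(inflow - u1)                      # upper reservoir (state)
    v2 = 5 + np.cumsum(u1 - u2)                          # lower reservoir (state)
    return v1, v2

def neg_profit(u):
    """Negative profit: power approximated as flow times stored head."""
    u1, u2 = u[:T], u[T:]
    v1, v2 = volumes(u)
    return -np.sum(price * (u1 * v1 + u2 * v2))          # nonconvex in (u, v)

# State constraints: both volumes must stay within [1, 10] at every hour.
cons = [{"type": "ineq", "fun": lambda u: np.concatenate(volumes(u)) - 1},
        {"type": "ineq", "fun": lambda u: 10 - np.concatenate(volumes(u))}]

# Reversible turbines: negative flow crudely models pumping (at a cost).
res = minimize(neg_profit, x0=np.full(2 * T, inflow),
               bounds=[(-1, 1)] * (2 * T), constraints=cons)
print("profit:", round(-res.fun, 2))
```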
Abstract:
Proper application of sunscreen is essential as an effective public health strategy for skin cancer prevention. Insufficient application is common among sunbathers; it reduces sun protection and may therefore lead to increased UV damage of the skin. However, no objective measure of sunscreen application thickness (SAT) is currently available for field-based use. We present a method to detect SAT on human skin, for determining the amount of sunscreen applied and thus enabling comparisons with manufacturer recommendations. Using a skin swabbing method and subsequent spectrophotometric analysis, we were able to determine SAT on human skin (in mg of sunscreen per cm² of skin area) through the concentration-absorption relationship of sunscreen established in laboratory experiments. The analysis differentiated SATs between 0.25 and 4 mg cm⁻² and showed a small but significant decrease in concentration over time post-application. A field study was performed in which the heterogeneity of sunscreen application could be investigated. The proposed method is a low-cost, noninvasive method for the determination of SAT on skin, and it can be used as a valid tool in field- and population-based studies.
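The concentration-absorption step can be pictured as an ordinary calibration curve: fit absorbance against known application thicknesses in the laboratory, then invert the fit for a field swab. The numbers below are invented stand-ins, not the paper's calibration data.

```python
import numpy as np

# Laboratory standards: known SAT (mg/cm^2) vs measured absorbance.
sat_std = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
abs_std = np.array([0.06, 0.11, 0.23, 0.47, 0.92])

# Fit a linear calibration line: absorbance = slope * SAT + intercept.
slope, intercept = np.polyfit(sat_std, abs_std, 1)

def sat_from_absorbance(a):
    """Invert the calibration line to recover SAT from an absorbance."""
    return (a - intercept) / slope

field_absorbance = 0.30
print(f"estimated SAT: {sat_from_absorbance(field_absorbance):.2f} mg/cm^2")
```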
Abstract:
Road surface skid resistance has been shown to have a strong relationship to road crash risk; however, the current method of using investigatory levels to identify crash-prone roads is problematic, as it may fail to identify risky roads outside of the norm. The proposed method analyses a complex and formerly impenetrable volume of road and crash data using data mining. It rapidly identifies roads with elevated crash rates, potentially due to skid resistance deficit, for investigation. A hypothetical skid resistance/crash risk curve is developed for each road segment, driven by the model deployed in a novel regression tree extrapolation method. The method potentially solves the problem of missing skid resistance values that occurs during network-wide crash analysis, and allows risk assessment of the major proportion of roads without skid resistance values.
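To illustrate the modelling step, the hedged sketch below fits a regression tree to invented road-segment data and sweeps skid resistance for a fixed segment, tracing the kind of hypothetical skid resistance/crash risk curve the abstract describes. Features, data, and tree settings are all assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
# Features: [skid resistance, traffic volume, curvature]; target: crash rate.
X = rng.uniform([30, 1000, 0], [80, 20000, 1], size=(200, 3))
y = 5 - 0.05 * X[:, 0] + 1e-4 * X[:, 1] + 2 * X[:, 2] + rng.normal(0, 0.3, 200)

tree = DecisionTreeRegressor(max_depth=4).fit(X, y)

# Sweep skid resistance for one segment to trace a skid/crash-risk curve;
# the same model can score segments whose skid value was never measured.
segment = np.column_stack([np.linspace(30, 80, 6),
                           np.full(6, 8000), np.full(6, 0.4)])
print(tree.predict(segment))   # crash-rate estimates along the sweep
```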
Abstract:
Collaborative contracting has emerged over the past 15 years as an innovative project delivery framework that is particularly suited to infrastructure projects. Australia leads the world in the development of project and program alliance approaches to collaborative delivery. These approaches are considered to promise superior project results. However, very little is known about the learning routines that are most widely used in support of collaborative projects in general and alliance projects in particular. The literature on absorptive capacity and dynamic capabilities indicates that such learning enhances project performance. The learning routines employed at corporate level during the operation of collaborative infrastructure projects in Australia were examined through a large survey conducted in 2013. This paper presents a descriptive summary of the preliminary findings. The survey captured the experiences of 320 practitioners of collaborative construction projects, including public and private sector clients, contractors, consultants and suppliers (three per cent of projects were located in New Zealand, but for brevity's sake the sample is referred to as Australian). The majority of projects identified used alliances (78.6%), whilst 9% used Early Contractor Involvement (ECI) contracts and 2.7% used Early Tender Involvement contracts, which are 'slimmer' types of collaborative contract. The remaining 9.7% of respondents used traditional contracts that employed some collaborative elements. The majority of projects were delivered for public sector clients (86.3%), and/or clients experienced with asset procurement (89.6%). All of the projects delivered infrastructure assets: one third in the road sector, one third in the water sector, one fifth in the rail sector, and the rest spread across energy, building and mining. Learning routines were explored within three interconnected phases: knowledge exploration, transformation and exploitation. The results show that explorative and exploitative learning routines were applied to a similar extent, whereas transformative routines were applied to a relatively low extent. It was also found that the most highly applied routine was 'regularly applying new knowledge to collaborative projects', and the least popular routine was 'staff incentives to encourage information sharing about collaborative projects'. Future research planned by the authors will examine the impact of these routines on project performance.
Abstract:
Finite element frame analysis programs targeted at design office applications require algorithms that can deliver reliable numerical convergence in a practical timeframe with comparable degrees of accuracy, and a highly desirable attribute is the use of a single element per member, which reduces computational storage as well as data preparation and the interpretation of results. To this end, this paper addresses a higher-order finite element method including geometric non-linearity for the analysis of elastic frames, in which a single element is used to model each member. The geometric non-linearity in the structure is handled using an updated Lagrangian formulation, which takes into account the effects of the large translations and rotations that occur at the joints by accumulating their nodal coordinates. Rigid body movements are eliminated from the local member load-displacement relationship, for which the total secant stiffness is formulated to evaluate the large member deformations of an element. The influence of the axial force on the member stiffness and the changes in the member chord length are taken into account using a modified bowing function, which is formulated in the total secant stiffness relationship and includes the coupling of axial strain and flexural bowing. The accuracy and efficiency of the technique are verified by comparisons with a number of plane and spatial structures whose structural response has been reported in independent studies.
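As one concrete ingredient of such formulations, the sketch below assembles the tangent stiffness of a plain 2D truss element as material plus geometric (axial-force) parts and accumulates nodal coordinates in updated Lagrangian fashion. It is a generic textbook element, not the paper's higher-order frame element with the modified bowing function.

```python
import numpy as np

def truss_tangent(xy1, xy2, EA, N):
    """Tangent stiffness (4x4) of a 2D truss element in global axes."""
    dx, dy = np.subtract(xy2, xy1)
    L = np.hypot(dx, dy)
    c, s = dx / L, dy / L
    b = np.array([-c, -s, c, s])              # axial direction vector
    z = np.array([-s, c, s, -c])              # transverse direction vector
    k_material = (EA / L) * np.outer(b, b)    # material stiffness
    k_geometric = (N / L) * np.outer(z, z)    # axial-force (geometric) effect
    return k_material + k_geometric

# Updated Lagrangian flavour: accumulate displacements into coordinates.
coords = np.array([[0.0, 0.0], [1.0, 0.0]])
du = np.array([[0.0, 0.0], [0.001, 0.002]])   # iteration increment
coords += du                                   # nodal coordinate update
K_t = truss_tangent(coords[0], coords[1], EA=2.1e8, N=5.0e3)
print(K_t.round(1))
```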
Abstract:
This study extends the 'zero scan' method for CT imaging of polymer gel dosimeters to include multi-slice acquisitions. Multi-slice CT images consisting of 24 slices of 1.2 mm thickness were acquired of an irradiated polymer gel dosimeter and processed with the zero scan technique. The results demonstrate that zero scan based gel readout can be successfully applied to generate a three-dimensional image of the irradiated gel field. Compared to the raw CT images, the processed images and cross-gel profiles demonstrated reduced noise and clear visibility of the penumbral region. Moreover, these improved results further highlight the suitability of this method for volumetric reconstruction with reduced CT data acquisition per slice. This work shows that 3D volumes of irradiated polymer gel dosimeters can be acquired and processed with x-ray CT.
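For readers unfamiliar with the readout, the zero-scan idea can be sketched as a per-voxel linear fit against scan number whose intercept is kept as the noise-reduced "zeroth" image. The synthetic example below assumes a simple linear per-scan drift; real acquisition details are not modelled.

```python
import numpy as np

n_scans, shape = 20, (64, 64)
rng = np.random.default_rng(1)
true_image = rng.uniform(0, 50, shape)          # underlying gel response
drift = 0.1                                      # assumed per-scan drift

# Repeated noisy scans of the same slice, each shifted by the drift.
scans = np.stack([true_image + drift * k + rng.normal(0, 5, shape)
                  for k in range(1, n_scans + 1)])

# Per-voxel linear fit value = a*k + b; the intercept b is the zero-scan image.
k = np.arange(1, n_scans + 1)
A = np.vstack([k, np.ones_like(k)]).T            # design matrix
coef, *_ = np.linalg.lstsq(A, scans.reshape(n_scans, -1), rcond=None)
zero_scan = coef[1].reshape(shape)

print("raw-scan noise:", np.std(scans[0] - true_image).round(2))
print("zero-scan noise:", np.std(zero_scan - true_image).round(2))
```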
Abstract:
Largely as a result of mass unemployment problems in many European countries, the dynamics of job creation has in recent years attracted increased interest on the part of academics as well as policy-makers. In connection with this, a large number of studies carried out in various countries have concluded that SMEs play a very large and/or growing role as job creators (Birch, 1979; Baldwin and Picot, 1995; Davidsson, 1995a; Davidsson, Lindmark and Olofsson, 1993; 1994; 1995; 1997a; 1997b; Fumagelli and Mussati, 1993; Kirchhoff and Phillips, 1988; Spilling, 1995; for further reference to studies carried out in a large number of countries see also Aiginger and Tichy, 1991; ENSR, 1994; Loveman and Sengenberger, 1991; OECD, 1987; Storey and Johnson, 1987). While most researchers agree on the importance of SMEs, there is some controversy as to whether this is mainly a result of many small start-ups and incremental expansions, or whether a small minority of high-growth SMEs contributes the lion's share of new employment. This is known as the 'mice vs. gazelles' or 'flyers vs. trundlers' debate. Storey strongly advocates the position that the small group of high-growth SMEs are the 'real' job creators (Storey, 1994; Storey & Johnson, 1987), whereas, e.g., the Davidsson et al. research in Sweden (cf. above) gives more support to the 'mice' hypothesis.
Abstract:
Spatially explicit modelling of grassland classes is important for site-specific planning aimed at improving grassland and environmental management over large areas. In this study, a climate-based grassland classification model, the Comprehensive and Sequential Classification System (CSCS), was integrated with spatially interpolated climate data to classify grassland in Gansu province, China. The study area is characterized by complex topographic features imposed by plateaus, high mountains, basins and deserts. To improve the quality of the interpolated climate data and of the spatial classification over this complex topography, three linear regression methods for interpolating climate variables were evaluated: an analytic method based on multiple regression and residues (AMMRR); a modification of AMMRR that adds the effects of slope and aspect to the interpolation analysis (M-AMMRR); and a method which replaces the IDW approach for residue interpolation in M-AMMRR with an ordinary kriging approach (I-AMMRR). The interpolation outcomes from the best interpolation method were then used in the CSCS model to classify the grassland in the study area. The interpolated climate variables were the annual cumulative temperature and the annual total precipitation. The results indicated that the AMMRR and M-AMMRR methods generated acceptable climate surfaces, but the best model fit and cross-validation result were achieved by the I-AMMRR method. Twenty-six grassland classes were identified for the study area. The four grassland vegetation classes that covered more than half of the total study area were "cool temperate-arid temperate zonal semi-desert", "cool temperate-humid forest steppe and deciduous broad-leaved forest", "temperate-extra-arid temperate zonal desert", and "frigid per-humid rain tundra and alpine meadow". The vegetation classification map generated in this study provides spatial information on the locations and extents of the different grassland classes. This information can be used to facilitate government agencies' decision-making in land-use planning and environmental management, and for vegetation and biodiversity conservation. It can also assist land managers in estimating safe carrying capacities, which will help to prevent overgrazing and land degradation.
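The AMMRR family shares one structure: regress the climate variable on location covariates, then spatially interpolate the regression residuals. The sketch below uses invented station data and plain IDW for the residual step (the AMMRR/M-AMMRR choice); I-AMMRR would substitute ordinary kriging.

```python
import numpy as np

rng = np.random.default_rng(2)
# Stations: longitude, latitude, elevation; observed annual precipitation.
X = rng.uniform([100, 32, 500], [108, 42, 4000], size=(50, 3))
y = 800 - 0.15 * X[:, 2] + 20 * (X[:, 1] - 32) + rng.normal(0, 30, 50)

# Step 1: multiple linear regression of the climate variable on covariates.
A = np.column_stack([X, np.ones(len(X))])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
residuals = y - A @ beta

def idw(target_xy, station_xy, values, power=2):
    """Inverse-distance-weighted residual at a target location."""
    d = np.linalg.norm(station_xy - target_xy, axis=1) + 1e-9
    w = d ** -power
    return np.sum(w * values) / np.sum(w)

# Step 2: trend prediction + interpolated residual at an unsampled point.
target = np.array([104.0, 36.0, 2200.0])
trend = np.append(target, 1) @ beta
pred = trend + idw(target[:2], X[:, :2], residuals)
print(f"interpolated precipitation: {pred:.1f} mm")
```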
Abstract:
The increasing importance and use of infrastructure such as bridges demands more effective structural health monitoring (SHM) systems. SHM addresses damage detection through several methods, such as modal strain energy (MSE). Many of the available MSE methods, however, have either been validated only for limited types of structures, such as beams, or have unsatisfactory performance; they therefore require further improvement and validation for different types of structures. In this study, an MSE method was mathematically improved to precisely quantify structural damage at an early stage of formation. Initially, the MSE equation was accurately formulated by considering the damaged stiffness, and it was then used to derive a more accurate sensitivity matrix. The improved method was verified on two plane structures: a steel truss bridge model and a concrete frame bridge model, representative of short- and medium-span bridges. Two damage scenarios, single- and multiple-damage, were considered in each structure. For each structure, both intact and damaged, modal analysis was performed using STRAND7. The effects of up to 5 per cent noise were also included. The simulated mode shapes and natural frequencies were then imported into a MATLAB code. The results indicate that the improved method converges quickly and agrees well with the numerical assumptions within a few computational cycles, and it also performs well in the presence of noise. The findings of this study can be numerically extended to 2D infrastructure, particularly short- and medium-span bridges, to detect and quantify damage more accurately. The method is capable of providing proper SHM that facilitates timely maintenance of bridges, minimising the possible loss of lives and property.
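For orientation, the quantity such methods build on is the modal strain energy of element i in mode j, MSE_ij = φ_jᵀ K_i φ_j, with K_i the element stiffness expanded to global coordinates. The sketch below computes it for a toy three-spring chain; it is the textbook quantity only, not the paper's improved sensitivity formulation.

```python
import numpy as np

k = [100.0, 100.0, 100.0]                 # element (spring) stiffnesses
dof = 3

def element_K(i, ki):
    """Global-size stiffness contribution of spring i in a fixed-free chain."""
    K = np.zeros((dof, dof))
    if i == 0:
        K[0, 0] = ki                      # spring anchored to the ground
    else:
        idx = [i - 1, i]
        K[np.ix_(idx, idx)] = ki * np.array([[1, -1], [-1, 1]])
    return K

K_global = sum(element_K(i, ki) for i, ki in enumerate(k))
# Unit masses, so the eigenproblem reduces to K alone.
w2, phi = np.linalg.eigh(K_global)

# MSE of each element in the first mode; damage in an element lowers it.
mse = [phi[:, 0] @ element_K(i, ki) @ phi[:, 0] for i, ki in enumerate(k)]
print(np.round(mse, 4))
```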