910 results for "Software of dinamic geometry"
Using simulation to determine the sensibility of error sources for software effort estimation models
Abstract:
The SPE taxonomy of evolving software systems, first proposed by Lehman in 1980, is re-examined in this work. The primary concepts of software evolution are related to generic theories of evolution, particularly Dawkins' concept of a replicator, to the hermeneutic tradition in philosophy and to Kuhn's concept of paradigm. These concepts provide the foundations that are needed for understanding the phenomenon of software evolution and for refining the definitions of the SPE categories. In particular, this work argues that a software system should be defined as of type P if its controlling stakeholders have made a strategic decision that the system must comply with a single paradigm in its representation of domain knowledge. The proposed refinement of SPE is expected to provide a more productive basis for developing testable hypotheses and models about possible differences in the evolution of E- and P-type systems than is provided by the original scheme. Copyright (C) 2005 John Wiley & Sons, Ltd.
An empirical study of process-related attributes in segmented software cost-estimation relationships
Abstract:
Parametric software effort estimation models consisting of a single mathematical relationship suffer from poor adjustment and predictive characteristics when the historical database contains data coming from projects of a heterogeneous nature. Segmenting the input domain according to clusters obtained from the database of historical projects serves as a tool for more realistic models that use several local estimation relationships. Nonetheless, it may be hypothesized that using clustering algorithms without prior consideration of the influence of well-known project attributes misses the opportunity to obtain more realistic segments. In this paper, we describe the results of an empirical study using the ISBSG-8 database and the EM clustering algorithm that examines the influence of two process-related attributes as drivers of the clustering process: the use of engineering methodologies and the use of CASE tools. The results provide evidence that considering these attributes significantly conditions the final model obtained, even though the resulting predictive quality is of a similar magnitude.
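A minimal sketch of the segmented-estimation idea described above, assuming a synthetic project table and scikit-learn's GaussianMixture as the EM implementation; the column names, cluster count, and log-log local relationships are illustrative choices, not the study's actual ISBSG-8 pipeline:

```python
# Sketch: cluster historical projects with an EM-based Gaussian mixture,
# then fit one local effort~size relationship per cluster.
# Data and column choices are placeholder assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical historical database: functional size, two process-related
# flags (use of a methodology, use of CASE tools) and recorded effort.
size = rng.uniform(50, 2000, 300)
methodology = rng.integers(0, 2, 300)
case_tools = rng.integers(0, 2, 300)
effort = 5.0 * size ** 0.9 * (0.8 + 0.4 * methodology) + rng.normal(0, 200, 300)

# Clustering driven by size plus the two process-related attributes.
X = np.column_stack([np.log(size), methodology, case_tools])
labels = GaussianMixture(n_components=3, random_state=0).fit_predict(X)

# One local (log-log) estimation relationship per segment.
for k in np.unique(labels):
    mask = labels == k
    reg = LinearRegression().fit(np.log(size[mask]).reshape(-1, 1),
                                 np.log(effort[mask]))
    print(f"cluster {k}: effort ~ {np.exp(reg.intercept_):.1f} * size^{reg.coef_[0]:.2f}")
```

Re-running the clustering with and without the two process-related flags in X is the kind of comparison the study describes.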
Abstract:
A combination of photoelectron spectroscopy, temperature programmed desorption and low energy electron diffraction structure determinations has been applied to study the p(2 x 2) structures of pure hydrogen and of co-adsorbed hydrogen and CO on Ni{111}. In agreement with earlier work, atomic hydrogen is found to adsorb on fcc and hcp sites in the pure layer with H-Ni bond lengths of 1.74 Angstrom. The substrate interlayer distances, d(12) = 2.05 Angstrom and d(23) = 2.06 Angstrom, are expanded with respect to clean Ni{111}, with a buckling of 0.04 Angstrom in the first layer. In the co-adsorbed phase CO occupies hcp sites and only the hydrogen atoms on fcc sites remain on the surface. d(12) is further expanded to 2.08 Angstrom, with buckling in the first and second layers of 0.06 and 0.02 Angstrom, respectively. The C-O, C-Ni and H-Ni bond lengths are within the range of values also found for the pure adsorbates.
Abstract:
Low energy electron diffraction (LEED) structure determinations have been performed for the p(2 x 2) structures of pure oxygen and of oxygen co-adsorbed with CO on Ni{111}. Optimisation of the non-geometric parameters led to very good agreement between experimental and theoretical IV-curves and hence to high accuracy in the structural parameters. In agreement with earlier work, atomic oxygen is found to adsorb on fcc sites in both structures. In the co-adsorbed phase CO occupies atop sites. The positions of the substrate atoms are almost identical, within 0.02 Angstrom, in both structures, implying that the interaction with oxygen dominates the arrangement of Ni atoms at the surface.
Abstract:
Chemisorbed layers of lysine adsorbed on Cu{110} have been studied using X-ray photoelectron spectroscopy (XPS) and near-edge X-ray absorption fine structure (NEXAFS) spectroscopy. XPS indicates that the majority (70%) of the molecules in the saturated layer at room temperature (coverage 0.27 ML) are in their zwitterionic state with no preferential molecular orientation. After annealing to 420 K a less densely packed layer is formed (0.14 ML), which shows a strong angular dependence in the characteristic π-resonance of the oxygen K edge NEXAFS and no indication of zwitterions in XPS. These experimental results are most compatible with molecules bound to the substrate through the oxygen atoms of the (deprotonated) carboxylate group and the two amino groups, involving Cu atoms in three different close-packed rows. This μ4 bonding arrangement, with an additional bond through the ε-amino group, is different from geometries previously suggested for lysine on Cu{110}.
Abstract:
The Perspex Machine arose from the unification of computation with geometry. We now report significant redevelopment of both a partial C compiler that generates perspex programs and a Graphical User Interface (GUI). The compiler is constructed with standard compiler-generator tools and produces both an explicit parse tree for C and an Abstract Syntax Tree (AST) that is better suited to code generation. The GUI uses a hash table and a simpler software architecture to achieve an order-of-magnitude speed-up in processing and, consequently, an order-of-magnitude increase in the number of perspexes that can be manipulated in real time (now 6,000). Two perspex-machine simulators are provided, one using trans-floating-point arithmetic and the other using transrational arithmetic. All of the software described here is available on the world wide web. The compiler generates code in the neural model of the perspex. At each branch point it uses a jumper to return control to the main fibre. This has the effect of pruning out an exponentially increasing number of branching fibres, thereby greatly increasing the efficiency of perspex programs as measured by the number of neurons required to implement an algorithm. The jumpers are placed at unit distance from the main fibre and form a geometrical structure analogous to a myelin sheath in a biological neuron. Both the perspex jumper-sheath and the biological myelin sheath share the computational function of preventing cross-over of signals to neurons that lie close to an axon. This is an example of convergence driven by similar geometrical and computational constraints in perspex and biological neurons.
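A toy illustration of the pruning argument above, under the simplifying assumption that every branch point splits a fibre in two and that a jumper immediately returns both sides to the main fibre; the counting is ours, not the perspex compiler's actual accounting:

```python
# Compare "neuron" counts with and without jumpers at branch points.
# Assumption (for illustration only): each branch point doubles the number
# of live fibres unless a jumper returns control to the single main fibre.

def neurons_without_jumpers(branch_points: int) -> int:
    # Every unresolved branch doubles the live fibres, so the fibres created
    # sum to 2 + 4 + ... + 2**branch_points (exponential growth).
    return sum(2 ** k for k in range(1, branch_points + 1))

def neurons_with_jumpers(branch_points: int) -> int:
    # Each branch adds one side fibre plus one jumper neuron before control
    # returns to the main fibre, so growth is linear in branch points.
    return 2 * branch_points

for d in (4, 8, 16):
    print(f"{d} branch points: {neurons_without_jumpers(d):>6} without jumpers, "
          f"{neurons_with_jumpers(d):>3} with jumpers")
```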
Abstract:
Passive samplers have predominantly been used to monitor environmental conditions in single volumes. However, measurements using a calibrated passive sampler, a Solid Phase Microextraction (SPME) fibre, in three houses with cold pitched roofs successfully demonstrated the potential of the SPME fibre as a device for monitoring air movement between two volumes. The roofs monitored were pitched at 15°-30°, with insulation thickness varying between 200 and 300 mm on the ceiling. For effective analysis, two constant sources of volatile organic compounds were diffused steadily in the house. Emission rates and air movement from the house to the roof were predicted using developed algorithms. The airflow rates, which were calibrated against conventional tracer gas techniques, were introduced into a HAM software package to predict the effects of air movement on other varying parameters. On average, the in situ measurements showed that about 20-30% of the air entering the three houses left through gaps and cracks in the ceiling into the roof. Although these field measurements focus on the airflows, they are associated with energy benefits: if these flows are reduced, energy losses would also be reduced significantly (as modelled), consequently improving the energy efficiency of the house. Other results illustrated that condensation formation risks depended on the airtightness of the building envelopes, including the configuration of their roof constructions.
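A toy steady-state, two-zone mass balance of the kind such constant-injection measurements rely on; the emission rates, concentrations, and one-tracer-per-zone layout below are invented for illustration and do not reproduce the study's algorithms or the SPME calibration:

```python
# Solve a steady-state two-zone (house/roof) tracer balance for the four
# unknown airflows, assuming zero outdoor tracer concentration and perfect
# mixing. All numbers are placeholder assumptions.
import numpy as np

m_A, m_B = 10.0, 5.0      # constant emission rates, mg/h (tracer A in house, B in roof)
CA_h, CA_r = 2.0, 0.5     # steady-state tracer A concentrations, mg/m^3
CB_h, CB_r = 0.1, 1.0     # steady-state tracer B concentrations, mg/m^3

# Unknowns (m^3/h): Q_hr house->roof, Q_rh roof->house,
#                   Q_he house->outside, Q_re roof->outside.
A = np.array([
    [-CA_h,  CA_r, -CA_h,  0.0 ],   # tracer A balance in the house
    [ CA_h, -CA_r,  0.0,  -CA_r],   # tracer A balance in the roof
    [-CB_h,  CB_r, -CB_h,  0.0 ],   # tracer B balance in the house
    [ CB_h, -CB_r,  0.0,  -CB_r],   # tracer B balance in the roof
])
b = np.array([-m_A, 0.0, 0.0, -m_B])
Q_hr, Q_rh, Q_he, Q_re = np.linalg.solve(A, b)

ceiling_fraction = Q_hr / (Q_hr + Q_he)  # share of house air leaving via the ceiling
print(f"house->roof flow: {Q_hr:.2f} m^3/h, "
      f"fraction leaving through the ceiling: {ceiling_fraction:.0%}")
```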
Abstract:
A new electronic software distribution (ESD) life cycle analysis (LCA) methodology and model structure were constructed to calculate energy consumption and greenhouse gas (GHG) emissions. In order to counteract the use of high-level, top-down modeling efforts, and to increase result accuracy, a focus upon device details and data routes was taken. In order to compare ESD to a relevant physical distribution alternative, physical model boundaries and variables were described. The methodology was compiled from the analysis and operational data of a major online store which provides ESD and physical distribution options. The ESD method included the calculation of the power consumption of data center server and networking devices. An in-depth method to calculate server efficiency and utilization was also included to account for virtualization and server efficiency features. Internet transfer power consumption was analyzed taking into account the number of data hops and networking devices used. The power consumed by online browsing and downloading was also factored into the model. The embedded CO2e of server and networking devices was proportioned to each ESD process. Three U.K.-based ESD scenarios were analyzed using the model, which revealed potential CO2e savings of 83% when ESD was used over physical distribution. Results also highlighted the importance of server efficiency and utilization methods.
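A back-of-the-envelope sketch of the per-download accounting the abstract describes (server share, per-hop network transfer, client browsing, apportioned embedded CO2e); every figure below is a placeholder assumption, not a value from the study:

```python
# Sketch of per-download ESD energy and CO2e versus a physical alternative.
# All coefficients are invented placeholders for illustration.

GB_PER_DOWNLOAD = 1.5
GRID_KG_CO2E_PER_KWH = 0.5           # assumed grid emission factor

def server_kwh(gb, kwh_per_gb_served=0.02, utilisation=0.4):
    # Apportion server power by the share of useful work; low utilisation
    # inflates the energy attributed to each delivered gigabyte.
    return gb * kwh_per_gb_served / utilisation

def network_kwh(gb, hops=12, kwh_per_gb_per_hop=0.005):
    # Transfer energy scales with the number of data hops traversed.
    return gb * hops * kwh_per_gb_per_hop

def client_kwh(download_hours=0.5, device_kw=0.06):
    # Online browsing and downloading on the user's own device.
    return download_hours * device_kw

esd_kwh = server_kwh(GB_PER_DOWNLOAD) + network_kwh(GB_PER_DOWNLOAD) + client_kwh()
esd_kg = esd_kwh * GRID_KG_CO2E_PER_KWH + 0.01   # + apportioned embedded CO2e (assumed)

physical_kg = 0.45                               # assumed packaged-media + delivery footprint
print(f"ESD: {esd_kg:.3f} kg CO2e vs physical: {physical_kg:.3f} kg CO2e "
      f"({1 - esd_kg / physical_kg:.0%} saving)")
```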