986 results for Notion of code


Relevance:

100.00%

Abstract:

Java Enterprise Applications (JEAs) are complex systems composed using various technologies that in turn rely on languages other than Java, such as XML or SQL. Given the complexity of these applications, the need to reverse engineer them in order to support further development becomes critical. In this paper we show how it is possible to split a system into layers and how it is possible to interpret the distance between application elements in order to support the refactoring of JEAs. The purpose of this paper is to explore ways to provide suggestions about the refactoring operations to perform on the code by evaluating the distance between layers and elements belonging to those layers. We split JEAs into layers by considering the kinds and the purposes of the elements composing the application. We measure distance between elements by using the notion of the shortest path in a graph. We also present how to enrich the interpretation of the distance value with enterprise pattern detection in order to refine the suggestions about modifications to perform on the code.
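The distance measure this abstract describes (shortest path in a graph of application elements) can be illustrated with a minimal breadth-first search; the element names, layer grouping, and dependency edges below are hypothetical, not taken from the paper.

```python
# Sketch: application elements as graph nodes, dependencies as edges;
# the distance between two elements is the shortest-path hop count.
from collections import deque

def shortest_path_length(graph, source, target):
    """Breadth-first search; returns hop count, or None if unreachable."""
    if source == target:
        return 0
    seen = {source}
    queue = deque([(source, 0)])
    while queue:
        node, dist = queue.popleft()
        for neighbour in graph.get(node, ()):
            if neighbour == target:
                return dist + 1
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None

# Hypothetical JEA elements spanning presentation, business and data layers.
dependencies = {
    "OrderServlet": ["OrderService"],
    "OrderService": ["OrderDAO"],
    "OrderDAO": ["OrdersTable"],
}
print(shortest_path_length(dependencies, "OrderServlet", "OrdersTable"))  # 3
```

A large distance between elements of adjacent layers would then be one signal that a refactoring suggestion applies.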

Relevance:

100.00%

Abstract:

The goal of the present thesis was to investigate the production of code-switched utterances in bilingual speech. This study investigates the availability of grammatical-category information during bilingual language processing. The specific aim is to examine the processes involved in the production of Persian-English bilingual compound verbs (BCVs). A bilingual compound verb is formed when the nominal constituent of a compound verb is replaced by an item from the other language. In the present cases of BCVs, the nominal constituents are replaced by a verb from the other language. The main question addressed is how a lexical element corresponding to a verb node can be placed in a slot that corresponds to a noun lemma. This study also investigates how the production of BCVs might be captured within a model of BCVs and how such a model may be integrated within incremental network models of speech production. In the present study, both naturalistic and experimental data were used to investigate the processes involved in the production of BCVs. In the first part of the present study, I collected 2298 minutes of a popular Iranian TV program and found 962 code-switched utterances. In 83 (8%) of the switched cases, insertions occurred within the Persian compound verb structure, thus resulting in BCVs. In the second part of this study, a picture-word interference experiment was conducted. This study addressed whether, in the production of Persian-English BCVs, English verbs compete with the corresponding Persian compound verbs as a whole, or whether English verbs compete with the nominal constituents of Persian compound verbs only. Persian-English bilinguals named pictures depicting actions in four conditions in Persian (L1). In condition 1, participants named pictures of actions using the whole Persian compound verb in the context of its English equivalent distractor verb.
In condition 2, only the nominal constituent was produced in the presence of the light verb of the target Persian compound verb and in the context of a semantically closely related English distractor verb. In condition 3, the whole Persian compound verb was produced in the context of a semantically unrelated English distractor verb. In condition 4, only the nominal constituent was produced in the presence of the light verb of the target Persian compound verb and in the context of a semantically unrelated English distractor verb. The main effect of linguistic unit was significant by participants and items. Naming latencies were longer in the nominal linguistic unit compared to the compound verb (CV) linguistic unit. That is, participants were slower to produce the nominal constituent of compound verbs in the context of a semantically closely related English distractor verb compared to producing the whole compound verbs in the context of a semantically closely related English distractor verb. The three-way interaction between version of the experiment (CV and nominal versions), linguistic unit (nominal and CV linguistic units), and relation (semantically related and unrelated distractor words) was significant by participants. In both versions, naming latencies were longer in the semantically related nominal linguistic unit compared to the response latencies in the semantically related CV linguistic unit. In both versions, naming latencies were longer in the semantically related nominal linguistic unit compared to response latencies in the semantically unrelated nominal linguistic unit. 
Both the analysis of the naturalistic data and the results of the experiment revealed that in the case of the production of the nominal constituent of BCVs, a verb from the other language may compete with a noun from the base language, suggesting that grammatical category does not necessarily provide a constraint on lexical access during the production of the nominal constituent of BCVs. There was a minimal context in condition 2 (the nominal linguistic unit) in which the nominal constituent was produced in the presence of its corresponding light verb. The results suggest that generating words within a context may not guarantee that the effect of grammatical class becomes available. A model is proposed in order to characterize the processes involved in the production of BCVs. Implications for models of bilingual language production are discussed.

Relevance:

100.00%

Abstract:

Oxygen and carbon isotope ratios in benthic foraminifers have been determined at 10 cm intervals through the top 59 m of DSDP Hole 552A. This provides a glacial record of remarkable resolution for the late Pliocene and Pleistocene. The major glacial event that marked the onset of Pleistocene-like glacial-interglacial alternations occurred about 2.4 m.y. ago. These very high-resolution data do not support the notion of significant Northern Hemisphere glaciation between 3.2 and 2.4 m.y. ago.

Relevance:

100.00%

Abstract:

We present iron (Fe) concentration and Fe isotope data for a sediment core transect across the Peru upwelling area, which hosts one of the ocean's most pronounced oxygen minimum zones (OMZs). The lateral progression of total Fe to aluminum ratios (FeT/Al) across the continental margin indicates that sediments within the OMZ are depleted in Fe whereas sediments below the OMZ are enriched in Fe relative to the lithogenic background. Rates of Fe loss within the OMZ, as inferred from FeT/Al ratios and sedimentation rates, are in agreement with benthic flux data that were calculated from pore water concentration gradients. The mass of Fe lost from sediments within the OMZ is of the same order of magnitude as the mass of Fe accumulating below the OMZ. Taken together, our data are in agreement with a shuttle scenario where Fe is reductively remobilized from sediments within the OMZ, laterally transported within the anoxic water column and re-precipitated within the more oxic water below the OMZ. Sediments within the OMZ have increased 56Fe/54Fe isotope ratios relative to the lithogenic background, which is consistent with the general notion of benthic release of dissolved Fe with a relatively low 56Fe/54Fe isotope ratio. The Fe isotope ratios increase across the margin and the highest values coincide with the greatest Fe enrichment in sediments below the OMZ. The apparent mismatch in isotope composition between the Fe that is released within the OMZ and Fe that is re-precipitated below the OMZ implies that only a fraction of the sediment-derived Fe is retained near-shore whereas another fraction is transported further offshore. We suggest that a similar open-marine shuttle is likely to operate along many ocean margins.
The observed sedimentary fingerprint of the open-marine Fe shuttle differs from a related transport mechanism in isolated euxinic basins (e.g., the Black Sea) where the laterally supplied, reactive Fe is quantitatively captured within the basin sediments. We suggest that our findings are useful to identify OMZ-type Fe cycling in the geological record.

Relevance:

100.00%

Abstract:

We report on a detailed study of the application and effectiveness of program analysis based on abstract interpretation to automatic program parallelization. We study the case of parallelizing logic programs using the notion of strict independence. We first propose and prove correct a methodology for the application in the parallelization task of the information inferred by abstract interpretation, using a parametric domain. The methodology is generic in the sense of allowing the use of different analysis domains. A number of well-known approximation domains are then studied and the transformation into the parametric domain defined. The transformation directly illustrates the relevance and applicability of each abstract domain for the application. Both local and global analyzers are then built using these domains and embedded in a complete parallelizing compiler. Then, the performance of the domains in this context is assessed through a number of experiments. A comparatively wide range of aspects is studied, from the resources needed by the analyzers in terms of time and memory to the actual benefits obtained from the information inferred. Such benefits are evaluated both in terms of the characteristics of the parallelized code and of the actual speedups obtained from it. The results show that data flow analysis plays an important role in achieving efficient parallelizations, and that the cost of such analysis can be reasonable even for quite sophisticated abstract domains. Furthermore, the results also offer significant insight into the characteristics of the domains, the demands of the application, and the trade-offs involved.
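The strict-independence condition used above for parallelizing logic programs can be illustrated with a toy check: two goals may be scheduled in parallel when they share no variables. This is a deliberate simplification of the runtime notion (which also accounts for groundness of shared arguments), not the paper's analyzer.

```python
def strictly_independent(goal_a_vars, goal_b_vars):
    """Two goals are strictly independent when they share no variables,
    so they can run in parallel without producing conflicting bindings."""
    return set(goal_a_vars).isdisjoint(goal_b_vars)

# p(X, Y) and q(Y, Z) share Y: they must run sequentially.
print(strictly_independent({"X", "Y"}, {"Y", "Z"}))  # False
# p(X) and q(Z) share nothing: safe to run in parallel.
print(strictly_independent({"X"}, {"Z"}))            # True
```

The role of the abstract-interpretation analysis in the paper is precisely to infer, at compile time, which variables may be shared or unbound so that such checks can be decided statically.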

Relevance:

100.00%

Abstract:

Abstraction-Carrying Code (ACC) has recently been proposed as a framework for mobile code safety in which the code supplier provides a program together with an abstraction whose validity entails compliance with a predefined safety policy. The abstraction thus plays the role of safety certificate and its generation is carried out automatically by a fixed-point analyzer. The advantage of providing a (fixed-point) abstraction to the code consumer is that its validity is checked in a single pass of an abstract interpretation-based checker. A main challenge is to reduce the size of certificates as much as possible while at the same time not increasing checking time. We introduce the notion of reduced certificate which characterizes the subset of the abstraction which a checker needs in order to validate (and reconstruct) the full certificate in a single pass. Based on this notion, we instrument a generic analysis algorithm with the necessary extensions in order to identify the information relevant to the checker. We also provide a correct checking algorithm together with sufficient conditions for ensuring its completeness. The experimental results within the CiaoPP system show that our proposal is able to greatly reduce the size of certificates in practice.

Relevance:

100.00%

Abstract:

Abstraction-Carrying Code (ACC) is a framework for mobile code safety in which the code supplier provides a program together with an abstraction (or abstract model of the program) whose validity entails compliance with a predefined safety policy. The abstraction thus plays the role of safety certificate and its generation is carried out automatically by a fixed-point analyzer. The advantage of providing a (fixed-point) abstraction to the code consumer is that its validity is checked in a single pass (i.e., one iteration) of an abstract interpretation-based checker. A main challenge to make ACC useful in practice is to reduce the size of certificates as much as possible, while at the same time not increasing checking time. Intuitively, we only include in the certificate the information which the checker is unable to reproduce without iterating. We introduce the notion of reduced certificate which characterizes the subset of the abstraction which a checker needs in order to validate (and reconstruct) the full certificate in a single pass. Based on this notion, we show how to instrument a generic analysis algorithm with the necessary extensions in order to identify the information relevant to the checker.

Relevance:

100.00%

Abstract:

Abstraction-Carrying Code (ACC) has recently been proposed as a framework for mobile code safety in which the code supplier provides a program together with an abstraction (or abstract model of the program) whose validity entails compliance with a predefined safety policy. The abstraction thus plays the role of safety certificate and its generation is carried out automatically by a fixed-point analyzer. The advantage of providing a (fixed-point) abstraction to the code consumer is that its validity is checked in a single pass (i.e., one iteration) of an abstract interpretation-based checker. A main challenge to make ACC useful in practice is to reduce the size of certificates as much as possible while at the same time not increasing checking time. The intuitive idea is to only include in the certificate information that the checker is unable to reproduce without iterating. We introduce the notion of reduced certificate which characterizes the subset of the abstraction which a checker needs in order to validate (and reconstruct) the full certificate in a single pass. Based on this notion, we instrument a generic analysis algorithm with the necessary extensions in order to identify information which can be reconstructed by the single-pass checker. Finally, we study what the effects of reduced certificates are on the correctness and completeness of the checking process. We provide a correct checking algorithm together with sufficient conditions for ensuring its completeness. Our ideas are illustrated through a running example, implemented in the context of constraint logic programs, which shows that our approach improves state-of-the-art techniques for reducing the size of certificates.

Relevance:

100.00%

Abstract:

Abstraction-Carrying Code (ACC) has recently been proposed as a framework for mobile code safety in which the code supplier provides a program together with an abstraction (or abstract model of the program) whose validity entails compliance with a predefined safety policy. The abstraction thus plays the role of safety certificate and its generation is carried out automatically by a fixpoint analyzer. The advantage of providing a (fixpoint) abstraction to the code consumer is that its validity is checked in a single pass (i.e., one iteration) of an abstract interpretation-based checker. A main challenge to make ACC useful in practice is to reduce the size of certificates as much as possible while at the same time not increasing checking time. The intuitive idea is to only include in the certificate information that the checker is unable to reproduce without iterating. We introduce the notion of reduced certificate which characterizes the subset of the abstraction which a checker needs in order to validate (and reconstruct) the full certificate in a single pass. Based on this notion, we instrument a generic analysis algorithm with the necessary extensions in order to identify the information relevant to the checker. Interestingly, the fact that the reduced certificate omits (parts of) the abstraction has implications in the design of the checker. We provide the sufficient conditions which allow us to ensure that 1) if the checker succeeds in validating the certificate, then the certificate is valid for the program (correctness) and 2) the checker will succeed for any reduced certificate which is valid (completeness). Our approach has been implemented and benchmarked within the CiaoPP system. The experimental results show that our proposal is able to greatly reduce the size of certificates in practice. To appear in Theory and Practice of Logic Programming (TPLP).
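The single-pass checking idea shared by these ACC abstracts can be sketched as follows: the supplier iterates a monotone transfer function to a fixpoint (the certificate), while the consumer only needs one application of the function to confirm the certificate is stable. The transfer function below is an arbitrary toy example, not taken from any of the papers.

```python
def analyze(transfer, bottom):
    """Supplier side: iterate the transfer function to its least fixpoint.
    The resulting abstraction is the certificate shipped with the code."""
    state = bottom
    while True:
        next_state = transfer(state)
        if next_state == state:
            return state
        state = next_state

def check(transfer, certificate):
    """Consumer side: a single application suffices -- no iteration."""
    return transfer(certificate) == certificate

# Toy transfer function: set of program points reachable from entry point 0.
transfer = lambda points: {0} | {p + 1 for p in points if p < 3}
certificate = analyze(transfer, set())
print(certificate)                   # {0, 1, 2, 3}
print(check(transfer, certificate))  # True
print(check(transfer, {0}))          # False: not a fixpoint, so rejected
```

The reduced-certificate idea then amounts to shipping only the parts of `certificate` that the checker cannot recompute in that single pass.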

Relevance:

100.00%

Abstract:

Some verification and validation techniques have been evaluated both theoretically and empirically. Most empirical studies have been conducted without subjects, passing over any effect testers have when they apply the techniques. We have run an experiment with students to evaluate the effectiveness of three verification and validation techniques (equivalence partitioning, branch testing and code reading by stepwise abstraction). We have studied how well the techniques are able to reveal defects in three programs. We have replicated the experiment eight times at different sites. Our results show that equivalence partitioning and branch testing are equally effective and better than code reading by stepwise abstraction. The effectiveness of code reading by stepwise abstraction varies significantly from program to program. Finally, we have identified project contextual variables that should be considered when applying any verification and validation technique or choosing one particular technique.
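Equivalence partitioning, one of the techniques compared in this experiment, selects one representative input per partition of the input domain rather than testing exhaustively. The function and partitions below are purely illustrative, not taken from the study's three programs.

```python
def classify_age(age):
    """Toy function under test: partitions are invalid (< 0),
    minor (0-17) and adult (>= 18)."""
    if age < 0:
        raise ValueError("negative age")
    if age < 18:
        return "minor"
    return "adult"

# One representative test input per equivalence class:
assert classify_age(5) == "minor"    # representative of 0-17
assert classify_age(30) == "adult"   # representative of >= 18
try:
    classify_age(-1)                 # representative of the invalid class
except ValueError:
    print("invalid partition handled")  # prints "invalid partition handled"
```

Branch testing would instead pick inputs so that every branch of `classify_age` executes at least once; for this toy function the two criteria happen to need the same three inputs.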

Relevance:

100.00%

Abstract:

Modern erosion of the Himalaya, the world's largest mountain range, transfers huge dissolved and particulate loads to the ocean. It plays an important role in the long-term global carbon cycle, mostly through enhanced organic carbon burial in the Bengal Fan. To understand the role of past Himalayan erosion, the influence of changing climate and tectonics on erosion must be determined. Here we use a 12 Myr sedimentary record from the distal Bengal Fan (Deep Sea Drilling Project Site 218) to reconstruct the Mio-Pliocene history of Himalayan erosion. We use carbon stable isotopes (δ13C) of bulk organic matter as a paleo-environmental proxy and stratigraphic tool. Multi-isotopic (Sr, Nd and Os) data are used as proxies for the source of the sediments deposited in the Bengal Fan over time. δ13C values of bulk organic matter shift dramatically towards less depleted values, revealing the widespread Late Miocene (ca. 7.4 Ma) expansion of C4 plants in the basin. Sr, Nd and Os isotopic compositions indicate a rather stable erosion pattern in the Himalaya range during the past 12 Myr. This supports the existence of a strong connection between the southern Tibetan plateau and the Bengal Fan. The tectonic evolution of the Himalaya range and Southern Tibet seems to have been unable to produce a large re-organisation of the drainage system. Moreover, our data do not suggest a rapid change of the altitude of the southern Tibetan plateau during the past 12 Myr. Variations in Sr and Nd isotopic compositions around the late Miocene expansion of C4 plants are suggestive of a relative increase in the erosion of High Himalaya Crystalline rock (i.e. a simultaneous reduction of both Transhimalayan batholiths and Lesser Himalaya relative contributions). This could be related to an increase in aridity as suggested by the ecological and sedimentological changes at that time. A reversed trend in Sr and Nd isotopic compositions is observed at the Plio-Pleistocene transition, likely related to higher precipitation and the development of glaciers in the Himalaya. These almost synchronous moderate changes in erosion pattern and climate during the late Miocene and at the Plio-Pleistocene transition support the notion of a dominant control of climate on Himalayan erosion during this time period. However, the stable erosion regime during the Pleistocene suggests a limited influence of glacier development on Himalayan erosion.

Relevance:

90.00%

Abstract:

We generalize the classical notion of Vapnik–Chervonenkis (VC) dimension to ordinal VC-dimension, in the context of logical learning paradigms. Logical learning paradigms encompass the numerical learning paradigms commonly studied in Inductive Inference. A logical learning paradigm is defined as a set W of structures over some vocabulary, and a set D of first-order formulas that represent data. The sets of models of ϕ in W, where ϕ varies over D, generate a natural topology on W. We show that if D is closed under boolean operators, then the notion of ordinal VC-dimension offers a perfect characterization for the problem of predicting the truth of the members of D in a member of W, with an ordinal bound on the number of mistakes. This shows that the notion of VC-dimension has a natural interpretation in Inductive Inference, when cast into a logical setting. We also study the relationships between predictive complexity, selective complexity (a variation on predictive complexity), and mind change complexity. The assumptions that D is closed under boolean operators and that W is compact often play a crucial role in establishing connections between these concepts. We then consider a computable setting with effective versions of the complexity measures, and show that the equivalence between ordinal VC-dimension and predictive complexity fails. More precisely, we prove that the effective ordinal VC-dimension of a paradigm can be defined when all other effective notions of complexity are undefined. On a better note, when W is compact, all effective notions of complexity are defined, though they are not related as in the noncomputable version of the framework.
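The classical VC dimension that this work generalizes is the size of the largest point set a hypothesis class can shatter, i.e. realize every possible labelling of. A brute-force check over threshold classifiers on a small integer domain, purely as illustration:

```python
from itertools import combinations

def shatters(hypotheses, points):
    """True if every +/- labelling of `points` is realized by some hypothesis."""
    labellings = {tuple(h(x) for x in points) for h in hypotheses}
    return len(labellings) == 2 ** len(points)

def vc_dimension(hypotheses, domain):
    """Largest size of a subset of `domain` shattered by `hypotheses`."""
    dim = 0
    for size in range(1, len(domain) + 1):
        if any(shatters(hypotheses, subset)
               for subset in combinations(domain, size)):
            dim = size
    return dim

# Threshold classifiers h_t(x) = (x >= t): any single point can be shattered,
# but no pair can (the labelling "smaller point positive, larger negative"
# is unrealizable), so the VC dimension is 1.
domain = list(range(5))
thresholds = [(lambda x, t=t: x >= t) for t in range(6)]
print(vc_dimension(thresholds, domain))  # 1
```

The ordinal VC-dimension of the paper replaces this integer bound with an ordinal bound on the number of prediction mistakes in the logical setting.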

Relevance:

90.00%

Abstract:

The issue of ‘rigour vs. relevance’ in IS research has generated an intense, heated debate for over a decade. It is possible to identify, however, only a limited number of contributions on how to increase the relevance of IS research without compromising its rigour. Based on a lifecycle view of IS research, we propose the notion of ‘reality checks’ in order to review IS research outcomes in the light of actual industry demands. We assume that five barriers impact the efficient transfer of IS research outcomes; they are lack of awareness, lack of understandability, lack of relevance, lack of timeliness, and lack of applicability. In seeking to understand the effect of these barriers on the transfer of mature IS research into practice, we used focus groups. We chose DeLone and McLean’s IS success model as our stimulus because it is one of the more widely researched areas of IS.

Relevance:

90.00%

Abstract:

The notion of designing with change constitutes a foundational theoretical premise for much of landscape architecture, notably through engagement with ecology, particularly since the work of Ian McHarg in the 1960s and his key text Design with Nature. However, while most if not all texts in landscape architecture cite this engagement with change theoretically, few go further than citation, and when they do, their methods seem fixated on empirical, quantitative scientific tools rather than on the tools of design, in the architectural sense implied by the name of the discipline, landscape architecture.

Relevance:

90.00%

Abstract:

Using examples from contemporary policy and business discourses, and exemplary historical texts dealing with the notion of value, I put forward an argument as to why a critical scholarship that draws on media history, language analysis, philosophy and political economy is necessary to understand the dynamics of what is being called 'the global knowledge economy'. I argue that the social changes associated with new modes of value determination are closely associated with new media forms.