950 results for Ancient languages


Relevance:

20.00%

Abstract:

The study area is characterized by a regional stratigraphic hiatus spanning the Early Miocene to the Quaternary. Deposits of Late Eocene to Early Miocene age occur at the seafloor surface or under a thin sedimentary cover. Ferromanganese nodules, mostly of Oligocene age, formed on the surface layers of Tertiary or Quaternary sediments. A detailed micropaleontological study of a block of dense ancient clay coated with a ferromanganese crust was carried out. The composition of the radiolarian and diatom assemblages found shows that the crust formed during the Quaternary on an eroded surface of Late Oligocene clay. During the Quaternary, Neogene sediments were eroded and washed away by bottom currents. The erosion probably began 0.9-0.7 Ma ago, at the onset of the "Glacial Pleistocene". It could have been initiated by loosening and resuspension of surface sediments caused by seismic activity from strong earthquakes in the Central American subduction zone. The same vibrations kept residual nodules at the seafloor surface. Thus, a common cause and a common Quaternary interval are proposed for the following features of the study area: the regional stratigraphic hiatus, the formation of residual nodule fields, and the position of ancient nodules on the surface of Quaternary sediments.

Relevance:

20.00%

Abstract:

Ancient sediments, as well as sedimentary and igneous rocks, are exposed on the floor and the ocean-side slope of the southern, latitudinal part of the Mariana Trench. In the lower part of the sampled section, Late Oligocene to Early Miocene chalk-like limestones and marls occur. Upward in the section, marly tuffites and tuffs (apparently alternating with carbonate rocks) follow. These rocks are overlain by Early Miocene tuffaceous clays and siliceous-clayey muds. The upper part of the section consists of Pleistocene pelagic clays and Ethmodiscus oozes.

Relevance:

20.00%

Abstract:

We reconstruct aquatic ecosystem interactions since the last interglacial period in the oldest, most diverse, hydrologically connected European lake system, using palaeolimnological diatom and selected geochemical data from the Lake Ohrid "DEEP site" core and equivalent data from the Lake Prespa core Co1215. Driven by climate forcing, the lakes experienced two adaptive cycles during the last 92 ka: an "interglacial and interstadial" cycle and a "glacial" cycle. Short-term ecosystem reorganizations, i.e. regime shifts, within these cycles differ substantially between the lakes, as is evident from the inferred amplitudes of variation. The deeper Lake Ohrid shifted between ultra-oligotrophic and oligotrophic regimes, in contrast to the much shallower Lake Prespa, which shifted from a deeper, (oligo-)mesotrophic state to a shallower, eutrophic one and vice versa. Owing to its high level of ecosystem stability (e.g. in trophic state and lake level), Lake Ohrid appears relatively resistant to external forcing such as climate and environmental change. Lake Prespa, which recovers from major climate change in a relatively short time, is a resilient ecosystem. At the DEEP site, the decoupling of the lakes' responses to climate change is marked by the prolonged and gradual changes during the MIS 5/4 and 2/1 transitions. These differences in response, together with the lakes' different physical and chemical properties, may limit the influence of Lake Prespa on Lake Ohrid. Regime shifts in Lake Ohrid due to potential hydrological change in Lake Prespa are not evident in the data presented here. Moreover, a complete collapse of ecosystem functionality and loss of the diatom communities did not occur in either lake during the period covered by the study.

Relevance:

20.00%

Abstract:

Sedimentary sequences in ancient or long-lived lakes can reach several thousand meters in thickness and often provide an unrivalled perspective on a lake's regional climatic, environmental, and biological history. Over the last few years, deep-drilling projects in ancient lakes have become increasingly multi- and interdisciplinary, as seismological, sedimentological, biogeochemical, climatic, environmental, paleontological, and evolutionary information, among others, can be obtained from sediment cores. However, such multi- and interdisciplinary projects pose several challenges. The scientists involved typically approach problems from different scientific perspectives and backgrounds, and setting up the program requires clear communication and the alignment of interests. One of the most challenging tasks, besides the actual drilling operation, is to link diverse datasets with varying resolution, data quality, and age uncertainties in order to answer interdisciplinary questions synthetically and coherently. These problems are especially relevant when secondary data, i.e., datasets obtained independently of the drilling operation, are incorporated in analyses. Nonetheless, the inclusion of secondary information, such as isotopic data from fossils found in outcrops or genetic data from extant species, may help to achieve synthetic answers. Recent technological and methodological advances in paleolimnology are likely to increase the possibilities of integrating secondary information. Some of the new approaches have started to revolutionize scientific drilling in ancient lakes, but at the same time they add a new layer of complexity to the generation and analysis of sediment-core data. The enhanced opportunities presented by new scientific approaches to studying the paleolimnological history of these lakes therefore come at the expense of greater logistical, communication, and analytical effort. Here we review the types of data that can be obtained in ancient-lake drilling projects and the analytical approaches that can be applied to empirically and statistically link diverse datasets, creating an integrative perspective on geological and biological data. In doing so, we highlight the strengths and potential weaknesses of new methods and analyses, and provide recommendations for future interdisciplinary deep-drilling projects.
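
One recurrent linking step is putting records of differing native resolution onto a common age scale. As a toy illustration only (the tie points, predicate names, and numbers below are invented, not taken from any of the projects discussed), here is a Prolog sketch of linear age-depth interpolation:

    % Hypothetical age-depth tie points for a core, as Depth-AgeKa pairs.
    age_model([0.0-0.0, 10.0-30.0, 25.0-92.0]).

    % age(+Depth, -AgeKa): linear interpolation between consecutive
    % tie points, so samples from different proxies can be compared
    % on one age scale. Uses append/3 from the standard lists library.
    age(Depth, Age) :-
        age_model(Ties),
        append(_, [D1-A1, D2-A2|_], Ties),
        D1 =< Depth, Depth =< D2,
        Age is A1 + (A2 - A1) * (Depth - D1) / (D2 - D1).

Under this invented model, a query such as age(5.0, A) yields A = 15.0; real projects face the further complication that the tie points themselves carry age uncertainties.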

Relevance:

20.00%

Abstract:

In the early 1990s, ontology development was akin to an art: ontology developers had no clear guidelines for building ontologies, only some design criteria to follow. Work on principles, methods, and methodologies, together with supporting technologies and languages, turned ontology development into an engineering discipline, the so-called Ontology Engineering. Ontology Engineering refers to the set of activities that concern the ontology development process and the ontology life cycle, the methods and methodologies for building ontologies, and the tool suites and languages that support them. Thanks to the work done in the Ontology Engineering field, the development of ontologies within and between teams has increased and improved, as has the possibility of reusing ontologies in other developments and in final applications. Currently, ontologies are widely used in (a) Knowledge Engineering, Artificial Intelligence, and Computer Science; (b) applications related to knowledge management, natural language processing, e-commerce, intelligent information integration, information retrieval, database design and integration, bioinformatics, and education; and (c) the Semantic Web, the Semantic Grid, and the Linked Data initiative. In this paper, we provide an overview of Ontology Engineering, mentioning the most outstanding and widely used methodologies, languages, and tools for building ontologies. In addition, we include some words on how all these elements can be used in the Linked Data initiative.
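
To make the kind of artifact under discussion concrete, the following is a minimal, invented ontology fragment expressed as Prolog facts and rules, a common way to prototype RDF-style class hierarchies (the vocabulary is illustrative only, not drawn from any methodology the paper surveys):

    % Invented mini-ontology: a subclass axiom and an instance.
    subclass_of(city, populated_place).
    instance_of(madrid, city).

    % type_of(?X, ?Class): an instance belongs to its asserted class
    % and, transitively, to every superclass of that class.
    type_of(X, C) :-
        instance_of(X, C0),
        superclass(C0, C).

    superclass(C, C).
    superclass(C, A) :-
        subclass_of(C, P),
        superclass(P, A).

Querying type_of(madrid, T) yields both city and populated_place, the sort of simple entailment that ontology languages such as RDFS standardize and that ontology engineering tools build upon.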

Relevance:

20.00%

Abstract:

Over roughly the last ten years, the investigation of construction techniques and materials in ancient Rome has advanced considerably. This work has been directed at obtaining data on chemical composition, and on the action and reaction of materials under meteorological attack or post-depositional displacement. Many of these data must be interpreted as the result of deterioration of and damage to concrete made in a particular landscape with particular meteorological characteristics. Concrete mixtures such as lime and gypsum mortars should be analysed in laboratory test programmes, and not only through descriptions based on the reference works of Strabo, Pliny the Elder, or Vitruvius. Roman manufacture was determined by weather conditions, landscape, natural resources and, of course, the economic situation of the owner. In any case, every aspect of the construction must be investigated. On the one hand, chemical techniques such as X-ray diffraction and optical microscopy reveal the granular arrangement of the mixture. On the other hand, physical and mechanical tests such as compressive strength, capillary absorption on contact, or water behaviour reveal how binder and aggregates react to weathering. We must, however, be able to interpret these results. In the last year, many analyses carried out at archaeological sites in Spain have contributed different points of view and provided new data with which to shape a method for continuing the investigation of Roman mortars. If we carry out chemical and physical analyses of Roman mortars together, and are able to interpret the construction and the resources used, we can come to understand the construction process, its date, and also the appropriate way to restore it in the future.

Relevance:

20.00%

Abstract:

Studying independence of goals has proven very useful in the context of logic programming. In particular, it has provided a formal basis for powerful automatic parallelization tools, since independence ensures that two goals may be evaluated in parallel while preserving correctness and efficiency. We extend the concept of independence to constraint logic programs (CLP) and prove that it also ensures the correctness and efficiency of the parallel evaluation of independent goals. Independence for CLP languages is more complex than for logic programming, as search space preservation is necessary but no longer sufficient for ensuring correctness and efficiency. Two additional issues arise. The first is that the cost of constraint solving may depend upon the order in which constraints are encountered. The second is the need to handle dynamic scheduling. We clarify these issues by proposing various types of search independence and constraint solver independence, and show how they can be combined to allow different optimizations, from parallelism to intelligent backtracking. Sufficient conditions for independence which can be evaluated "a priori" at run-time are also proposed. Our study also yields new insights into independence in logic programming languages. In particular, we show that search space preservation is not only a sufficient but also a necessary condition for ensuring correctness and efficiency of parallel execution.
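
As a flavour of the property being formalized, here is a hedged Prolog sketch: the two goals below share no variables, so they are independent and can safely be run in parallel. The &/2 operator is the and-parallel conjunction of Ciao-style systems, and the predicates and data are invented for illustration:

    % Invented data.
    p(1).
    q(2).

    % p(X) and q(Y) share no variables, so the two calls are
    % independent: evaluating them in parallel with &/2 preserves
    % both the answers and the search space.
    independent_pair(X, Y) :-
        p(X) & q(Y),
        X < Y.

    % By contrast, p(Z) & q(Z) would NOT be independent in general:
    % both goals constrain the same variable Z.

The paper's contribution is precisely to characterize when such parallel evaluation remains safe and efficient once constraints, rather than plain unification, are involved.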

Relevance:

20.00%

Abstract:

We address the problem of developing mechanisms for easily implementing modular extensions to modular (logic) languages. By (language) extensions we refer to different groups of syntactic definitions and translation rules that extend a language. Our use of the concept of modularity in this context is twofold. We would like these extensions to be modular, in the sense above, i.e., we should be able to develop different extensions mostly separately. At the same time, the sources and targets for the extensions are modular languages, i.e., such extensions may take as input separate pieces of code and also produce separate pieces of code. Dealing with this double requirement involves interesting challenges to ensure that modularity is not broken: first, combinations of extensions (as if they were a single extension) must be given a precise meaning. Also, the separate translation of multiple sources (as if they were a single source) must be feasible. We present a detailed description of a code expansion-based framework that proposes novel solutions for these problems. We argue that the approach, while implemented for Ciao, can be adapted for other Prolog-based systems and languages.
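
For readers unfamiliar with such translation rules, the sketch below shows the general shape of a code-expansion extension using term_expansion/2, the classic (non-modular) hook found in several Prolog systems; the fact/1 syntax being added is invented, and Ciao's actual framework, which is the subject of the paper, packages such rules modularly:

    % A tiny syntactic extension: allow source files to write
    %     fact(water_is_wet).
    % and translate each such declaration, at load time, into a
    % plain known_fact/1 clause.
    term_expansion(fact(Name), known_fact(Name)).

Once this rule is loaded, fact(water_is_wet). in a source file compiles as if it read known_fact(water_is_wet).; the challenge the paper tackles is making many such extensions compose predictably across separately compiled modules.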

Relevance:

20.00%

Abstract:

Abstract machines provide a certain separation between platform-dependent and platform-independent concerns in compilation. Many of the differences between architectures are encapsulated in the specific abstract machine implementation, and the bytecode is left largely architecture-independent. Taking advantage of this fact, we present a framework for estimating upper and lower bounds on the execution times of logic programs running on a bytecode-based abstract machine. Our approach includes a one-time, program-independent profiling stage which calculates constants or functions bounding the execution time of each abstract machine instruction. Then, a compile-time cost estimation phase, using the instruction timing information, infers expressions giving platform-dependent upper and lower bounds on actual execution time as functions of input data sizes for each program. Working at the abstract machine level makes it possible to take into account low-level issues in new architectures and platforms by just re-executing the calibration stage, instead of having to tailor the analysis for each architecture and platform. Applications of such predicted execution times include debugging/verification of time properties, certification of time properties in mobile code, granularity control in parallel/distributed computing, and resource-oriented specialization.
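
In symbols (notation ours, not the paper's): if calibration yields a time bound t_i for each abstract machine instruction i, and cost analysis yields c_i(n), a bound on how many times instruction i executes as a function of the input data sizes n, then the platform-dependent bounds take the form

    T_{ub}(n) = \sum_i c_i^{ub}(n) \, t_i^{ub}
    T_{lb}(n) = \sum_i c_i^{lb}(n) \, t_i^{lb}

so porting to a new platform only requires re-measuring the t_i, while the platform-independent c_i(n) are reused unchanged.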

Relevance:

20.00%

Abstract:

In this paper we study, through a concrete case, the feasibility of using a high-level, general-purpose logic language in the design and implementation of applications targeting wearable computers. The case study is a "sound spatializer" which, given real-time signals for monaural audio and heading, generates stereo sound which appears to come from a position in space. The use of advanced compile-time transformations and optimizations made it possible to execute code written in a clear style, without efficiency or architectural concerns, on the target device while meeting strict existing time and memory constraints. The final executable compares favorably with a similar implementation written in C. We believe that this case is representative of a wider class of common pervasive computing applications, and that the techniques we show here can be put to good use in a range of scenarios. This points to the possibility of applying high-level languages, with their associated flexibility, conciseness, ability to be automatically parallelized, sophisticated compile-time tools for analysis and verification, etc., to the embedded systems field without paying an unnecessary performance penalty.
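
To give an idea of the computation involved, a minimal form of spatialization derives left/right gains from the heading angle (equal-power panning). The Prolog sketch below is illustrative only; it is not the paper's algorithm, and the angle convention is assumed:

    % stereo_gains(+HeadingDeg, -GainL, -GainR): equal-power panning.
    % Convention assumed here: heading -90 = hard left, 0 = centre,
    % +90 = hard right.
    stereo_gains(Heading, GL, GR) :-
        Pi is 4.0 * atan(1.0),
        Theta is (Heading + 90.0) * Pi / 360.0,  % map [-90,90] to [0,Pi/2]
        GL is cos(Theta),
        GR is sin(Theta).

    % spatialize(+Sample, +HeadingDeg, -Left, -Right): scale one mono
    % sample into a stereo pair using the heading-derived gains.
    spatialize(S, Heading, L, R) :-
        stereo_gains(Heading, GL, GR),
        L is S * GL,
        R is S * GR.

A real spatializer would also model interaural time differences and run per audio buffer under hard deadlines, which is where the compile-time optimizations discussed in the paper matter.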