884 results for fine-grained quartz
Abstract:
We report evidence for a major ice stream that operated over the northwestern Canadian Shield in the Keewatin Sector of the Laurentide Ice Sheet during the last deglaciation, 9000-8200 (uncalibrated) yr BP. It is reconstructed as 450 km long and 140 km wide, with an estimated catchment area of 190,000 km². Mapping from satellite imagery reveals a suite of bedforms ('flow-set') characterized by a highly convergent onset zone, abrupt lateral margins, and, where flow is presumed to have been fastest, a remarkably coherent pattern of mega-scale glacial lineations with lengths approaching 13 km and elongation ratios in excess of 40:1. Spatial variations in bedform elongation within the flow-set match the expected velocity field of a terrestrial ice stream. The flow pattern does not appear to be steered by topography, and its location on the hard bedrock of the Canadian Shield is surprising. A soft sedimentary basin may have influenced ice-stream activity by lubricating the bed over the downstream crystalline bedrock, but it is unlikely that the ice stream operated over a pervasively deforming till layer. Its location challenges the view that ice streams arise only in deep bedrock troughs or over thick deposits of 'soft' fine-grained sediments. We speculate that fast ice flow may have been triggered when a steep ice-sheet surface gradient with high driving stresses contacted a proglacial lake. An increase in velocity through calving could have propagated fast ice flow upstream (in the vicinity of the Keewatin Ice Divide) through a series of thermomechanical feedback mechanisms. The ice stream exerted a considerable impact on the Laurentide Ice Sheet, forcing the demise of one of the last major ice centres.
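The figures quoted above imply strikingly narrow bedforms; a quick sketch of the arithmetic (using only the numbers given in the abstract) makes this concrete:

```python
# Illustrative arithmetic from the abstract's figures: a lineation 13 km long
# with a 40:1 length-to-width elongation ratio is only a few hundred metres wide.

def lineation_width_m(length_km: float, elongation_ratio: float) -> float:
    """Width (in metres) implied by a length and a length:width ratio."""
    return length_km * 1000.0 / elongation_ratio

width = lineation_width_m(13.0, 40.0)  # ~325 m for the longest lineations
```

Widths on the order of a few hundred metres for features over ten kilometres long are why such elongate bedforms are taken as signatures of fast ice flow.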
Abstract:
The design space of emerging heterogeneous multi-core architectures with reconfigurable elements makes it feasible to design mixed fine-grained and coarse-grained parallel architectures. This paper presents a hierarchical composite array design that extends the current design space of regular array designs by combining a sequence of transformations. The technique is applied to derive a new design of a pipelined parallel regular array with different dataflow between phases of computation.
Abstract:
Motor imagery, passive movement, and movement observation have been suggested to activate the sensorimotor system without overt movement. The present study investigated these three covert movement modes together with overt movement in a within-subject design to allow for a fine-grained comparison of their ability to activate the sensorimotor system, i.e., the premotor, primary motor, and somatosensory cortices. For this, 21 healthy volunteers underwent functional magnetic resonance imaging (fMRI). In addition, we explored the ability of the different covert movement modes to activate the sensorimotor system in a pilot study of 5 stroke patients suffering from chronic severe hemiparesis. Results demonstrated that while all covert movement modes activated sensorimotor areas, there were profound differences between modes and between healthy volunteers and patients. In healthy volunteers, the pattern of neural activation in overt execution was resembled most closely by passive movement, followed by motor imagery, and lastly by movement observation. In patients, attempted overt execution was resembled most closely by motor imagery, followed by passive movement, and lastly by movement observation. Our results indicate that for severely hemiparetic stroke patients, motor imagery may be the preferred way to activate the sensorimotor system without overt behavior. In addition, the clear differences between the covert movement modes point to the need for within-subject comparisons.
Abstract:
In this article, we present additional support for Duffield's (2003, 2005) distinction between Underlying Competence and Surface Competence. Duffield argues that a more fine-grained distinction between levels of competence and performance is warranted and necessary. While underlying competence is categorical, surface competence is more probabilistic and gradient, being sensitive to lexical and constructional contingencies, including the contextual appropriateness of a given construction. We examine a subset of results from a study comparing native and learner competence with respect to properties at the syntax-discourse interface. Specifically, we look at the acceptability of Clitic Right Dislocation in native and L2 Spanish in discourse-appropriate contexts. We argue that Duffield's distinction is a possible explanation of our results.
Abstract:
Delayed ettringite formation (DEF) is a chemical reaction with proven damaging effects on hydrated concrete. Ettringite crystals can initiate cracks and widen them through the pressure exerted on crack walls by the positive volume change of the reaction. Concrete may show improvements in strength at early ages, but further growth of cracks causes widening and spreading through the concrete structure. In this study, finely dispersed crystallization nuclei, achieved by adding an air-entraining agent (AEA) and briefly vibrating the specimens, are presented as the main prerequisite for reducing DEF-induced deterioration of hydrated concrete. The study presents the method and mechanism for obtaining the required nucleation. Controlling long-term DEF by providing AEA-induced crystallization nuclei prevented excessive and rapid initial strength gains and resulted in a slight increase in the compressive strength of fine-grained concrete with only marginally lower density.
Abstract:
Construction procurement is complex, and there is a very wide range of options available to procurers. Inappropriate choices about how to procure may limit practical opportunities for innovation. In particular, traditional approaches to construction procurement set up many obstacles for technology suppliers seeking to provide innovative solutions. This is because they are often employed as sub-contractors simply to provide and install equipment to specifications developed before the point at which they become involved in a project. A research team at the University of Reading has developed a procurement framework that comprehensively defines the various options open to procurers in a more fine-grained way than has been available in the past. This enables informed decisions that can establish tailor-made procurement approaches taking into account the needs of specific clients. It enables risk and reward structures to be configured so that contracts and payment mechanisms align precisely with what a client seeks to achieve. This is not a "one-size-fits-all" approach. Rather, it is an approach that enables informed decisions about how to organize individual procurements appropriate to particular circumstances, acknowledging that these differ for each client and for each procurement exercise. Within this context, performance-based contracting (PBC) is explored in terms of the different ways in which technology suppliers within constructed facilities might be encouraged and rewarded for the kinds of innovation sought by the ultimate clients. Examples from various industry sectors are presented, from both the public and private sectors, with a commentary about what they sought to achieve and the extent to which they were successful. The lessons from these examples are presented in terms of feasibility in relation to financial, governance, economic, strategic, contractual and cash-flow issues for clients and for contractors.
Further background documents and more detailed readings are provided in an appendix for those who wish to find out more.
Abstract:
Deposit modelling based on archived borehole logs supplemented by a small number of dedicated boreholes is used to reconstruct the main boundary surfaces and the thickness of the main sediment units within the succession of Holocene alluvial deposits underlying the floodplain in the Barking Reach of the Lower Thames Valley. The basis of the modelling exercise is discussed and the models are used to assess the significance of floodplain relief in determining patterns of sedimentation. This evidence is combined with the results of biostratigraphical and geochronological investigations to reconstruct the environmental conditions associated with each successive stage of floodplain aggradation. The two main factors affecting the history and spatial pattern of Holocene sedimentation are shown to be the regional behaviour of relative sea level and the pattern of relief on the surface of the sub-alluvial, Late Devensian Shepperton Gravel. As is generally the case in the Lower Thames Valley, three main stratigraphic units are recognised, the Lower Alluvium, a peat bed broadly equivalent to the Tilbury III peat of Devoy (1979) and an Upper Alluvium. There is no evidence to suggest that the floodplain was substantially re-shaped by erosion during the Holocene. Instead, the relief inherited from the Shepperton Gravel surface was gradually buried either by the accumulation of peat or by deposition of fine-grained sediment from suspension in standing or slow-moving water. The palaeoenvironmental record from Barking confirms important details of the Holocene record observed elsewhere in the Lower Thames Valley, including the presence of Taxus in the valley-floor fen carr woodland between about 5000 and 4000 cal BP, and the subsequent growth of Ulmus on the peat surface.
Abstract:
This paper outlines the results of a programme of radiocarbon dating and Bayesian modelling relating to an Early Bronze Age barrow cemetery at Over, Cambridgeshire. In total, 43 dates were obtained, enabling the first high-resolution independent chronology (relating to both burial and architectural events) to be constructed for a site of this kind. The results suggest that the three main turf-mound barrows were probably constructed and used successively rather than simultaneously, that the shift from inhumation to cremation seen on the site was not a straightforward progression, and that the four main 'types' of cremation burial in evidence were used throughout the life of the site. Overall, variability in terms of burial practice appears to have been a key feature of the site. The paper also considers the light that the fine-grained chronology developed here can shed on recent, much wider discussions of memory and time within Early Bronze Age barrows.
Abstract:
Despite the generally positive contribution of supply management capabilities to firm performance, their underlying routines require more in-depth assessment. Using the resource-based view, we examine four routine bundles comprising ostensive and performative aspects of supply management capability: supply management integration, coordinated sourcing, collaboration management and performance assessment. Using structural equation modelling, we measure supply management capability empirically as a second-order latent variable and estimate its effect on a series of financial and operational performance measures. The routines-based approach allows us to demonstrate a different, more fine-grained approach to assessing consistent bundles of homogeneous patterns of activity across firms. The results suggest supply management capability is formed of internally consistent routine bundles, which are significantly related to financial performance, mediated by operational performance. Our results confirm an indirect effect on firm performance for 'core' routines forming the architecture of a supply management capability. Supply management capability primarily improves the operational performance of the business, which is subsequently translated into improved financial performance. The study is significant for practice as it offers a different view of the face-valid rationale that supply management directly influences firm financial performance. Our findings challenge this assumption, prompting caution against placing too much importance on assessing supply management capability directly through the financial performance of the business.
Abstract:
The genus Copidognathus includes one-third of the species of Halacaridae described to date. This article describes spermiogenesis, sperm cell morphology and accompanying secretions in three species of Copidognathus. Initial spermatids have electron-dense cytoplasm with scattered mitochondria, a well-developed Golgi body, and nuclei with patches of heterochromatin. The cytoplasm and nuclei of these cells undergo intense swelling. The second spermatids are large electron-translucent cells, with small mitochondria arranged in a row along the remains of the endoplasmic reticulum. In the succeeding stage, most of the cytoplasmic structures and mitochondria have disappeared or have undergone profound transformations. Nuclei and cells elongate, and chromatin begins to condense near the nuclear envelope. An acrosomal complex appears at the tip of the nucleus. The acrosomal filament is thick and runs the entire length of the nucleus. Plasmalemmal invaginations at the cell surface give rise to tubules filled with an electron-dense material. Sperm cell maturation is completed in the ventral portion of the germinal part, near the testicular lumen. As a final step in spermiogenesis, the cytoplasm of the last spermatid undergoes a moderate condensation and the karyotheca disappears. Mature sperm cells were found in a matrix of "simple" and "complex" corpuscles, the latter consisting of flattened, spindle-shaped secreted bodies. Rather than forming individual sperm aggregates, spermatozoa were contained in a single droplet inside the vas deferens, within a large secretion mass structured as rows of platelets sunk in a fine-grained matrix. Each mature sperm cell is covered by a thick secreted coat. In contrast to the genus Rhombognathus and other Actinotrichida, Copidognathus displays a set of features that must be regarded as apomorphic. The absence of typical mitochondria, the presence of electron-dense tubules and of secretions similar to those present in Thalassarachna and Halacarellus, and the pattern of nuclear condensation are possibly shared apomorphies with these latter genera.
Abstract:
While watching TV, viewers use the remote control to turn the TV set on and off, change the channel and volume, adjust the image and audio settings, etc. Worldwide, research institutes collect audience measurement information, which can also be used to provide personalization and recommendation services, among others. Interactive digital TV offers viewers the opportunity to interact with interactive applications associated with the broadcast program. The interactive TV infrastructure supports the capture of user-TV interaction at fine-grained levels. In this paper we propose capturing all user interaction with a TV remote control, including short-term and instant interactions: we argue that the captured information can be used to create content pervasively and automatically, and that this content can serve a wide variety of services, such as audience measurement, personalization, and recommendation services. The capture of fine-grained data about instant and interval-based interactions also allows the underlying infrastructure to offer services at the same scale, such as annotation services and adaptive applications. We present the main modules of an infrastructure for TV-based services, along with a detailed example of a document used to record the user-remote control interaction. Our approach is evaluated by means of a proof-of-concept prototype built on the Brazilian Digital TV System and the Ginga-NCL middleware.
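The kind of captured interaction record described above can be sketched as a simple data structure. The field names below are illustrative assumptions for this sketch, not the actual recording schema used by the authors' infrastructure:

```python
from dataclasses import dataclass, asdict
import time

@dataclass
class RemoteControlEvent:
    """One captured user-remote control interaction (illustrative schema)."""
    viewer_id: str    # anonymised viewer identifier
    button: str       # key pressed, e.g. "VOLUME_UP", "OK", "CHANNEL_3"
    channel: int      # channel tuned when the event occurred
    timestamp: float  # capture time, seconds since the epoch

# An "instant" interaction: a single key press captured as one event.
event = RemoteControlEvent("viewer-01", "VOLUME_UP", 5, time.time())
record = asdict(event)  # serializable form, ready to log or transmit
```

Interval-based interactions (e.g. how long a viewer stayed on a channel) could then be derived downstream by pairing consecutive events per viewer.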
Abstract:
The Early Cretaceous alkaline magmatism in the northeastern region of Paraguay (Amambay Province) is represented by stocks, plugs, dikes, and dike swarms emplaced into Carboniferous to Triassic-Jurassic sediments and Precambrian rocks. This magmatism is tectonically related to the Ponta Pora Arch, a NE-trending structural feature, and has the Cerro Sarambi and Cerro Chiriguelo carbonatite complexes as its most significant expressions. Other alkaline occurrences found in the area are the Cerro Guazu and the small bodies of Cerro Apua, Arroyo Gasory, Cerro Jhu, Cerro Tayay, and Cerro Teyu. The alkaline rocks comprise ultramafic-mafic, syenitic, and carbonatitic petrographic associations, in addition to lithologies of variable composition and texture occurring as dikes; fenites are described in both carbonatite complexes. Alkali feldspar and clinopyroxene, ranging from diopside to aegirine, are the most abundant minerals, with feldspathoids (nepheline, analcime), biotite, and subordinate Ti-rich garnet; minor constituents are Fe-Ti oxides and cancrinite, the main alteration product of nepheline. Chemically, the Amambay silicate rocks are potassic to highly potassic and have miaskitic affinity, with the non-cumulate intrusive types concentrated mainly in the silica-saturated to silica-undersaturated syenitic fields. Fine-grained rocks are also of syenitic affiliation or represent more mafic varieties. The carbonatitic rocks consist dominantly of calciocarbonatites. Variation diagrams plotting major and trace elements vs. SiO2 concentration for the Cerro Sarambi rocks show positive correlations for Al2O3, K2O, and Rb, and negative ones for TiO2, MgO, Fe2O3, CaO, P2O5, and Sr, indicating that fractional crystallization played an important role in the formation of the complex. Incompatible elements normalized to primitive mantle display positive spikes for Rb, La, Pb, Sr, and Sm, and negative ones for Nb-Ta, P, and Ti, with the negative anomalies considerably more pronounced in the carbonatites. Chondrite-normalized REE patterns point to the high concentration of these elements and to strong LREE/HREE fractionation. The Amambay rocks are highly enriched in radiogenic Sr and have TDM model ages varying from 1.6 to 1.1 Ga, suggesting a mantle source enriched in incompatible elements by metasomatic events in Paleo-Mesoproterozoic times. The data are consistent with derivation of the Cerro Sarambi rocks from a parental magma of lamprophyric (minette) composition and suggest an origin by liquid immiscibility processes for the carbonatites.
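The primitive-mantle normalization mentioned above is a simple element-by-element ratio. In this sketch both the sample concentrations and the reference values are invented placeholders (rounded, reference-style numbers), not data from this study:

```python
# Primitive-mantle normalization: divide each measured concentration (ppm)
# by the corresponding primitive-mantle reference value.
# All numbers below are illustrative placeholders, not data from the paper.

primitive_mantle_ppm = {"Rb": 0.6, "Sr": 20.0, "Nb": 0.7, "La": 0.65}
sample_ppm = {"Rb": 90.0, "Sr": 1200.0, "Nb": 7.0, "La": 130.0}  # hypothetical sample

normalized = {el: sample_ppm[el] / primitive_mantle_ppm[el] for el in sample_ppm}
# High normalized Sr and La with comparatively low Nb would plot as the
# spike/trough pattern ("positive spikes ... negative for Nb-Ta") described above.
```

Chondrite normalization of the REE follows the same recipe with chondrite reference values in the denominator.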
Abstract:
Electronic applications are currently developed under the reuse-based paradigm. This design methodology presents several advantages for reducing design complexity, but brings new challenges for testing the final circuit. Access to embedded cores, the integration of several test methods, and the optimization of several cost factors are just a few of the problems that must be tackled during test planning. Within this context, this thesis proposes two test planning approaches that aim at reducing the test costs of a core-based system by means of hardware reuse and integration of test planning into the design flow. The first approach considers systems whose cores are connected directly or through a functional bus. The test planning method consists of a comprehensive model that includes the definition of a multi-mode access mechanism inside the chip and a search algorithm for the exploration of the design space. The access mechanism model considers the reuse of functional connections as well as partial test buses, core transparency, and other bypass modes. The test schedule is defined in conjunction with the access mechanism so that good trade-offs among the costs of pins, area, and test time can be sought. Furthermore, system power constraints are also considered. This expansion of concerns makes an efficient yet fine-grained search possible in the huge design space of a reuse-based environment. Experimental results clearly show the variety of trade-offs that can be explored using the proposed model, and its effectiveness in optimizing the system test plan. Networks-on-chip are likely to become the main communication platform of systems-on-chip. Thus, the second approach presented in this work proposes reusing the on-chip network for testing the cores embedded in systems that use this communication platform.
A power-aware test scheduling algorithm that exploits the network characteristics to minimize the system test time is presented. The reuse strategy is evaluated for a number of system configurations, such as different positions of the cores in the network, power consumption constraints, and the number of interfaces with the tester. Experimental results show that the parallelization capability of the network can be exploited to reduce the system test time, while area and pin overhead are strongly minimized. In this manuscript, the main problems of testing core-based systems are first identified and the current solutions are discussed. The problems tackled by this thesis are then listed and the test planning approaches are detailed. Both test planning techniques are validated on the recently released ITC'02 SoC Test Benchmarks and further compared to other test planning methods in the literature. This comparison confirms the efficiency of the proposed methods.
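The core idea of power-constrained test scheduling, running core tests in parallel as long as their summed power stays under a budget, can be sketched with a simple greedy heuristic. The cores, test times, and power figures below are invented for illustration; this is not the thesis algorithm:

```python
def greedy_power_schedule(tests, power_budget):
    """Pack core tests into parallel sessions so that the summed power of
    concurrently running tests never exceeds the budget.
    Illustrative first-fit-decreasing heuristic, not the thesis algorithm.
    Each test is a tuple (core_name, test_time, power)."""
    remaining = sorted(tests, key=lambda t: t[1], reverse=True)  # longest first
    sessions = []
    while remaining:
        session, used_power = [], 0
        for t in remaining[:]:
            if used_power + t[2] <= power_budget:
                session.append(t)
                used_power += t[2]
                remaining.remove(t)
        sessions.append(session)
    # Each session's length is set by its slowest test; sessions run back-to-back.
    total_time = sum(max(t[1] for t in s) for s in sessions)
    return sessions, total_time

tests = [("coreA", 100, 40), ("coreB", 80, 30), ("coreC", 60, 50), ("coreD", 40, 20)]
sessions, total_time = greedy_power_schedule(tests, power_budget=90)
```

A real scheduler would additionally model access-mechanism conflicts and network routing, which is where the parallelization gains described above come from.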
Abstract:
The work described in this thesis aims to support the distributed design of integrated systems, with specific attention to the need for collaborative interaction among designers. Particular emphasis was given to issues that were only marginally considered in previous approaches, such as abstracting the distribution of design automation resources over the network, enabling both synchronous and asynchronous interaction among designers, and supporting extensible design data models. Such issues demand a rather complex software infrastructure, as possible solutions must encompass a wide range of software modules: from user interfaces to middleware to databases. To build such a structure, several engineering techniques were employed and some original solutions were devised. The core of the proposed solution is based on the joint application of two homonymous technologies: CAD Frameworks and object-oriented frameworks. The former concept was coined in the late 1980s within the electronic design automation community and comprises a layered software environment that supports CAD tool developers, CAD administrators/integrators, and designers. The latter, developed during the last decade by the software engineering community, is a software architecture model for building extensible and reusable object-oriented software subsystems. In this work, we proposed to create an object-oriented framework that includes extensible sets of design data primitives and design tool building blocks. This object-oriented framework is included within a CAD Framework, where it plays important roles in typical CAD Framework services such as design data representation and management, versioning, user interfaces, design management, and tool integration.
The implemented CAD Framework, named Cave2, followed the classical layered architecture presented by Barnes, Harrison, Newton and Spickelmier, but the possibilities granted by the object-oriented framework foundations allowed a series of improvements that were not available in previous approaches: - object-oriented frameworks are extensible by design, so the same holds for the implemented sets of design data primitives and design tool building blocks. This means that both the design representation model and the software modules dealing with it can be upgraded or adapted to a particular design methodology, and that such extensions and adaptations will still inherit the architectural and functional aspects implemented in the object-oriented framework foundation; - the design semantics and the design visualization are both part of the object-oriented framework, but in clearly separated models. This allows for different visualization strategies for a given design data set, which gives collaborating parties the flexibility to choose individual visualization settings; - the control of consistency between semantics and visualization, a particularly important issue in a design environment with multiple views of a single design, is also included in the foundations of the object-oriented framework. This mechanism is generic enough to be used by further extensions of the design data model as well, as it is based on the inversion of control between view and semantics. The view receives the user input and propagates the event to the semantic model, which evaluates whether a state change is possible. If so, it triggers the change of state of both semantics and view.
Our approach took advantage of this inversion of control and included a layer between semantics and view to take into account the possibility of multi-view consistency; - to optimize the consistency control mechanism between views and semantics, we propose an event-based approach that captures each discrete interaction of a designer with his/her respective design views. The information about each interaction is encapsulated inside an event object, which may be propagated to the design semantics, and thus to other possible views, according to the consistency policy in use. Furthermore, the use of event pools allows for late synchronization between view and semantics in case a network connection between them is unavailable; - the use of proxy objects significantly raised the level of abstraction of the integration of design automation resources, as both remote and local tools and services are accessed through method calls on a local object. The connection to remote tools and services using a look-up protocol also completely abstracted the network location of such resources, allowing for resource addition and removal at runtime; - the implemented CAD Framework is completely based on Java technology, so it relies on the Java Virtual Machine as the layer that grants independence between the CAD Framework and the operating system. All these improvements contributed to a higher abstraction of the distribution of design automation resources and also introduced a new paradigm for remote interaction between designers. The resulting CAD Framework is able to support fine-grained collaboration based on events, so every single design update performed by a designer can be propagated to the rest of the design team regardless of their location in the distributed environment.
This can increase group awareness and allow a richer transfer of experiences among designers, significantly improving the collaboration potential when compared to previously proposed file-based or record-based approaches. Three different case studies were conducted to validate the proposed approach, each one focusing on a subset of the contributions of this thesis. The first uses the proxy-based resource distribution architecture to implement a prototyping platform using reconfigurable hardware modules. The second extends the foundations of the implemented object-oriented framework to support interface-based design. These extensions, design representation primitives and tool blocks, are used to implement a design entry tool named IBlaDe, which allows the collaborative creation of functional and structural models of integrated systems. The third case study regards the possibility of integrating multimedia metadata into the design data model. This possibility is explored in the frame of an online educational and training platform.
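The view-to-semantics inversion of control described in this abstract follows the familiar observer pattern: the view only requests a state change, the semantic model validates it and notifies every registered view. This minimal sketch (class and state names invented, written in Python rather than the thesis's Java) shows the mechanism under those assumptions:

```python
class SemanticModel:
    """Holds the design state; views never mutate it directly (illustrative)."""
    def __init__(self):
        self._state = "idle"
        self._views = []
        self._allowed = {("idle", "editing"), ("editing", "idle")}  # valid transitions

    def attach(self, view):
        self._views.append(view)

    def request_change(self, new_state):
        # Inversion of control: the view asks, the semantic model decides.
        if (self._state, new_state) not in self._allowed:
            return False
        self._state = new_state
        for view in self._views:  # propagate the accepted change to every view
            view.refresh(new_state)
        return True

class View:
    def __init__(self, model):
        self.shown = None
        model.attach(self)

    def refresh(self, state):
        self.shown = state

model = SemanticModel()
v1, v2 = View(model), View(model)
ok = model.request_change("editing")  # accepted: both views refresh
bad = model.request_change("review")  # rejected: views keep their state
```

Queuing such requests as event objects, instead of dispatching them synchronously, is what enables the late view/semantics synchronization over an unreliable network that the abstract mentions.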
Abstract:
Extant literature has examined the benefits of relational embeddedness in facilitating collaboration between organizations, as well as the need for firms to balance their knowledge generation between exploration and exploitation activities. However, the effects of relational embeddedness on the specific outputs of firm-university collaborations, as well as the elements that affect the exploratory nature of such outcomes, remain underexplored. By examining fine-grained data on more than 4,000 collaborative research and development projects between a firm and universities, 5,000 patents, and 300,000 scientific publications, it was proposed that relational embeddedness would have a positive effect on resource commitment and on joint scientific publications, but a negative effect on joint patents and on the exploratory outcomes resulting from such collaborations. Additionally, it was proposed that knowledge similarity would have a negative impact on exploratory endeavors in such projects. Although some of the propositions were not supported by the data, this study revealed that relational embeddedness increases resource commitment and the production of joint scientific publications in such partnerships. Finally, this study presents interesting opportunities for future research.