867 results for Productive chain of digital medias


Relevance:

100.00%

Publisher:

Abstract:

The South China Sea (SCS) is one of the most active areas of internal waves. We undertook a program of physical oceanography in the northern South China Sea from June to July of 2009, and conducted a 1-day observation from 15:40 on June 24 to 16:40 on June 25 using a chain of instruments, including temperature sensors, pressure sensors and temperature-pressure meters, at a site (117.5°E, 21°N) northeast of the Dongsha Islands. We measured fluctuating tidal and subtidal properties with the thermistor chain and a ship-mounted Acoustic Doppler Current Profiler, and observed a large-amplitude nonlinear internal wave passing the site, followed by a number of smaller ones. To further investigate this phenomenon, we extracted the tidal constituents from the TPXO7.1 dataset to evaluate the tidal characteristics at and around the recording site. The amplitude of the nonlinear internal wave was about 120 m and its period about 20 min. The horizontal and vertical velocities induced by the soliton were approximately 2 m/s and 0.5 m/s, respectively. The soliton occurred 2-3 days after a spring tide.

Relevance:

100.00%

Publisher:

Abstract:

The area of the southwestern Nansha Trough is one of the most productive areas of the southern South China Sea. It is a typical semi-deep-sea area of transition from shoal to abyssal zone. To understand the distributions and roles of nitrogen forms involved in biogeochemical cycling in this area, the contents of nitrogen in four extractable forms: nitrogen in ion-exchangeable form (IEF-N), nitrogen in weak-acid-extractable form (WAEF-N), nitrogen in strong-alkali-extractable form (SAEF-N) and nitrogen in strong-oxidation-extractable form (SOEF-N), as well as the total nitrogen content (TN), were determined in surface sediments sampled during a cruise in April-May 1999. The study area was divided into three regions (A, B and C) according to clay (< 4 μm) content: < 40%, 40%-60% and > 60%, respectively. In general, region C was the richest in nitrogen of all forms and region A the poorest, indicating that the finer the grain size, the higher the contents of the various nitrogen forms. The burial efficiency of total nitrogen in surface sediments was 28.79%, indicating that more than 70% of the nitrogen had been released and had re-entered biogeochemical cycling through the sediment-water interface.

Relevance:

100.00%

Publisher:

Abstract:

A natural lectin from the serum of the shrimp Litopenaeus vannamei was purified to homogeneity by single-step affinity chromatography using fetuin-coupled agarose. The purified serum lectin (named LVL) showed a strong affinity for human A/B/O erythrocytes (RBC), mouse RBC and chicken RBC, and its haemagglutinating (HA) activity was specifically dependent on Ca2+ and reversibly sensitive to EDTA. The intact LVL molecule had an estimated molecular mass of 172 kDa and was composed of two non-identical subunits (32 and 38 kDa) cross-linked by interchain disulphide bonds. Significant LVL activity was observed between pH 7 and 11. In HA-inhibition assays performed with several carbohydrates and glycoproteins, LVL showed a distinct specificity for GalNAc, GlcNAc and NeuAc, all of which carry an acetyl group, and for the sialic-acid-bearing glycoproteins fetuin and bovine submaxillary mucin (BSM). Moreover, this agglutinin appeared to recognise the terminal N- and O-acetyl groups in the oligosaccharide chains of glycoconjugates. The HA activity of the L. vannamei lectin was also susceptible to inhibition by lipopolysaccharides from diverse Gram-negative bacteria, which might indicate a significant in vivo role for this humoral agglutinin in the host immune response against bacterial infections.

Relevance:

100.00%

Publisher:

Abstract:

Effects of grazing intensity on leaf photosynthetic rate (Pn), specific leaf area (SLA), individual tiller density, sward leaf area index (LAI), harvested herbage DM, and species composition in grass mixtures (Clinelymus nutans + Bromus inermis; Elymus nutans + Bromus inermis + Agropyron cristatum; and Elymus nutans + Clinelymus nutans + Bromus inermis + Agropyron cristatum) were studied in the alpine region of the Tibetan Plateau. Four grazing intensities (GI), expressed as feed utilisation rates (UR) by Tibetan lambs, were imposed: (1) no grazing; (2) 30% UR, light grazing; (3) 50% UR, medium grazing; and (4) 70% UR, high grazing. Leaf Pn and tiller density of the grasses increased (P < 0.05), while sward LAI and harvested herbage DM declined (P < 0.05) with increasing GI, although no effect of GI on SLA was observed. With increasing GI, the contributions of Elymus nutans and Clinelymus nutans to sward LAI and DM increased, while those of Bromus inermis and Agropyron cristatum decreased. Whether grazed or not, Elymus nutans + Clinelymus nutans + Bromus inermis + Agropyron cristatum was the most productive of the grass mixtures. Thus, the two well-performing grass species (Elymus nutans and Clinelymus nutans) and the most productive four-species mixture should be investigated further as new feed resources in the alpine grazing systems of the Tibetan Plateau. A light grazing intensity of 30% UR is recommended for these grass mixtures when sward LAI, harvested herbage DM, and species compatibility are taken into account.

Relevance:

100.00%

Publisher:

Abstract:

"The Structure and Interpretation of Computer Programs" is the entry-level subject in Computer Science at the Massachusetts Institute of Technology. It is required of all students at MIT who major in Electrical Engineering or in Computer Science, as one fourth of the "common core curriculum," which also includes two subjects on circuits and linear systems and a subject on the design of digital systems. We have been involved in the development of this subject since 1978, and we have taught this material in its present form since the fall of 1980 to approximately 600 students each year. Most of these students have had little or no prior formal training in computation, although most have played with computers a bit and a few have had extensive programming or hardware design experience. Our design of this introductory Computer Science subject reflects two major concerns. First we want to establish the idea that a computer language is not just a way of getting a computer to perform operations, but rather that it is a novel formal medium for expressing ideas about methodology. Thus, programs must be written for people to read, and only incidentally for machines to execute. Secondly, we believe that the essential material to be addressed by a subject at this level, is not the syntax of particular programming language constructs, nor clever algorithms for computing particular functions of efficiently, not even the mathematical analysis of algorithms and the foundations of computing, but rather the techniques used to control the intellectual complexity of large software systems.

Relevance:

100.00%

Publisher:

Abstract:

Tedd, L.A. & Large, A. (2005). Digital libraries: principles and practice in a global environment. Munich: K.G. Saur.

Relevance:

100.00%

Publisher:

Abstract:

This volume is devoted to the broad topic of distributed digital preservation, a still-emerging field of practice for the cultural memory arena. Replication and distribution hold out the promise of indefinite preservation of materials without degradation, but establishing effective organizational and technical processes to enable this form of digital preservation is daunting. Institutions need practical examples of how this task can be accomplished in manageable, low-cost ways.

Relevance:

100.00%

Publisher:

Abstract:

A key goal of computational neuroscience is to link brain mechanisms to behavioral functions. The present article describes recent progress towards explaining how laminar neocortical circuits give rise to biological intelligence. These circuits embody two new and revolutionary computational paradigms: Complementary Computing and Laminar Computing. Circuit properties include a novel synthesis of feedforward and feedback processing, of digital and analog processing, and of pre-attentive and attentive processing. This synthesis clarifies the appeal of Bayesian approaches but has a far greater predictive range that naturally extends to self-organizing processes. Examples from vision and cognition are summarized. A LAMINART architecture unifies properties of visual development, learning, perceptual grouping, attention, and 3D vision. A key modeling theme is that the mechanisms which enable development and learning to occur in a stable way imply properties of adult behavior. It is noted how higher-order attentional constraints can influence multiple cortical regions, and how spatial and object attention work together to learn view-invariant object categories. In particular, a form-fitting spatial attentional shroud can allow an emerging view-invariant object category to remain active while multiple view categories are associated with it during sequences of saccadic eye movements. Finally, the chapter summarizes recent work on the LIST PARSE model of cognitive information processing by the laminar circuits of prefrontal cortex. LIST PARSE models the short-term storage of event sequences in working memory, their unitization through learning into sequence, or list, chunks, and their read-out in planned sequential performance that is under volitional control. LIST PARSE provides a laminar embodiment of Item and Order working memories, also called Competitive Queuing models, that have been supported by both psychophysical and neurobiological data. These examples show how variations of a common laminar cortical design can embody properties of visual and cognitive intelligence that seem, at least on the surface, to be mechanistically unrelated.
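
As a rough illustration of the Item-and-Order (Competitive Queuing) working-memory readout mentioned above, the sketch below is a minimal Python approximation, not code from the article: a sequence is stored as a primacy gradient of activations, and read-out proceeds by repeatedly selecting the most active item and then suppressing it.

```python
import numpy as np

def cq_readout(activations):
    """Competitive Queuing readout sketch: repeatedly select the most
    active stored item and suppress it so the next item can win."""
    act = np.array(activations, dtype=float)
    order = []
    for _ in range(len(act)):
        winner = int(np.argmax(act))   # competitive choice of the strongest item
        order.append(winner)
        act[winner] = -np.inf          # self-inhibition after performance
    return order

# Items encoded with a primacy gradient (earlier items more active),
# so the readout reproduces the stored serial order.
print(cq_readout([0.9, 0.7, 0.5, 0.3]))  # -> [0, 1, 2, 3]
```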

Relevance:

100.00%

Publisher:

Abstract:

A new family of neural network architectures is presented. This family of architectures solves the problem of constructing and training minimal neural network classification expert systems by using switching theory. The primary insight that leads to the use of switching theory is that the problem of minimizing the number of rules and the number of IF statements (antecedents) per rule in a neural network expert system can be recast into the problem of minimizing the number of digital gates and the number of connections between digital gates in a Very Large Scale Integrated (VLSI) circuit. The rules that the neural network generates to perform a task are readily extractable from the network's weights and topology. Analysis and simulations on the Mushroom database illustrate the system's performance.
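
To make the recasting concrete, the hedged sketch below (illustrative only, not the paper's architecture) treats a toy rule set as a boolean function and applies two-level minimisation via SymPy's Quine-McCluskey implementation. Fewer product terms correspond to fewer rules, and fewer literals per term to fewer antecedents, which are exactly the quantities that gate-and-connection minimisation targets in a VLSI circuit. The features a, b, c are hypothetical.

```python
from sympy import symbols
from sympy.logic.boolalg import SOPform

# Hypothetical binary features standing in for rule antecedents.
a, b, c = symbols('a b c')

# Input combinations for which a toy classifier should output class 1,
# i.e. one unminimised rule per satisfying assignment.
minterms = [[0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 1]]

# Two-level boolean minimisation: the result has fewer product terms
# (rules) and fewer literals per term (antecedents) than the raw list.
print(SOPform([a, b, c], minterms))  # -> (a & b) | (a & c) | (b & c)
```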

Relevance:

100.00%

Publisher:

Abstract:

The development of ultra-high-speed (~20 Gsamples/s) analogue-to-digital converters (ADCs), and the delayed deployment of 40 Gbit/s transmission due to the economic downturn, have stimulated the investigation of digital signal processing (DSP) techniques for compensation of optical transmission impairments. In the future, DSP will offer an entire suite of tools to compensate for optical impairments and facilitate the use of advanced modulation formats. Chromatic dispersion is a very significant impairment for high-speed optical transmission. This thesis investigates a novel electronic method of dispersion compensation which allows cost-effective, accurate detection of the amplitude and phase of the optical field in the radio-frequency domain. The first electronic dispersion compensation (EDC) schemes accessed only the amplitude information, using square-law detection, and achieved an increase in transmission distances. This thesis presents a method that uses a frequency-sensitive filter to estimate the phase of the received optical field so that, in conjunction with the amplitude information, the entire field can be digitised using ADCs. This allows DSP technologies to take the next step in optical communications without requiring complex coherent detection, which is of particular interest in metropolitan area networks. The full-field receiver investigated requires only an additional asymmetric Mach-Zehnder interferometer and balanced photodiode, and achieves a 50% increase in EDC reach compared with amplitude-only detection.
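
Once the full field (amplitude and phase) is available in digital form, chromatic dispersion can be removed with a frequency-domain all-pass filter. The sketch below is a minimal NumPy illustration of that general principle, not the receiver implemented in the thesis; the parameter names and the sign convention for the fibre transfer function are assumptions.

```python
import numpy as np

def compensate_dispersion(field, sample_rate, beta2, length):
    """Frequency-domain chromatic dispersion compensation sketch.

    field       : complex baseband samples of the reconstructed optical field
    sample_rate : sampling rate in Hz
    beta2       : group-velocity dispersion parameter in s^2/m
    length      : fibre length in m
    """
    n = len(field)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / sample_rate)  # rad/s
    # Assuming the fibre imposes exp(-1j * beta2/2 * omega^2 * L),
    # applying the conjugate phase undoes the dispersion.
    h_inv = np.exp(1j * (beta2 / 2.0) * omega**2 * length)
    return np.fft.ifft(np.fft.fft(field) * h_inv)
```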

Relevance:

100.00%

Publisher:

Abstract:

With the proliferation of mobile wireless communication and embedded systems, energy efficiency has become a major design constraint. The dissipated energy is often expressed as the product of power dissipation and the input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry-standard design flows integrate systematic methods of optimising either area or timing, while for power consumption optimisation one often employs heuristics that are specific to a particular design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation. The first question of our research is: how can a design flow be built which incorporates academic and industry-standard design flows for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate academic tools and methodologies into it. The proposed design flow is used as a platform for analysing some novel algorithms and methodologies for optimisation in the context of digital circuits. The second question we answer is: is it possible to apply a systematic approach to power optimisation in the context of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power and area, and which then allows optimisation algorithms to be applied. In particular we address the implications of systematic power optimisation methodologies and the potential degradation of other (often conflicting) parameters such as area or the delay of the implementation. Finally, the third question which this thesis attempts to answer is: is there a systematic approach to multi-objective optimisation of delay and power? A delay-driven power optimisation and a power-driven delay optimisation are proposed in order to obtain balanced delay and power values. This implies that each power optimisation step is constrained not only by the decrease in power but also by the increase in delay. Similarly, each delay optimisation step is governed not only by the decrease in delay but also by the increase in power. The goal is multi-objective optimisation of digital circuits in which the two conflicting objectives are power and delay. The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. The switching activities and arrival times of circuit nodes are annotated onto the AND-Inverter Graph under zero-delay and non-zero-delay models. We then introduce several reordering rules which are applied to the AIG nodes to minimise the switching power or the longest-path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in Electronic Design Automation, such as Simulated Annealing and Uniform Cost Search. Simulated Annealing is a probabilistic metaheuristic for global optimisation that locates a good approximation to the global optimum of a given function in a large search space. We used it to decide probabilistically whether to move from one optimised solution to another, such that dynamic power is optimised under given delay constraints and delay is optimised under given power constraints. A good approximation to the global optimum under the energy constraint is obtained. Uniform Cost Search (UCS) is a search algorithm used for traversing or searching a weighted tree, tree structure, or graph. We used it to search within the AIG network for a specific AIG node order in which to apply the reordering rules. After the reordering rules are applied, the AIG network is mapped to a netlist using specific library cells. Our approach combines network restructuring, AIG node reordering, dynamic power and longest-path delay estimation and optimisation, and finally technology mapping to a netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate our methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts used in ABC. Reductions of 23% in power and 15% in delay, with minimal overhead, are achieved compared with the best known ABC results. Our approach has also been applied to a number of processors with combinational and sequential components, and significant savings are achieved.
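
The acceptance rule used in such a simulated-annealing loop can be sketched as follows. This is a generic illustration under stated assumptions, not the thesis's ABC-based implementation; estimate_power, estimate_delay and random_neighbour are hypothetical callbacks standing in for the switching-activity, arrival-time and AIG-reordering machinery described above.

```python
import math
import random

def anneal(netlist, estimate_power, estimate_delay, random_neighbour,
           delay_limit, t0=1.0, cooling=0.95, steps=1000):
    """Simulated-annealing sketch: minimise estimated power subject to a
    delay constraint. The callbacks are hypothetical hooks supplied by the
    surrounding synthesis flow."""
    best = current = netlist
    t = t0
    for _ in range(steps):
        candidate = random_neighbour(current)        # e.g. one AIG reordering move
        if estimate_delay(candidate) > delay_limit:  # reject moves that break timing
            continue
        dp = estimate_power(candidate) - estimate_power(current)
        # Always accept improvements; occasionally accept worse moves early on
        # so the search can escape local minima.
        if dp < 0 or random.random() < math.exp(-dp / t):
            current = candidate
            if estimate_power(current) < estimate_power(best):
                best = current
        t *= cooling                                  # cooling schedule
    return best
```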

Relevance:

100.00%

Publisher:

Abstract:

This thesis explores the meaning-making practices of migrant and non-migrant children in relation to identities, race, belonging and childhood itself in their everyday lives and in the context of ‘normalizing’ discourses and spaces in Ireland. The relational, spatial and institutional contexts of children’s worlds are examined in the arenas of school, home, family, peer groups and consumer culture. The research develops a situated account of children’s complex subject positions, belongings and exclusions, as negotiated within discursive constructs, emerging in the ‘in-between’ spaces explored with other children and with adults. As a peripheral EU area both geographically and economically, Ireland has traditionally been a country of net emigration. This situation changed briefly in the late 1990s to early 2000s, sparking broad debate on Ireland’s perceived ‘new’ ethnic, cultural and linguistic diversity arising from the arrival of migrant people both from within and beyond the EU as workers and as asylum seekers, and drawing attention to issues of race, identity, equality and integration in Irish society. Based in a West of Ireland town where migrant children and children of migrants comprise very small minorities in classroom settings, this research engages with a particular demographic of children who have started primary school since these changes have occurred. It seeks to represent the complexities of the processes which constitute children’s subjectivities, and which also produce and reproduce race and childhood itself in this context. The roles of local, national and global spaces, relational networks and discursive currents, as they are experienced and negotiated by children, are explored, and the significance of embodied, sensory and affective processes is integrated into the analysis. Notions of the functions and rhetorics of play and playfulness (Sutton-Smith 1997) form a central thread that runs throughout the thesis, where play is both a feature of children’s cultural worlds and a site of resistance or ‘thinking otherwise’. The study seeks to examine how children actively participate in (re)producing definitions of both childhood and race arising in local, national and global spaces, demonstrating that while contestations of the boundaries of childhood discourses are contingently successful, race tends to be strongly reiterated, clinging to bodies and places and compromising belonging. In addition, it explores how children access belongings through agentic and imaginative practices with regard to peer and family relationships, particularly highlighting constructions of home, while also illustrating practices of excluding children positioned as unintelligible, including the role of silences in such situations. Finally, drawing on teachers’ understandings and on children’s playful micro-level negotiations of race, the study argues that assumptions of childhood innocence contribute to justifying depoliticised discourses of race in the early primary school years, and also tend to silence children’s own dialogues with this issue. Central throughout the thesis is an emphasis on the productive potentials of children’s marginal positioning in processes of transgressing definitional boundaries, including the generation of post-race conceptualisations that revealed the borders of race as performative and fluid.
It suggests that interrupting exclusionary raced identities in Irish primary schools requires engagement with children’s world-making practices and the multiple resources that inform their lives.

Relevance:

100.00%

Publisher:

Abstract:

In a landmark book published in 2000, the sociologist Danièle Hervieu-Léger defined religion as a chain of memory, by which she meant that within religious communities remembered traditions are transmitted with an overpowering authority from generation to generation. After analysing Hervieu-Léger’s sociological approach as overcoming the dichotomy between substantive and functional definitions, this article compares a ritual honouring the ancestors in which a medium becomes possessed by the senior elder’s ancestor spirit among the Shona of Zimbabwe with a cleansing ritual performed by a Celtic shaman in New Hampshire, USA. In both instances, despite different social and historical contexts, appeals are made to an authoritative tradition to legitimize the rituals performed. This lends support to the claim that the authoritative transmission of a remembered tradition, by exercising an overwhelming power over communities, even if the memory of such a tradition is merely postulated, identifies the necessary and essential component for any activity to be labelled “religious”.

Relevance:

100.00%

Publisher:

Abstract:

It is estimated that the quantity of digital data being transferred, processed or stored at any one time currently stands at 4.4 zettabytes (4.4 × 2^70 bytes), and this figure is expected to have grown by a factor of 10, to 44 zettabytes, by 2020. Exploiting this data is, and will remain, a significant challenge. At present there is the capacity to store 33% of the digital data in existence at any one time; by 2020 this capacity is expected to fall to 15%. These statistics suggest that, in the era of Big Data, the identification of important, exploitable data will need to be done in a timely manner. Systems for the monitoring and analysis of data, e.g. stock markets, smart grids and sensor networks, can be made up of massive numbers of individual components. These components can be geographically distributed yet may interact with one another via continuous data streams, which in turn may affect the state of the sender or receiver. This introduces a dynamic causality, which further complicates the overall system by introducing a temporal constraint that is difficult to accommodate. Practical approaches to realising such systems have led to a multiplicity of analysis techniques, each of which concentrates on specific characteristics of the system being analysed and treats these characteristics as the dominant component affecting the results being sought. This multiplicity of analysis techniques introduces another layer of heterogeneity, namely heterogeneity of approach, partitioning the field to the extent that results from one domain are difficult to exploit in another. The question asked is: can a generic solution for the monitoring and analysis of data be identified that accommodates temporal constraints, bridges the gap between expert knowledge and raw data, and enables data to be interpreted and exploited effectively and transparently? The approach proposed in this dissertation acquires, analyses and processes data in a manner that is free of the constraints of any particular analysis technique, while at the same time facilitating these techniques where appropriate. Constraints are applied by defining a workflow based on the production, interpretation and consumption of data. This supports the application of different analysis techniques to the same raw data without the danger of incorporating hidden bias. To illustrate and realise this approach, a software platform has been created that allows for the transparent analysis of data, combining analysis techniques with a maintainable record of provenance so that independent third-party analysis can be applied to verify any derived conclusions. To demonstrate these concepts, a complex real-world example involving the near-real-time capture and analysis of neurophysiological data from a neonatal intensive care unit (NICU) was chosen. A system was engineered to gather raw data, analyse that data using different analysis techniques, uncover information, incorporate that information into the system and curate the evolution of the discovered knowledge. The application domain was chosen for three reasons: firstly, because it is complex and no comprehensive solution exists; secondly, because it requires tight interaction with domain experts, thus requiring the handling of subjective knowledge and inference; and thirdly, given the dearth of neurophysiologists, because there is a real-world need to provide a solution for this domain.
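
A minimal sketch of the production-interpretation-consumption workflow idea, with provenance recorded at each step so that a third party can replay and verify the analysis. This is purely illustrative and uses hypothetical step functions and names; it is not the platform built in the dissertation.

```python
import hashlib
import json
import time

def run_step(name, func, data, provenance):
    """Apply one analysis step and append a provenance record describing
    what was run, on what input, and what it produced."""
    result = func(data)
    provenance.append({
        "step": name,
        "timestamp": time.time(),
        "input_digest": hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest(),
        "output_digest": hashlib.sha256(json.dumps(result, sort_keys=True).encode()).hexdigest(),
    })
    return result

# Hypothetical two-step pipeline over a raw data stream.
provenance = []
raw = [1.2, 3.4, 2.2, 5.1]
cleaned = run_step("clean", lambda xs: [x for x in xs if x < 5.0], raw, provenance)
summary = run_step("summarise", lambda xs: {"mean": sum(xs) / len(xs)}, cleaned, provenance)
print(summary, len(provenance))  # third parties can replay the steps and check the digests
```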

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVES: To compare the predictive performance and potential clinical usefulness of risk calculators of the European Randomized Study of Screening for Prostate Cancer (ERSPC RC) with and without information on prostate volume. METHODS: We studied 6 cohorts (5 European and 1 US) with a total of 15,300 men, all biopsied and with pre-biopsy TRUS measurements of prostate volume. Volume was categorized into 3 categories (25, 40, and 60 cc) to reflect the use of digital rectal examination (DRE) for volume assessment. Risks of prostate cancer were calculated according to an ERSPC DRE-based RC (including PSA, DRE, prior biopsy, and prostate volume) and a PSA + DRE model (including PSA, DRE, and prior biopsy). Missing data on prostate volume were completed by single imputation. Risk predictions were evaluated with respect to calibration (graphically), discrimination (area under the ROC curve, AUC), and clinical usefulness (net benefit, assessed graphically in decision curves). RESULTS: The AUCs of the ERSPC DRE-based RC ranged from 0.61 to 0.77 and were substantially larger than the AUCs of the model based on PSA + DRE alone (ranging from 0.56 to 0.72) in each of the 6 cohorts. The ERSPC DRE-based RC provided net benefit over performing a prostate biopsy on the basis of PSA and DRE outcome in five of the six cohorts. CONCLUSIONS: Identifying men at increased risk of having biopsy-detectable prostate cancer should consider multiple factors, including an estimate of prostate volume.
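
Net benefit at a risk threshold p_t is conventionally computed as TP/N - (FP/N) × p_t/(1 - p_t). The short sketch below (illustrative only; toy data, not from the study) shows how such a decision-curve point could be computed for a risk calculator's predictions.

```python
def net_benefit(y_true, risk, threshold):
    """Decision-curve net benefit at a given risk threshold:
    NB = TP/N - FP/N * (pt / (1 - pt)),
    i.e. true positives are credited and false positives are penalised
    by the odds of the threshold probability."""
    n = len(y_true)
    biopsy = [r >= threshold for r in risk]
    tp = sum(1 for b, y in zip(biopsy, y_true) if b and y == 1)
    fp = sum(1 for b, y in zip(biopsy, y_true) if b and y == 0)
    return tp / n - fp / n * (threshold / (1 - threshold))

# Toy example: predicted risks from a calculator vs. biopsy outcomes.
y = [1, 0, 0, 1, 0, 1, 0, 0]
risk = [0.42, 0.10, 0.35, 0.55, 0.22, 0.30, 0.08, 0.18]
print(net_benefit(y, risk, threshold=0.25))  # ~0.33 for this toy data
```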