957 resultados para 090604 Microelectronics and Integrated Circuits
Resumo:
Objective: To assess the indoor environment of two different types of dental practices with regard to VOCs, PM2.5, and ultrafine particulate concentrations, and to examine the relationship between specific dental activities and contaminant levels. Method: The indoor environments of two selected dental settings (a private practice and a community health center) were assessed with regard to VOC, PM2.5, and ultrafine particulate concentrations, as well as other indoor air quality parameters (CO2, CO, temperature, and relative humidity). The sampling duration was four working days for each dental practice. Continuous monitoring and integrated sampling methods were used, and the number of occupants and the frequency, type, and duration of dental procedures or activities were recorded. Measurements were compared to indoor air quality standards and guidelines. Results: The private practice had higher CO2, CO, and most VOC concentrations than the community health center, but the community health center had higher PM2.5 and ultrafine PM concentrations. Concentrations of p-dichlorobenzene and PM2.5 exceeded some guidelines. Outdoor concentrations greatly influenced indoor concentrations. There were no significant differences in contaminant levels between the operatory and the general area. Indoor concentrations during the working period were not consistently higher than during the nonworking period. Peaks in particulate matter concentration occurred during root canal and composite procedures.
Resumo:
The aim of this article is to present the book “Agroindustria, competitividad e integración: ¿una fórmula viable para Mendoza, Argentina?”, which emphasizes the web of relationships and flows generated by the province's different productive circuits and their impact on the organization of the neo-economic space. Based on the conclusions obtained, a proposal is developed, grounded in the structuring of an agro-industrial export model as a localized, vertically and horizontally integrated process. The Central and Southern Oases of Mendoza are chosen as study areas, in an attempt to mitigate the effects of the excessive concentration of the Northern Oasis and the progressive marginalization of other areas not organized as oases. The work is based on the doctoral thesis “Circuitos económicos urbanos y rurales: posibilidades de integración y diversificación”, published by Verlag Dr. Markus Hänsel-Hohenhausen, Frankfurt, 1998.
Resumo:
A joint mesocosm experiment took place in February/March 2013 in the Bay of Villefranche, France, as part of the European MedSeA project. Nine mesocosms (52 m**3) were deployed over a two-week period: six mesocosms at different pCO2 levels plus three control mesocosms (about 450 µatm) were used in order to cover the range of pCO2 anticipated for the end of the present century. During this experiment, the potential effects of these perturbations on chemistry and on planktonic community composition and dynamics, including eukaryotic and prokaryotic species composition, primary production, nutrient and carbon utilization, calcification, diazotrophic nitrogen fixation, organic matter exudation and composition, micro-layer composition, and biogas production, were studied by a group of about 25 scientists from 8 institutes and 6 countries. This is one of the first mesocosm experiments conducted in oligotrophic waters. A blog dedicated to this experiment can be viewed at: http://medseavillefranche2013.obs-vlfr.fr.
Resumo:
A joint mesocosm experiment took place in June/July 2012 in Corsica (Bay of Calvi, Stareso station; http://www.stareso.com/) as part of the European MedSeA project. Nine mesocosms (52 m**3) were deployed over a 20-day period: six mesocosms at different pCO2 levels plus three control mesocosms (about 450 µatm) were used in order to cover the range of pCO2 anticipated for the end of the present century. During this experiment, the potential effects of these perturbations on chemistry and on planktonic community composition and dynamics, including eukaryotic and prokaryotic species composition, primary production, nutrient and carbon utilization, calcification, diazotrophic nitrogen fixation, organic matter exudation and composition, micro-layer composition, and biogas production, were studied by a group of about 25 scientists from 8 institutes and 6 countries. This is one of the first mesocosm experiments conducted in oligotrophic waters. A blog dedicated to this experiment can be viewed at: http://medseastareso2012.wordpress.com/.
Resumo:
Sediments at the southern summit of Hydrate Ridge display two distinct modes of gas hydrate occurrence. The dominant mode is associated with active venting of gas exsolved from the accretionary prism and leads to high concentrations (15%-40% of pore space) of gas hydrate in seafloor or near-surface sediments at and around the topographic summit of southern Hydrate Ridge. These near-surface gas hydrates are mainly composed of previously buried microbial methane but also contain a significant (10%-15%) component of thermogenic hydrocarbons and are overprinted with microbial methane currently being generated in shallow sediments. Focused migration pathways with high gas saturation (>65%) abutting the base of gas hydrate stability create phase equilibrium conditions that permit the flow of a gas phase through the gas hydrate stability zone. Gas seepage at the summit supports rapid growth of gas hydrates and vigorous anaerobic methane oxidation. The other mode of gas hydrate occurrence, found in slope basins and on the saddle north of the southern summit, consists of lower average concentrations (0.5%-5%) at greater depths (30-200 meters below seafloor [mbsf]) resulting from the buildup of in situ-generated dissolved microbial methane that reaches saturation levels with respect to gas hydrate stability at 30-50 mbsf. Net rates of sulfate reduction in the slope basin and ridge saddle sites estimated from curve fitting of concentration gradients are 2-4 mmol/m**3/yr, and integrated net rates are 20-50 mmol/m**2/yr. Modeled microbial methane production rates are initially 1.5 mmol/m**3/yr in sediments just beneath the sulfate reduction zone but rapidly decrease to rates of <0.1 mmol/m**3/yr at depths >100 mbsf. Integrated net rates of methane production in sediments away from the southern summit of Hydrate Ridge are 25-80 mmol/m**2/yr. Anaerobic methane oxidation is minor or absent in cored sediments away from the summit of southern Hydrate Ridge.
Ethane-enriched Structure I gas hydrate solids are buried more rapidly than ethane-depleted dissolved gas in the pore water because of advection from compaction. With subsidence beneath the gas hydrate stability zone, the ethane (mainly of low-temperature thermogenic origin) is released back into the dissolved and free gas phases, producing a discontinuous decrease in the C1/C2 vs. depth trend. These ethane fractionation effects may be useful for recognizing and estimating levels of gas hydrate occurrence in marine sediments.
Resumo:
The Antarctic Peninsula has been identified as a region of rapid ongoing climate change, with impacts on the cryosphere. Knowledge of glacier changes and of the freshwater budgets resulting from intensified glacier melt is an important boundary condition for many biological and integrated earth system science approaches. We provide a case study on glacier and mass balance changes for the ice cap of King George Island. The area loss between 2000 and 2008 amounted to about 20 km**2 (about 1.6% of the island area) and is comparable to glacier retreat rates observed in previous years. Measured net accumulation rates for two years (2007 and 2008) show strong interannual variability, with maximum net accumulation rates of 4950 mm w.e./a and 3184 mm w.e./a, respectively. These net accumulation rates are at least 4 times higher than mean values (1926-95) reported from an ice core. An elevation-dependent precipitation increase of 343 mm w.e./a (2007) and 432 mm w.e./a (2008) per 100 m of elevation gain was observed. Despite these rather high net accumulation rates on the main ice cap, consistent surface lowering was observed at elevations below 270 m above ellipsoid over an 11-year period. These DGPS records reveal a linear dependence of surface lowering on altitude, with a maximum annual surface lowering rate of 1.44 m/a at 40 m and -0.20 m/a at 270 m above ellipsoid. These results agree well with observations by other authors and with surface lowering rates derived from the ICESat laser altimeter. Assuming that the climate conditions of the past 11 years continue, the small ice cap of Bellingshausen Dome will disappear in about 285 years.
Resumo:
Measurements of solar radiation over and under sea ice were performed at various stations in the Arctic Ocean during the Polarstern cruise PS92 (TRANSSIZ) between 19 May and 30 June 2015. All radiation measurements were performed with Ramses spectral radiometers (Trios, Rastede, Germany). All data are given in full spectral resolution, interpolated to 1.0 nm, and integrated over the entire wavelength range (broadband, total: 320 to 950 nm). Two sensors were mounted on a Remotely Operated Vehicle (ROV) and one radiometer was installed on the sea ice for surface reference measurements (solar irradiance). On the ROV, one irradiance sensor (cosine collector) for energy budget calculations and one radiance sensor (9° opening angle) for capturing high-resolution spatial variability were installed. Along with the radiation measurements, ROV positions were obtained from acoustic USBL positioning, and vehicle depth, distance to the ice, and attitude were recorded. All times are given in UTC.
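The broadband values mentioned in the abstract come from integrating each 1 nm-resolved spectrum over 320-950 nm. A minimal sketch of such an integration (trapezoidal rule on a synthetic flat spectrum; the function name `broadband` and the sample data are illustrative assumptions, not the processing pipeline actually used for this data set):

```python
# Hedged sketch: broadband integration of a 1 nm-resolved spectrum
# over 320-950 nm, as described for the Ramses radiometer data.
def broadband(wavelengths, values, lo=320, hi=950):
    # Trapezoidal rule over the [lo, hi] wavelength range.
    total = 0.0
    for i in range(len(wavelengths) - 1):
        w0, w1 = wavelengths[i], wavelengths[i + 1]
        if w0 >= lo and w1 <= hi:
            total += 0.5 * (values[i] + values[i + 1]) * (w1 - w0)
    return total

wl = list(range(320, 951))   # 1 nm grid, 320..950 nm
flat = [1.0] * len(wl)       # synthetic flat spectrum, 1 unit per nm
print(broadband(wl, flat))   # 630.0: 630 one-nm intervals of unit height
```

Real spectral irradiance data would replace the synthetic `flat` list; the 1.0 nm interpolation mentioned in the abstract is what makes a fixed-step quadrature like this applicable.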
Resumo:
A rural road in Lao PDR is defined as a road connecting a village to a main road, leading villagers to markets and giving access to other economic and social service facilities. However, because most rural people are accustomed to subsistence farming, connecting roads have seemed less important to them, as their farm produce is mainly for their own consumption rather than for markets. Since the introduction and implementation of the New Economic Mechanism (NEM) in 1986, many rural villages have gradually developed and become integrated into the market system, and people have significantly changed their livelihoods for the better. This progress has contributed significantly to improving people's incomes, raising living standards, and reducing poverty. The paper aims to illustrate the significance of rural roads as connecting roads from villages to markets, i.e., as a market-access approach for farm produce. It also demonstrates through which approaches rural farmers can improve their incomes, develop their farming systems, raise their living standards, and reduce poverty.
Resumo:
Linear regression is a technique widely used in digital signal processing. It consists of finding the linear function that best fits a given set of samples. This paper proposes different hardware architectures for the implementation of the linear regression method on FPGAs, especially targeting area-restricted systems. The proposed scheme saves area at the cost of constraining the input signal lengths to a set of fixed values. We have implemented the proposed scheme in an Automatic Modulation Classifier, meeting the hard real-time constraints of this kind of system.
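The core computation the abstract refers to can be illustrated with a short sketch (a generic closed-form least-squares line fit, not the paper's FPGA architecture; the function name `linreg` and the sample data are assumptions for illustration):

```python
# Toy sketch: closed-form least-squares fit of a line y = a*x + b
# to N samples -- the computation a hardware implementation realizes.
def linreg(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    b = (sy - a * sx) / n                          # intercept
    return a, b

slope, intercept = linreg([0, 1, 2, 3], [1, 3, 5, 7])
print(slope, intercept)  # exact fit: slope 2.0, intercept 1.0
```

When the x values are the fixed sample indices 0..N-1, the terms sx, sxx, and the divisors become constants for each supported N, which is the kind of simplification that lets a hardware design trade generality (fixed input lengths) for area.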
Resumo:
In this paper, an implementation of a Wake-up Radio (WuR) with addressing capabilities, based on an ultra-low-power FPGA, is proposed for ultra-low-energy Wireless Sensor Networks (WSNs). The main goal is to evaluate the use of very-low-power configurable devices for communication frame decoding and communication data control, taking advantage of their speed, flexibility, and low power consumption instead of relying on traditional approaches based on ASICs or microcontrollers.
Resumo:
We propose an analysis for detecting procedures and goals that are deterministic (i.e., that produce at most one solution, at most once), or predicates whose clause tests are mutually exclusive (which implies that at most one of their clauses will succeed) even if they are not deterministic. The analysis takes advantage of the pruning operator in order to improve the detection of mutual exclusion and determinacy. It also supports arithmetic equations and disequations, as well as equations and disequations on terms, for which we give a complete satisfiability testing algorithm w.r.t. available type information. Information about determinacy can be used for program debugging and optimization, resource consumption and granularity control, abstraction-carrying code, etc. We have implemented the analysis and integrated it in the CiaoPP system, which also automatically infers the mode and type information that our analysis takes as input. Experiments performed on this implementation show that the analysis is fairly accurate and efficient.
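The notion of mutually exclusive clause tests can be illustrated with a toy check (brute force over a finite integer domain, an assumption made for illustration; the paper instead gives a complete satisfiability testing algorithm w.r.t. type information):

```python
# Toy illustration, not the paper's algorithm: two clause tests are
# mutually exclusive iff their conjunction is unsatisfiable. Here this
# is checked by exhaustive enumeration over a small integer domain.
def mutually_exclusive(test1, test2, domain):
    return not any(test1(x) and test2(x) for x in domain)

# Guards of two clauses of a predicate p(X): X =< 0 and X > 0.
# No value satisfies both, so at most one clause can succeed.
print(mutually_exclusive(lambda x: x <= 0, lambda x: x > 0,
                         range(-100, 101)))  # True

# Overlapping guards X < 5 and X > 0 are not mutually exclusive.
print(mutually_exclusive(lambda x: x < 5, lambda x: x > 0,
                         range(-100, 101)))  # False
```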
Resumo:
Current nanometer technologies are subject to several adverse effects that seriously impact the yield and performance of integrated circuits, such as within-die parameter uncertainty, varying workload conditions, aging, and temperature. Monitoring, calibration, and dynamic adaptation have emerged as promising solutions to these issues, and many kinds of monitors have been presented recently. In this scenario, where systems with hundreds of monitors of different types have been proposed, the need for lightweight monitoring networks has become essential. In this work we present a lightweight network architecture based on sharing the digitization resources of nodes that require time-to-digital conversion. Our proposal employs a single-wire interface, shared among all the nodes in the network, and quantizes the time domain to perform access multiplexing and transmit the information. It achieves a 16% improvement in area and power consumption compared to traditional approaches.
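The time-quantized, single-wire idea can be caricatured in software (a toy model under assumptions, not the paper's circuit): each node asserts the shared line in the slot given by its quantized time measurement, so pulse positions both arbitrate access to the wire and carry the measured values.

```python
# Toy model of a shared single-wire bus with time-slot quantization.
# Node names and readings are hypothetical.
def shared_wire(readings, n_slots):
    wire = [0] * n_slots
    for node_id, t in readings.items():
        slot = min(int(t), n_slots - 1)  # time-to-digital quantization
        wire[slot] = 1                   # wired-OR pulse on the one line
    return wire

bus = shared_wire({"sensorA": 3.7, "sensorB": 7.2}, 10)
print(bus)  # pulses at slots 3 and 7
```

Collisions (two nodes quantizing to the same slot) and identifying which node produced which pulse are not modeled here; a real architecture has to resolve both.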
Resumo:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its best-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Resumo:
Effective static analyses have been proposed which infer bounds on the number of resolutions. These have the advantage of being independent of the platform on which the programs are executed, and they have been shown to be useful in a number of applications, such as granularity control in parallel execution. On the other hand, in distributed computation scenarios where platforms with different capabilities come into play, it is necessary to express costs in metrics that include the characteristics of the platform. In particular, it is especially interesting to be able to infer upper and lower bounds on actual execution times. With this objective in mind, we propose an approach which combines compile-time analysis for cost bounds with a one-time profiling of a given platform in order to determine the values of certain parameters for that platform. These parameters calibrate a cost model which, from then on, is able to statically compute time-bound functions for procedures and to predict, with a significant degree of accuracy, the execution times of such procedures on that concrete platform. The approach has been implemented and integrated in the CiaoPP system.
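The calibration step described in the abstract can be sketched as follows (an assumption-laden toy: a single platform parameter k, in seconds per resolution, fitted from hypothetical profiling runs and then applied to a statically inferred resolution bound; the actual cost model calibrates several platform parameters):

```python
# One-time profiling: fit t = k * r through the origin by least squares,
# where r is a resolution count and t a measured execution time.
def calibrate(profiled):  # profiled: [(resolution_count, seconds), ...]
    num = sum(r * t for r, t in profiled)
    den = sum(r * r for r, _ in profiled)
    return num / den

# From then on, a statically inferred resolution-bound function for a
# procedure yields a platform-specific time bound without re-profiling.
def predicted_time(k, resolution_bound, n):
    return k * resolution_bound(n)

k = calibrate([(100, 0.2), (200, 0.4), (400, 0.8)])  # synthetic data
# e.g. a statically inferred quadratic bound on resolutions
print(predicted_time(k, lambda n: n * n, 50))  # about 5.0 s here
```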
Resumo:
We propose an analysis for detecting procedures and goals that are deterministic (i.e., that produce at most one solution), or predicates whose clause tests are mutually exclusive (which implies that at most one of their clauses will succeed) even if they are not deterministic (because they call other predicates that can produce more than one solution). Applications of such determinacy information include detecting programming errors, performing certain high-level program transformations for improving search efficiency, optimizing low-level code generation and parallel execution, and estimating tighter upper bounds on the computational costs of goals and data sizes, which can be used for program debugging, resource consumption and granularity control, etc. We have implemented the analysis and integrated it in the CiaoPP system, which also automatically infers the mode and type information that our analysis takes as input. Experiments performed on this implementation show that the analysis is fairly accurate and efficient.