964 results for Large detector-systems performance


Relevance:

40.00%

Abstract:

Objective: The aim of this study was to evaluate the two-year clinical performance of Class III, IV, and V composite restorations using a two-step etch-and-rinse adhesive system (2-ERA) and three one-step self-etching adhesive systems (1-SEAs). Material and Methods: Two hundred Class III, IV, and V composite restorations were placed in 50 patients. Each patient received four composite restorations (Amaris, Voco), which were bonded with one of three 1-SEAs (Futurabond M, Voco; Clearfil S3 Bond, Kuraray; Optibond All-in-One, Kerr) or one 2-ERA (Adper Single Bond 2, 3M ESPE). The four adhesive systems were evaluated at baseline and after 24 months using the following criteria: restoration retention, marginal integrity, marginal discoloration, caries occurrence, postoperative sensitivity, and preservation of tooth vitality. After two years, 162 restorations were evaluated in 41 patients. Data were analyzed using the chi-square test (p<0.05). Results: There were no statistically significant differences between the 2-ERA and the 1-SEAs for any of the evaluated parameters (p>0.05). Conclusion: The 1-SEAs showed good clinical performance at the end of 24 months.
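
The comparison reported above reduces to a chi-square test on categorical clinical scores (e.g., retained vs. lost restorations per adhesive system). A minimal sketch of such an analysis, with purely hypothetical counts rather than the study's data:

    from scipy.stats import chi2_contingency

    # Hypothetical 24-month counts (retained, lost) per adhesive system;
    # illustrative numbers only, not the study's data.
    counts = [
        [39, 2],  # Adper Single Bond 2 (2-ERA)
        [40, 1],  # Futurabond M (1-SEA)
        [38, 3],  # Clearfil S3 Bond (1-SEA)
        [40, 1],  # Optibond All-in-One (1-SEA)
    ]

    chi2, p, dof, expected = chi2_contingency(counts)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
    # p > 0.05 would indicate no significant difference among the
    # systems, the pattern the abstract reports.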

Relevance:

40.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

40.00%

Abstract:

Maize demand for food, livestock feed, and biofuel is expected to increase substantially. The Western U.S. Corn Belt accounts for 23% of U.S. maize production, and irrigated maize accounts for 43% of maize land area and 58% of total production in this region. Estimates of the most sensitive parameters governing the performance of maize systems in the region (yield potential [YP], water-limited yield potential [YP-W], the yield gap between actual yield and YP, and resource-use efficiency) are lacking. A simulation model was used to quantify YP under irrigated and rainfed conditions based on weather data, soil properties, and crop management at 18 locations. In a separate study, five years of soil water data measured in central Nebraska were used to analyze soil water recharge during the non-growing season, because soil water content at sowing is a critical component of the water supply available to summer crops. On-farm data, including yield, irrigation, and nitrogen (N) rate for 777 field-years, were used to quantify the size of yield gaps and evaluate resource-use efficiency. Simulated average YP and YP-W were 14.4 and 8.3 Mg ha-1, respectively. Geospatial variation in YP was associated with solar radiation and temperature during the post-anthesis phase, while variation in water-limited yield was linked to the longitudinal variation in seasonal rainfall and evaporative demand. Analysis of soil water recharge indicates that 80% of the variation in soil water content at sowing can be explained by precipitation during the non-growing season and residual soil water at the end of the previous growing season. A linear relationship between YP-W and water supply (slope: 19.3 kg ha-1 mm-1; x-intercept: 100 mm) can be used as a benchmark to diagnose and improve farmers' water productivity (WP; kg grain per unit of water supply). Evaluation of data from farmers' fields provides proof of concept and helps identify management constraints on high levels of productivity and resource-use efficiency. On average, actual yields of irrigated maize systems were 11% below YP. WP and N-fertilizer use efficiency (NUE) were high (14 kg grain mm-1 water supply and 71 kg grain kg-1 N fertilizer) despite the application of large amounts of irrigation water and N fertilizer. While there is limited scope for substantial increases in actual average yields, WP and NUE can be further increased by: (1) switching from surface to pivot irrigation systems, (2) using conservation instead of conventional tillage in soybean-maize rotations, (3) implementing irrigation schedules based on crop water requirements, and (4) improving N fertilizer management.
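
The benchmark described above is a simple linear function of seasonal water supply. A minimal sketch of how a field's water productivity could be checked against it (the field numbers below are hypothetical):

    # Water-productivity benchmark from the abstract: water-limited yield
    # potential rises linearly with seasonal water supply above a 100 mm
    # x-intercept, with a slope of 19.3 kg grain ha-1 per mm.
    SLOPE = 19.3         # kg grain ha-1 mm-1
    X_INTERCEPT = 100.0  # mm of water supply producing zero yield

    def benchmark_yield(water_supply_mm):
        """Water-limited yield potential (kg ha-1) for a water supply."""
        return max(0.0, SLOPE * (water_supply_mm - X_INTERCEPT))

    def water_productivity(yield_kg_ha, water_supply_mm):
        """Actual water productivity (kg grain per mm of water supply)."""
        return yield_kg_ha / water_supply_mm

    # Hypothetical field: 700 mm water supply, 11,000 kg ha-1 actual yield.
    supply, actual = 700.0, 11_000.0
    print(f"benchmark yield: {benchmark_yield(supply):,.0f} kg ha-1")
    print(f"actual WP: {water_productivity(actual, supply):.1f} kg mm-1")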

Relevance:

40.00%

Abstract:

The web services (WS) technology provides a comprehensive solution for representing, discovering, and invoking services in a wide variety of environments, including Service Oriented Architectures (SOA) and grid computing systems. At the core of WS technology lie a number of XML-based standards, such as the Simple Object Access Protocol (SOAP), that have successfully ensured WS extensibility, transparency, and interoperability. Nonetheless, there is an increasing demand to enhance WS performance, which is severely impaired by XML's verbosity. SOAP communications produce considerable network traffic, making them unfit for distributed, loosely coupled, and heterogeneous computing environments such as the open Internet. They also introduce higher latency and processing delays than other technologies, such as Java RMI and CORBA. WS research has recently focused on SOAP performance enhancement. Many approaches build on the observation that SOAP message exchange usually involves highly similar messages: those created by the same implementation usually have the same structure, and those sent from a server to multiple clients tend to show similarities in structure and content. Similarity evaluation and differential encoding have thus emerged as SOAP performance enhancement techniques. The main idea is to identify the common parts of SOAP messages, which need be processed only once, avoiding a large amount of overhead. Other approaches investigate nontraditional processor architectures, including micro- and macro-level parallel processing solutions, so as to further increase the processing rates of SOAP/XML software toolkits. This survey paper provides a concise yet comprehensive review of the research efforts aimed at SOAP performance enhancement. A unified view of the problem is provided, covering almost every phase of SOAP processing: message parsing, serialization, deserialization, compression, multicasting, security evaluation, and data/instruction-level processing.
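
The differential-encoding idea can be illustrated with a toy round trip: both endpoints cache a reference message, and only the delta for each new, structurally similar message is exchanged. A minimal sketch using Python's difflib (real systems would use a compact XML-aware or binary diff format; the messages below are invented):

    import difflib

    # Reference SOAP message both endpoints cache after the first exchange.
    reference = [
        "<soap:Envelope><soap:Body>",
        "<getQuote><symbol>ACME</symbol></getQuote>",
        "</soap:Body></soap:Envelope>",
    ]
    # New message: same structure, different payload.
    message = [
        "<soap:Envelope><soap:Body>",
        "<getQuote><symbol>EXMPL</symbol></getQuote>",
        "</soap:Body></soap:Envelope>",
    ]

    # Sender side: compute a delta; lines common to both messages are
    # marked as unchanged and, in a compact encoding, need not be re-sent
    # or re-parsed.
    delta = list(difflib.ndiff(reference, message))

    # Receiver side: rebuild the new message from the delta.
    restored = list(difflib.restore(delta, 2))
    assert restored == message
    print("\n".join(delta))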

Relevance:

40.00%

Abstract:

This study aimed to analyze the relationship between slow- and fast-alpha asymmetry within the frontal cortex and the planning, execution, and voluntary control of saccadic eye movements (SEM). Quantitative electroencephalography (qEEG) was recorded using a 20-channel EEG system in 12 healthy participants performing a fixed (i.e., memory-driven) and a random (i.e., stimulus-driven) SEM condition. We found main effects of SEM condition on slow- and fast-alpha asymmetry at electrodes F3-F4, which are located over premotor cortex, specifically a negative asymmetry between conditions. When analyzing electrodes F7-F8, which are located over prefrontal cortex, we found a main effect of condition on slow-alpha asymmetry, particularly a positive asymmetry between conditions. In conclusion, the present approach supports the association of the slow- and fast-alpha bands with the planning and preparation of SEM, and the specific role of these sub-bands in both the attention network and the coordination and integration of sensory information with an (oculo)motor response.
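
Frontal alpha asymmetry in such studies is typically computed as the difference of log-transformed alpha power between homologous right and left electrodes (e.g., F4 vs. F3). A minimal sketch of that computation; the 8-10 Hz "slow-alpha" band edges and the log-difference form are common conventions assumed here, not taken from the paper:

    import numpy as np

    def band_power(signal, fs, lo, hi):
        """Mean power of `signal` within the [lo, hi] Hz band via the FFT."""
        freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
        psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
        mask = (freqs >= lo) & (freqs <= hi)
        return psd[mask].mean()

    def alpha_asymmetry(left, right, fs, lo=8.0, hi=10.0):
        """ln(right) - ln(left) band power; positive = more right power."""
        return (np.log(band_power(right, fs, lo, hi))
                - np.log(band_power(left, fs, lo, hi)))

    # Hypothetical one-second epochs from F3 (left) and F4 (right), 256 Hz.
    rng = np.random.default_rng(0)
    fs, t = 256.0, np.arange(256) / 256.0
    f3 = np.sin(2 * np.pi * 9 * t) + 0.5 * rng.standard_normal(t.size)
    f4 = 1.2 * np.sin(2 * np.pi * 9 * t) + 0.5 * rng.standard_normal(t.size)
    print(f"slow-alpha asymmetry (F4 vs F3): {alpha_asymmetry(f3, f4, fs):.3f}")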

Relevance:

40.00%

Abstract:

In this paper, we perform a thorough analysis of a spectral phase-encoded time-spreading optical code division multiple access (SPECTS-OCDMA) system based on Walsh-Hadamard (W-H) codes, aiming not only to find optimal code-set selections but also to assess the system's loss of security due to crosstalk. We prove that an inadequate choice of codes can make the crosstalk between active users large enough that data from the user of interest can be detected by another user. The proposed algorithm for code optimization targets code sets that produce the minimum bit error rate (BER) among all codes for a specific number of simultaneous users. This methodology allows us to find optimal code sets for any OCDMA system, regardless of the code family used and the number of active users. This procedure is crucial for circumventing the unexpected lack of security due to crosstalk. We also show that a SPECTS-OCDMA system based on W-H 32 (64) fundamentally limits the number of simultaneous users to 4 (8) with no security violation due to crosstalk. More importantly, we prove that only a small fraction of the available code sets is actually immune to crosstalk with acceptable BER (<10^-9): approximately 0.5% for W-H 32 with four simultaneous users, and about 1×10^-4% for W-H 64 with eight simultaneous users.
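
The optimization described above has a simple structure: enumerate candidate code subsets drawn from the W-H family and keep the set with the lowest worst-case BER. The sketch below shows that scaffold only; the BER evaluation itself (spectral phase encoding, time spreading, detection) is system-specific and is left as an explicit placeholder:

    from itertools import combinations
    import numpy as np

    def hadamard(n):
        """Sylvester construction of an n x n Hadamard matrix (n = 2^k)."""
        h = np.array([[1]])
        while h.shape[0] < n:
            h = np.block([[h, h], [h, -h]])
        return h

    def simulate_ber(code_set):
        """Placeholder for a full SPECTS-OCDMA link simulation returning
        the worst per-user BER for this code set; the real evaluation is
        system-specific and not reproduced here."""
        raise NotImplementedError

    def best_code_set(n_codes, n_users):
        """Exhaustively score candidate subsets and keep the one with
        minimum worst-case BER -- the search structure the paper uses.
        Row 0 (all +1, no phase structure) is excluded by assumption.
        Exhaustive search is feasible for W-H 32 with 4 users
        (C(31, 4) = 31,465 subsets)."""
        codes = hadamard(n_codes)
        best_idx, best_ber = None, float("inf")
        for idx in combinations(range(1, n_codes), n_users):
            ber = simulate_ber(codes[list(idx)])
            if ber < best_ber:
                best_idx, best_ber = idx, ber
        return best_idx, best_ber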

Relevance:

40.00%

Abstract:

Background: Post-rest contraction (PRC) of cardiac muscle provides indirect information about intracellular calcium handling. Objective: Our aim was to study the behavior of PRC, and its underlying mechanisms, in rats with myocardial infarction. Methods: Six weeks after coronary occlusion, the contractility of papillary muscles (PM) obtained from sham-operated rats (C, n = 17) and from rats with moderate (MMI, n = 10) or large (LMI, n = 14) infarction was evaluated following rest intervals of 10 to 60 seconds, before and after incubation with either lithium chloride (Li+, substituting for sodium chloride) or ryanodine (Ry). Protein expression of SR Ca2+-ATPase (SERCA2), the Na+/Ca2+ exchanger (NCX), phospholamban (PLB), and phospho-Ser16-PLB was analyzed by Western blotting. Results: MMI muscles exhibited reduced PRC potentiation compared to C. In contrast to the normal potentiation seen in C, post-rest decay of force was observed in LMI muscles. In addition, Ry blocked the PRC decay or potentiation observed in LMI and C, and Li+ inhibited NCX and converted PRC decay to potentiation in LMI. Although MMI and LMI presented decreased SERCA2 (72 +/- 7% and 47 +/- 9% of control, respectively) and phospho-Ser16-PLB (75 +/- 5% and 46 +/- 11%, respectively) protein expression, overexpression of NCX (175 +/- 20%) was observed only in LMI muscles. Conclusion: Our results show, for the first time, that myocardial remodeling after MI in rats may change the regular potentiation to post-rest decay by affecting myocyte Ca2+ handling proteins. (Arq Bras Cardiol 2012;98(3):243-251)

Relevance:

40.00%

Abstract:

Current scientific applications produce large amounts of data, whose processing, handling, and analysis require large-scale computing infrastructures such as clusters and grids. In this area, studies aim at improving the performance of data-intensive applications by optimizing data accesses. To achieve this goal, distributed storage systems have adopted techniques such as data replication, migration, distribution, and access parallelism. The main drawback of those studies, however, is that they do not take application behavior into account when optimizing data access. This limitation motivated this paper, which applies strategies to support the online prediction of application behavior in order to optimize data access operations on distributed systems, without requiring any information on past executions. To accomplish this goal, the approach organizes application behaviors as time series and then analyzes and classifies those series according to their properties. Based on these properties, the approach selects modeling techniques to represent the series and perform predictions, which are later used to optimize data access operations. This new approach was implemented and evaluated using the OptorSim simulator, sponsored by the LHC-CERN project and widely employed by the scientific community. Experiments confirm that the new approach reduces application execution time by about 50 percent, especially when handling large amounts of data.
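
The core loop of such an approach can be pictured as: treat an application's data-access volume per interval as a time series, fit a model online, and predict the next accesses so the storage layer can prefetch or replicate ahead of demand. A minimal sketch with a least-squares autoregressive predictor (the paper's classification and model-selection machinery is richer; the series below is synthetic):

    import numpy as np

    def fit_ar(series, order=3):
        """Least-squares fit of an AR(order) model: x[t] ~ sum(w[k] * x[t-k])."""
        rows = [series[t - order:t] for t in range(order, series.size)]
        X, y = np.array(rows), series[order:]
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        return w

    def predict_next(series, w):
        """One-step-ahead prediction from the last `len(w)` observations."""
        return float(series[-w.size:] @ w)

    # Hypothetical series: bytes read per interval by an application with
    # a roughly periodic access pattern plus noise.
    t = np.arange(200)
    accesses = (50 + 20 * np.sin(2 * np.pi * t / 25)
                + np.random.default_rng(1).normal(0, 2, t.size))

    w = fit_ar(accesses, order=5)
    print(f"predicted next-interval volume: {predict_next(accesses, w):.1f}")
    # A storage layer could use such predictions to prefetch or replicate
    # data ahead of the application's next burst of reads.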

Relevance:

40.00%

Abstract:

Gender recognition has achieved impressive results based on facial appearance in controlled datasets; its application to in-the-wild and large datasets is still a challenging task for researchers. In this paper, we make use of classical techniques and analyze their performance under controlled and uncontrolled conditions with the LFW and MORPH datasets. For both sets, the benchmarking protocol follows the 5-fold cross-validation proposed by the BEFIT challenge.
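
As an illustration of the 5-fold protocol, a minimal scikit-learn sketch follows; the random features and the linear-SVM pipeline are generic stand-ins, not the paper's actual descriptors or classifiers:

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Stand-ins for face descriptors and gender labels; a real run would
    # extract features from the LFW or MORPH images.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 128))   # 400 faces, 128-d descriptors
    y = rng.integers(0, 2, size=400)  # 0/1 gender labels

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold protocol, as in BEFIT
    print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")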

Relevance:

40.00%

Abstract:

During the last decade, advances in sensor design and improved base materials have pushed the radiation hardness of current silicon detector technology to impressive levels, which should allow operation of the tracking systems of the Large Hadron Collider (LHC) experiments at nominal luminosity (10^34 cm^-2 s^-1) for about 10 years. However, current silicon detectors are unable to cope with the far harsher radiation environment foreseen for future luminosity upgrades. Silicon carbide (SiC), which has recently been recognized as potentially radiation hard, is now being studied. In this work, the effect of high-energy neutron irradiation on 4H-SiC particle detectors was analyzed. Schottky and junction particle detectors were irradiated with 1 MeV neutrons up to a fluence of 10^16 cm^-2. It is well known that the degradation of detectors under irradiation, independently of the structure used for their realization, is caused by lattice defects, such as the creation of point-like defects, dopant deactivation, and dead-layer formation, and that a crucial aspect for understanding the defect kinetics at a microscopic level is the correct identification of the crystal defects in terms of their electrical activity. To clarify the defect kinetics, thermal transient spectroscopy (DLTS and PICTS) analyses of samples irradiated at increasing fluences were carried out. The defect evolution was correlated with the transport properties of the irradiated detectors, always in comparison with the un-irradiated ones. The charge collection efficiency degradation of Schottky detectors induced by neutron irradiation was related to the increasing concentration of defects as a function of the neutron fluence.
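
The link between trap concentration and charge collection efficiency (CCE) can be sketched with the single-carrier Hecht relation, taking the trapping rate to grow linearly with fluence. All parameter values below are hypothetical placeholders chosen only to make the qualitative trend visible, not measured 4H-SiC constants:

    import numpy as np

    def hecht_cce(drift_len, thickness):
        """Single-carrier Hecht relation: CCE = (l/d) * (1 - exp(-d/l))."""
        ratio = drift_len / thickness
        return ratio * (1.0 - np.exp(-1.0 / ratio))

    # Illustrative (assumed) parameters: trapping shortens the carrier
    # lifetime linearly with fluence, 1/tau = 1/tau0 + k * phi.
    tau0, k_damage = 1e-7, 4e-7         # s, cm^2 s^-1 (hypothetical)
    mu, field, d = 800.0, 1e4, 50e-4    # cm^2/Vs, V/cm, cm (50 um)

    for phi in (0.0, 1e14, 1e15, 1e16):  # 1 MeV neutron fluence, cm^-2
        tau = 1.0 / (1.0 / tau0 + k_damage * phi)
        drift_len = mu * tau * field     # mean drift length l = mu*tau*E
        print(f"phi = {phi:.0e} cm^-2 -> CCE = {hecht_cce(drift_len, d):.2f}")
    # The printed CCE falls with fluence, mirroring the degradation the
    # abstract attributes to the increasing defect concentration.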

Relevance:

40.00%

Abstract:

The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called system-on-chip (SoC) or multi-processor system-on-chip (MPSoC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. Networks-on-chip (NoCs) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet-switching paradigms they involve are also of great help in minimizing wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions:

• The design of the NoC architecture needs to strike the best tradeoff among performance, features, and the tight area and power constraints of the on-chip domain.

• Simulation and verification infrastructure must be put in place to explore, validate, and optimize NoC performance.

• NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.

• Even more so given their global, distributed nature, it is essential to evaluate the physical implementation of NoCs, to assess their suitability for next-generation designs and their area and power costs.

This dissertation focuses on all of the above points by describing a NoC architectural implementation called ×pipes; a NoC simulation environment within a cycle-accurate MPSoC emulator called MPARM; and a NoC design flow consisting of a front-end tool for optimal NoC instantiation, called SunFloor, and a set of back-end facilities for the study of NoC physical implementations. This dissertation proves the viability of NoCs for current and upcoming designs by outlining their advantages (along with a few tradeoffs) and by providing a full NoC implementation framework. It also presents some examples of additional extensions of NoCs, allowing e.g. for increased fault tolerance, and outlines where NoCs may find further application scenarios, such as in stacked chips.
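
As a concrete flavor of the packet-switching paradigm mentioned above, the sketch below implements dimension-order (XY) routing, a standard deadlock-free scheme for 2D-mesh NoCs; it is a generic illustration, not ×pipes' actual routing policy:

    from typing import List, Tuple

    def xy_route(src: Tuple[int, int], dst: Tuple[int, int]) -> List[str]:
        """Hops ('E', 'W', 'N', 'S') from src to dst on a 2D mesh:
        the packet first travels along X to the destination column,
        then along Y to the destination row."""
        (sx, sy), (dx, dy) = src, dst
        hops: List[str] = []
        hops += ["E" if dx > sx else "W"] * abs(dx - sx)  # X first
        hops += ["N" if dy > sy else "S"] * abs(dy - sy)  # then Y
        return hops

    print(xy_route((0, 0), (3, 2)))  # ['E', 'E', 'E', 'N', 'N']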

Relevance:

40.00%

Abstract:

The Time-Of-Flight (TOF) detector of ALICE is designed to identify charged particles produced in Pb-Pb collisions at the LHC, addressing the physics of strongly-interacting matter and the Quark-Gluon Plasma (QGP). The detector is based on Multigap Resistive Plate Chamber (MRPC) technology, which guarantees the excellent performance required for a large time-of-flight array. The construction and installation of the apparatus at the experimental site have been completed, and the detector is presently fully operational. All the steps leading to the construction of the TOF detector were subject to a strict set of quality assurance procedures to ensure high and uniform performance, and the detector was finally commissioned with cosmic rays. This work aims at giving a detailed overview of the ALICE TOF detector, focusing also on the tests performed during the construction phase. The first data-taking experience and the first results obtained with cosmic rays during the commissioning phase are presented as well, and confirm the readiness of the TOF detector for LHC collisions.
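
Time-of-flight particle identification combines the measured flight time with the momentum and path length from tracking: beta = L / (c t) and m = p * sqrt(1/beta^2 - 1). A minimal sketch with illustrative numbers (the 3.7 m path is roughly the radius of the TOF barrel; the times below approximate a pion, a kaon, and a proton at 1 GeV/c):

    import math

    C = 0.299792458  # speed of light in m/ns

    def tof_mass(p_gev, path_m, t_ns):
        """Particle mass (GeV/c^2) from momentum, flight path, and time
        of flight: m = p * sqrt(1/beta^2 - 1), with beta = L / (c t)."""
        beta = path_m / (C * t_ns)
        return p_gev * math.sqrt(1.0 / beta**2 - 1.0)

    # Hypothetical track: p = 1.0 GeV/c over a 3.7 m flight path.
    for t in (12.46, 13.77, 16.92):  # illustrative arrival times in ns
        print(f"t = {t:5.2f} ns -> m = {tof_mass(1.0, 3.7, t):.3f} GeV/c^2")
    # Prints masses near 0.138, 0.494, and 0.938 GeV/c^2: pion, kaon, proton.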

Relevance:

40.00%

Abstract:

Several MCAO systems are under study to improve the angular resolution of the current and future generations of large ground-based telescopes (diameters in the 8-40 m range). The subject of this PhD thesis is embedded in this context. Two MCAO systems, at different stages of realization, are addressed in this thesis: NIRVANA, the 'double' MCAO system designed for one of the interferometric instruments of LBT, is in the integration and testing phase; MAORY, the future E-ELT MCAO module, is under preliminary study. These two systems tackle the sky coverage problem in two different ways. The layer-oriented approach of NIRVANA, coupled with multi-pyramid wavefront sensors, takes advantage of the optical co-addition of the signal coming from up to 12 NGS in an annular 2' to 6' technical FoV and up to 8 in the central 2' FoV. Summing the light coming from many natural sources makes it possible to increase the limiting magnitude of the single NGS and to improve the sky coverage considerably. One of the two wavefront sensors for the mid- and high-altitude atmosphere analysis was integrated and tested as a stand-alone unit in the laboratory at INAF-Osservatorio Astronomico di Bologna and afterwards delivered to the MPIA laboratories in Heidelberg, where it was integrated and aligned to the post-focal optical relay of one LINC-NIRVANA arm. A number of tests were performed in order to characterize and optimize the system functionalities and performance. A report on this work is presented in Chapter 2. In the MAORY case, to ensure correction uniformity and sky coverage, the LGS-based approach is the current baseline. However, since the sodium layer is approximately 10 km thick, the artificial reference source looks elongated, especially when observed from the edge of a large aperture. On a 30-40 m class telescope, for instance, the maximum elongation varies between a few arcsec and 10 arcsec, depending on the actual telescope diameter, on the sodium layer properties, and on the laser launcher position. The centroiding error in a Shack-Hartmann WFS increases proportionally to the elongation (in a photon-noise-dominated regime), strongly limiting the performance. A straightforward solution to compensate for this effect is to increase the laser power, i.e. to increase the number of detected photons per subaperture. The scope of Chapter 3 is twofold: an analysis of the performance of three different algorithms (Weighted Center of Gravity, Correlation, and Quad-cell) for the instantaneous measurement of the LGS image position in the presence of elongated spots, and the determination of the number of photons required to achieve a certain average wavefront error over the telescope aperture. An alternative optical solution to the spot elongation problem is proposed in Section 3.4. Starting from the considerations presented in Chapter 3, a first-order analysis of the LGS WFS for MAORY (number of subapertures, number of detected photons per subaperture, RON, focal-plane sampling, subaperture FoV) is the subject of Chapter 4. An LGS WFS laboratory prototype was designed to reproduce the relevant aspects of an LGS SH WFS for the E-ELT and to evaluate the performance of different centroid algorithms in the presence of elongated spots, as investigated numerically and analytically in Chapter 3. This prototype permits the simulation of realistic sodium profiles. A full testing plan for the prototype is set out in Chapter 4.
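
To make the centroid comparison of Chapter 3 concrete, the sketch below implements the plain and Weighted Center of Gravity estimators on a synthetic elongated spot; the Gaussian weighting map is one common choice, and all image parameters are invented for illustration:

    import numpy as np

    def center_of_gravity(img, weights=None):
        """(Weighted) center of gravity of a spot image, in pixel units.
        weights=None gives the plain CoG; a weight map (e.g. a Gaussian
        centered on the expected spot position) gives the Weighted CoG,
        which is less sensitive to noise in the elongated tails."""
        w = img if weights is None else img * weights
        y, x = np.indices(img.shape)
        total = w.sum()
        return (x * w).sum() / total, (y * w).sum() / total

    # Hypothetical elongated LGS spot: anisotropic Gaussian on a 16x16
    # grid, elongated along y, plus readout noise.
    y, x = np.indices((16, 16))
    spot = np.exp(-((x - 8.3) ** 2 / (2 * 1.5 ** 2)
                    + (y - 7.6) ** 2 / (2 * 4.0 ** 2)))
    spot += np.random.default_rng(2).normal(0.0, 0.02, spot.shape)

    gauss = np.exp(-((x - 8) ** 2 + (y - 8) ** 2) / (2 * 3.0 ** 2))
    print("CoG:  x=%.2f y=%.2f" % center_of_gravity(spot))
    print("WCoG: x=%.2f y=%.2f" % center_of_gravity(spot, gauss))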