944 results for 020110 Stellar Astronomy and Planetary Systems
Abstract:
AIM: To assess survival rates and complications of root-filled teeth restored with or without post-and-core systems over a mean observation period of ≥ 4 years. METHODOLOGY: A total of 325 single- and multirooted teeth in 183 subjects treated in a private practice were root filled and restored with either a cast post-and-core or with a prefabricated titanium post and composite core. Root-filled teeth without post-retained restorations served as controls. The restored teeth served as abutments for single unit metal-ceramic or composite crowns or fixed bridges. Teeth supporting cantilever bridges, overdentures or telescopic crowns were excluded. RESULTS: Seventeen teeth in 17 subjects were lost to follow-up (17/325: 5.2%). The mean observation period was 5.2 ± 1.8 (SD) years for restorations with titanium posts, 6.2 ± 2.0 (SD) years for cast post-and-cores and 4.4 ± 1.7 (SD) years for teeth without posts. Overall, 54% of build-ups included the incorporation of a titanium post and 26.5% the cementation of a cast post-and-core. The remaining 19.5% of the teeth were restored without intraradicular retention. The adjusted 5-year tooth survival rate amounted to 92.5% for teeth restored with titanium posts, to 97.1% for teeth restored with cast post-and-cores and to 94.3% for teeth without post restorations. The most frequent complications included root fracture (6.2%), recurrent caries (1.9%), post-treatment periradicular disease (1.6%) and loss of retention (1.3%). CONCLUSION: Provided that high-quality root canal treatment and restorative protocols are implemented, high survival and low complication rates of single- and multirooted root-filled teeth used as abutments for fixed restorations can be expected after a mean observation period of ≥ 4 years.
Abstract:
Energy efficiency has become an important research topic in intralogistics. The focus in this field is especially on automated storage and retrieval systems (AS/RS) utilizing stacker cranes, as these systems are widespread and consume a significant portion of the total energy demand of intralogistical systems. Numerical simulation models have been developed to calculate the energy demand rather precisely for discrete single and dual command cycles. Unfortunately, these simulation models are not suitable for fast calculations of the mean energy demand of a complete storage aisle. For this purpose analytical approaches would be more convenient, but until now analytical approaches only deliver results for certain configurations. In particular, for commonly used stacker cranes equipped with an intermediate circuit connection within their drive configuration, no analytical approach is available to calculate the mean energy demand. This article addresses this research gap and presents a calculation approach which enables planners to quickly calculate the energy demand of these systems.
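The basic idea of a mean-energy-demand figure for a whole aisle can be pictured with a crude numerical sketch: average the energy of a single command cycle over uniformly distributed rack positions. All crane and rack parameters below are hypothetical, and the per-axis energy model (peak kinetic energy lost at braking, lifting work not recuperated, a single efficiency factor) is a deliberate simplification rather than the article's calculation approach.

```python
import math

# Hypothetical crane and rack parameters (not taken from the article).
AISLE_LENGTH, RACK_HEIGHT = 60.0, 12.0   # aisle length, rack height (m)
V_X, A_X = 3.0, 0.5                      # horizontal max speed (m/s), accel (m/s^2)
V_Y, A_Y = 1.0, 0.5                      # vertical max speed (m/s), accel (m/s^2)
MASS, G, ETA = 3000.0, 9.81, 0.85        # moved mass (kg), gravity, drive efficiency

def axis_kinetic(dist, v_max, accel):
    """Peak kinetic energy for a trapezoidal (or, for short moves, triangular)
    velocity profile over the given travel distance."""
    v_peak = min(v_max, math.sqrt(accel * dist))
    return 0.5 * MASS * v_peak ** 2

def single_command_energy(x, y):
    """Energy for one trip to slot (x, y) and back; braking energy is assumed
    lost (i.e. no intermediate-circuit recuperation in this toy model)."""
    e_kin = 2 * (axis_kinetic(x, V_X, A_X) + axis_kinetic(y, V_Y, A_Y))
    e_lift = MASS * G * y                # lifting work, assumed not recovered
    return (e_kin + e_lift) / ETA

def mean_energy(nx=60, ny=12):
    """Mean demand over a grid of equally likely storage slots in the aisle."""
    slots = [((i + 0.5) * AISLE_LENGTH / nx, (j + 0.5) * RACK_HEIGHT / ny)
             for i in range(nx) for j in range(ny)]
    return sum(single_command_energy(x, y) for x, y in slots) / len(slots)
```

An analytical approach of the kind the article targets would replace the grid average with a closed-form expectation over the slot distribution.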
Abstract:
Abelian and non-Abelian gauge theories are of central importance in many areas of physics. In condensed matter physics, Abelian U(1) lattice gauge theories arise in the description of certain quantum spin liquids. In quantum information theory, Kitaev’s toric code is a Z(2) lattice gauge theory. In particle physics, Quantum Chromodynamics (QCD), the non-Abelian SU(3) gauge theory of the strong interactions between quarks and gluons, is nonperturbatively regularized on a lattice. Quantum link models extend the concept of lattice gauge theories beyond the Wilson formulation, and are well suited for both digital and analog quantum simulation using ultracold atomic gases in optical lattices. Since quantum simulators do not suffer from the notorious sign problem, they open the door to studies of the real-time evolution of strongly coupled quantum systems, which are impossible with classical simulation methods. A plethora of interesting lattice gauge theories suggests itself for quantum simulation, which should allow us to address very challenging problems, ranging from confinement and deconfinement, or chiral symmetry breaking and its restoration at finite baryon density, to color superconductivity and the real-time evolution of heavy-ion collisions, first in simpler model gauge theories and ultimately in QCD.
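As a small, concrete illustration of the Z(2) gauge structure of the toric code mentioned above, the sketch below builds the star (X-type) and plaquette (Z-type) stabilizer supports on a 3×3 torus and checks that every pair overlaps on an even number of edges, which is exactly the condition for an X-string and a Z-string of Paulis to commute. The edge-indexing convention is my own, not taken from the text.

```python
L = 3  # 3x3 periodic lattice; qubits live on the 2*L*L edges

def h_edge(x, y):   # horizontal edge leaving vertex (x, y) to the right
    return (x % L) * L + (y % L)

def v_edge(x, y):   # vertical edge leaving vertex (x, y) upward
    return L * L + (x % L) * L + (y % L)

def star(x, y):     # support of the X operator on the 4 edges meeting vertex (x, y)
    return {h_edge(x, y), h_edge(x - 1, y), v_edge(x, y), v_edge(x, y - 1)}

def plaquette(x, y):  # support of the Z operator on the 4 edges bounding plaquette (x, y)
    return {h_edge(x, y), h_edge(x, y + 1), v_edge(x, y), v_edge(x + 1, y)}

coords = [(x, y) for x in range(L) for y in range(L)]
# An X-type and a Z-type Pauli string commute iff their supports share an
# even number of qubits; on the torus every star/plaquette pair shares 0 or 2 edges.
all_commute = all(len(star(*s) & plaquette(*p)) % 2 == 0
                  for s in coords for p in coords)
```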
Abstract:
Ore-forming and geoenvironmental systems commonly involve coupled fluid flow and chemical reaction processes. Advanced numerical methods and computational modeling have become indispensable tools for simulating such processes in recent years. This enables many hitherto unsolvable geoscience problems to be addressed using numerical methods and computational modeling approaches. For example, computational modeling has been successfully used to solve ore-forming and mine-site contamination/remediation problems, in which fluid flow and geochemical processes play important roles in the controlling dynamic mechanisms. The main purpose of this paper is to present a generalized overview of: (1) the various classes and models associated with fluid flow/chemically reacting systems, in order to highlight possible opportunities and developments for the future; (2) some more general issues that need attention in the development of computational models and codes for simulating ore-forming and geoenvironmental systems; (3) the related progress achieved in geochemical modeling over the past 50 years or so; (4) the general methodology for modeling of ore-forming and geoenvironmental systems; and (5) the future development directions associated with modeling of ore-forming and geoenvironmental systems.
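A minimal instance of the coupled fluid-flow/chemical-reaction systems surveyed here is one-dimensional advective transport of a solute undergoing first-order reaction. The sketch below uses an explicit upwind finite-difference scheme with arbitrary, illustrative parameters; real ore-forming and geoenvironmental models couple many more processes (heat transport, multi-species geochemistry, deformation) than this toy.

```python
def simulate(n=100, steps=200, v=1.0, k=0.05, dx=1.0, dt=0.5):
    """1-D advection of a solute with first-order decay, c_t + v c_x = -k c.
    Explicit upwind scheme; v*dt/dx = 0.5 keeps the update stable and monotone."""
    c = [0.0] * n
    c[0] = 1.0                                # fixed inlet concentration
    for _ in range(steps):
        new = c[:]
        for i in range(1, n):
            advect = -v * (c[i] - c[i - 1]) / dx   # upwind advection term
            react = -k * c[i]                      # first-order consumption
            new[i] = c[i] + dt * (advect + react)
        new[0] = 1.0
        c = new
    return c

profile = simulate()  # concentration decays downstream toward exp(-k*x/v)
```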
Abstract:
Excavations of Neolithic (4000 – 3500 BC) and Late Bronze Age (1200 – 800 BC) wetland sites on the northern Alpine periphery have produced astonishing and detailed information about the life and human environment of prehistoric societies. It is even possible to reconstruct settlement histories and settlement dynamics, which suggest a high degree of mobility during the Neolithic. Archaeological finds—such as pottery—show local typological developments in addition to foreign influences. Furthermore, exogenous lithic forms indicate far-reaching interaction. Many hundreds of bronze artefacts are recorded from the Late Bronze Age settlements, demonstrating that some wetland sites were centres of bronzework production. Exogenous forms of bronzework are relatively rare in the wetland settlements during the Late Bronze Age. However, the products produced in the lake-settlements can be found widely across central Europe, indicating their continued involvement in interregional exchange partnerships. Potential motivations and dynamics of the relationships between sites and other regions of Europe will be detailed using case studies focussing on the settlements Seedorf Lobsigensee (BE), Concise (VD), and Sutz-Lattrigen Hauptstation innen (BE), and an initial assessment of intra-site connectivity through Network Analysis of sites within the region of Lake Neuchâtel, Lake Biel, and Lake Murten.
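The connectivity assessment mentioned last can be pictured with a toy degree-centrality computation over a small network of settlement sites. The links below are invented purely for illustration (including the placeholder "OtherSite"); they are not the study's data.

```python
# Hypothetical links between sites around Lakes Neuchâtel, Biel and Murten.
edges = [("Concise", "Sutz-Lattrigen"),
         ("Concise", "Seedorf Lobsigensee"),
         ("Sutz-Lattrigen", "Seedorf Lobsigensee"),
         ("Sutz-Lattrigen", "OtherSite")]

def degree_centrality(edges):
    """Fraction of the other sites each site is directly linked to."""
    nodes = {n for e in edges for n in e}
    deg = {n: 0 for n in nodes}
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return {n: d / (len(nodes) - 1) for n, d in deg.items()}

centrality = degree_centrality(edges)
```

In a real analysis the edges would be derived from material evidence (shared pottery styles, exchanged bronzework), and richer measures (betweenness, clustering) would typically accompany degree.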
Abstract:
Both obesity and asthma are highly prevalent, complex diseases modified by multiple factors. Genetic, developmental, lung mechanical, immunological and behavioural factors have all been suggested as playing a causal role between the two entities; however, their complex mechanistic interactions are still poorly understood and evidence of causality in children remains scant. Equally lacking is evidence of effective treatment strategies, despite the fact that imbalances at vulnerable phases in childhood can impact long-term health. This review is targeted at both clinicians frequently faced with the dilemma of how to investigate and treat the obese asthmatic child and researchers interested in the topic. Highlighting the breadth of the spectrum of factors involved, this review collates evidence regarding the investigation and treatment of asthma in obese children, particularly in comparison with current approaches in 'difficult-to-treat' childhood asthma. Finally, the authors propose hypotheses for future research from a systems-based perspective.
Abstract:
Indoor positioning has attracted considerable attention for decades due to the increasing demand for location-based services. Although numerous methods have been proposed for indoor positioning over the past years, it is still challenging to find a convincing solution that combines high positioning accuracy with ease of deployment. Radio-based indoor positioning has emerged as a dominant method due to its ubiquitousness, especially for WiFi. RSSI (Received Signal Strength Indicator) has been investigated in the area of indoor positioning for decades. However, it is prone to multipath propagation, and hence fingerprinting has become the most commonly used method for indoor positioning using RSSI. The drawback of fingerprinting is that it requires intensive labour to calibrate the radio map prior to experiments, which makes the deployment of the positioning system very time consuming. Using time information as another basis for radio-based indoor positioning is challenged by time synchronization among anchor nodes and timestamp accuracy. Besides radio-based positioning methods, intensive research has been conducted on using inertial sensors for indoor tracking, driven by the rapid development of smartphones. However, these methods are normally prone to cumulative errors and might not be available for some applications, such as passive positioning. This thesis focuses on network-based indoor positioning and tracking systems, mainly for passive positioning, which does not require the participation of targets in the positioning process. To achieve high positioning accuracy, we work on information from physical-layer processing of radio signals, such as timestamps and channel information. The contributions in this thesis can be divided into two parts: time-based positioning and channel-information-based positioning.
First, for time-based indoor positioning (especially for narrow-band signals), we address the challenges of compensating synchronization offsets among anchor nodes, designing timestamps with high resolution, and developing accurate positioning methods. Second, we work on range-based positioning methods with channel information to passively locate and track WiFi targets. To reduce deployment effort, we focus on range-based methods, which require much less calibration effort than fingerprinting. By designing novel enhanced methods for both ranging and positioning (including trilateration for stationary targets and a particle filter for mobile targets), we are able to locate WiFi targets with high accuracy relying solely on radio signals, and our proposed enhanced particle filter significantly outperforms other commonly used range-based positioning algorithms, e.g., a traditional particle filter, the extended Kalman filter and trilateration algorithms. In addition to using radio signals for passive positioning, we propose a second enhanced particle filter for active positioning that fuses inertial sensor and channel information to track indoor targets, achieving higher tracking accuracy than tracking methods relying solely on either radio signals or inertial sensors.
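The range-based baseline the thesis builds on can be sketched in a few lines: given three anchors with known positions and measured ranges, subtracting the first circle equation from the others yields a linear 2×2 system for the target position. This is generic textbook trilateration, not the thesis's enhanced method, and the anchor layout below is illustrative.

```python
import math

def trilaterate(anchors, ranges):
    """2-D position from three anchors via linearized trilateration.
    anchors: [(x0, y0), (x1, y1), (x2, y2)]; ranges: [r0, r1, r2]."""
    (x0, y0), (x1, y1), (x2, y2) = anchors
    r0, r1, r2 = ranges
    # Subtracting the first circle equation linearizes the system: A p = b.
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = r0**2 - r1**2 + x1**2 - x0**2 + y1**2 - y0**2
    b2 = r0**2 - r2**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a11 * a22 - a12 * a21          # anchors must not be collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Illustrative check: target at (3, 4) with anchors at three room corners.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
ranges = [math.dist(a, (3.0, 4.0)) for a in anchors]
pos = trilaterate(anchors, ranges)
```

With noisy ranges, the thesis's enhanced ranging and particle-filter stages would replace this closed-form solve; the linearization above is the common starting point.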
Abstract:
This paper considers ocean fisheries as complex adaptive systems and addresses the question of how human institutions might be best matched to their structure and function. Ocean ecosystems operate at multiple scales, but the management of fisheries tends to be aimed at a single species considered at a single broad scale. The paper argues that this mismatch of ecological and management scale makes it difficult to address the fine-scale aspects of ocean ecosystems, and leads to fishing rights and strategies that tend to erode the underlying structure of populations and the system itself. A successful transition to ecosystem-based management will require institutions better able to economize on the acquisition of feedback about the impact of human activities. This is likely to be achieved by multiscale institutions whose organization mirrors the spatial organization of the ecosystem and whose communications occur through a polycentric network. Better feedback will allow the exploration of fine-scale science and the employment of fine-scale fishing restraints, better adapted to the behavior of fish and habitat. The scale and scope of individual fishing rights also need to be congruent with the spatial structure of the ecosystem. Place-based rights can be expected to create a longer private planning horizon as well as stronger incentives for the private and public acquisition of system-relevant knowledge.
Abstract:
by B. Martin
Abstract:
Nowadays computing platforms consist of a very large number of components that must be supplied with different voltage levels and power requirements. Even a very small platform, like a handheld computer, may contain more than twenty different loads and voltage regulators. The power delivery designers of these systems are required to provide, in a very short time, the right power architecture that optimizes performance and meets electrical specifications plus cost and size targets. The appropriate selection of the architecture and converters directly defines the performance of a given solution. Therefore, the designer needs to be able to evaluate a significant number of options in order to know with good certainty whether the selected solutions meet the size, energy efficiency and cost targets. The difficulty of selecting the right solution arises from the wide range of power conversion products provided by different manufacturers. These products range from discrete components (to build converters) to complete power conversion modules that employ different manufacturing technologies. Consequently, in most cases it is not possible to analyze all the alternatives (combinations of power architectures and converters) that can be built. The designer has to select a limited number of converters in order to simplify the analysis. To overcome these difficulties, this thesis proposes a new design methodology for power supply systems. This methodology integrates evolutionary computation techniques to make it possible to analyze a large number of possibilities. This exhaustive analysis helps the designer to quickly define a set of feasible solutions and select the best performance trade-off for each application. The proposed approach consists of two key steps: one for the automatic generation of architectures and the other for the optimized selection of components. This thesis details the implementation of these two steps.
The usefulness of the methodology is corroborated by contrasting results on real problems and on experiments designed to test the limits of the algorithms.
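The "optimized selection of components" step can be illustrated with a toy genetic algorithm: each individual assigns one converter from a catalogue to each load, and fitness trades cost against efficiency. The catalogue, fitness weights, and GA settings below are all invented for illustration; the thesis's actual encoding and objectives are richer.

```python
import random

random.seed(1)

# Hypothetical converter catalogue: (cost, efficiency) per candidate part.
CATALOG = [(2.0, 0.85), (3.5, 0.90), (5.0, 0.94), (8.0, 0.96)]
N_LOADS = 6               # each load gets one converter from the catalogue
POP, GENS = 40, 60        # population size, number of generations

def fitness(ind):
    cost = sum(CATALOG[g][0] for g in ind)
    eff = 1.0
    for g in ind:
        eff *= CATALOG[g][1]          # crude: losses compound along the chain
    return -cost + 20.0 * eff         # weighted trade-off (weights arbitrary)

def evolve():
    pop = [[random.randrange(len(CATALOG)) for _ in range(N_LOADS)]
           for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP // 2]             # truncation selection
        children = []
        while len(survivors) + len(children) < POP:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_LOADS)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:           # point mutation
                child[random.randrange(N_LOADS)] = random.randrange(len(CATALOG))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

The evolutionary search scales to catalogues far too large for exhaustive enumeration, which is the point of the methodology.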
Abstract:
Runtime management of distributed information systems is a complex and costly activity. One of the main challenges that must be addressed is obtaining a complete and updated view of all the managed runtime resources. This article presents a monitoring architecture for heterogeneous and distributed information systems. It is composed of two elements: an information model and an agent infrastructure. The model addresses the complexity and variability of these systems and enables abstraction over non-relevant details. The infrastructure uses this information model to monitor and manage the modeled environment, performing and detecting changes at runtime. The agent infrastructure is further detailed, and its components and the relationships between them are explained. Moreover, the proposal is validated through a set of agents that instrument the JEE Glassfish application server, paying special attention to supporting distributed configuration scenarios.
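The model/agent split described above can be sketched minimally: the information model holds an abstracted view of each managed resource, and an agent compares fresh observations against it to detect changes. The class and attribute names below are illustrative inventions, not the article's information model or its Glassfish agents.

```python
class Resource:
    """A managed runtime resource, reduced to a name and a status."""
    def __init__(self, name, status):
        self.name, self.status = name, status

class MonitoringAgent:
    """Keeps an abstracted view (the information model) of the managed
    environment and reports whether an observation changes that view."""
    def __init__(self, resources):
        self.model = {r.name: r.status for r in resources}

    def observe(self, name, status):
        changed = self.model.get(name) != status
        self.model[name] = status            # update the model in place
        return changed                       # True when a runtime change occurred

agent = MonitoringAgent([Resource("app-server-1", "up")])
```

In the article's setting, many such agents would instrument concrete servers and feed a shared model covering distributed configurations.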