954 results for Eigenfunctions and fundamental solution
Abstract:
A prerequisite for preventive measures is to diagnose erosive tooth wear and to evaluate the different etiological factors in order to identify persons at risk. No diagnostic device is available for the assessment of erosive defects; thus, they can only be detected clinically. Consequently, erosion not diagnosed at an early stage may render timely preventive measures difficult. In order to assess the risk factors, patients should record their dietary intake for a defined period of time, from which a dentist can determine the erosive potential of the diet. In particular, patients with more than four dietary acid intakes have a higher risk for erosion when other risk factors (such as holding the drink in the mouth) are present. Regurgitation of gastric acids (reflux, vomiting, alcohol abuse, etc.) is a further important risk factor for the development of erosion which has to be taken into account. Based on these analyses, an individually tailored preventive program may be suggested to the patients. It may comprise dietary advice, optimization of fluoride regimes, stimulation of salivary flow rate, use of buffering medicaments and particular motivation for nondestructive toothbrushing habits with a low-abrasive toothpaste. The frequent use of fluoride gel and fluoride solution in addition to fluoride toothpaste offers the opportunity to somewhat reduce abrasion of tooth substance. It is also advisable to avoid abrasive tooth cleaning and whitening products, since they may remove the pellicle and render teeth more susceptible to erosion. Since erosion, attrition and abrasion often occur simultaneously, all causative components must be taken into consideration when planning preventive strategies.
Abstract:
BACKGROUND: Painful invasive procedures are frequently performed on preterm infants admitted to a neonatal intensive care unit (NICU). The aim of the present study was to investigate current pain management in Austrian, German and Swiss NICUs and to identify factors associated with improved pain management in preterm infants. METHODS: A questionnaire was sent to all Austrian, German and Swiss pediatric hospitals with an NICU (n = 370). Pain assessment and documentation, use of analgesics for 13 painful procedures, presence of written guidelines for pain management and the use of 12 analgesics and sedatives were examined. RESULTS: A total of 225 units responded (61%). Pain assessment, documentation and frequent analgesic therapy for painful procedures were performed more often in units using written guidelines for pain management and in those treating >50 preterm infants at <32 weeks of gestation per year. The same was true for the use of opioid analgesics and sucrose solution. Non-opioid analgesics were used more often in smaller units and in units with written guidelines. There was broad variation in the dosage of analgesics and sedatives within all groups. CONCLUSION: Pain assessment, documentation of pain and analgesic therapy are performed more frequently in NICUs with written guidelines for pain management and in larger units treating more than 50 preterm infants at <32 weeks of gestation per year.
Abstract:
Water resource depletion and sanitation are growing problems around the world. A solution to both of these problems is the composting latrine, which requires no water and has been recommended by the World Health Organization as an improved sanitation technology. However, little analysis has been done on the decomposition process occurring inside the latrine, including what temperatures are reached and what variables most affect the composting process. Better knowledge of how outside variables affect composting latrines can aid development workers in deciding whether to implement such technology, and in educating users on appropriate methods of maintenance. This report presents a full, detailed construction manual and temperature data analysis of a double vault composting latrine. During the author's two-year Peace Corps service in rural Paraguay he was involved in building twenty-one composting latrines, and took detailed temperature readings and visual observations of his personal latrine for ten months. The author also took limited temperature readings of fourteen community members' latrines over a three-month period. These data were analyzed to find correlations between compost temperatures and several variables. The two main variables found to affect the compost temperatures were the seasonal trend of the outside temperatures, and the mixing and addition of moisture to the compost. Outside seasonal temperature changes were compared to those of the compost, and a linear regression yielded an R²-value of 0.89. Mixing the compost and adding water, or a water/urine mixture, resulted in temperature increases of the compost 100% of the time, with seasonal temperatures determining the rate and duration of the increases. The temperature readings were also used to identify events in which certain temperatures were held long enough to achieve total pathogen destruction in the compost. Four separate events were recorded in which a temperature of 122°F (50°C) was held for at least 24 hours, ensuring total pathogen destruction in that area of the compost. One event of 114.8°F (46°C) held for one week was also recorded, again ensuring total pathogen destruction. The analysis showed, however, that the compost reached total pathogen destruction levels at only ten percent of the data points. Because of this, the storage time recommendations outlined by the World Health Organization should be complied with: the WHO recommends storing compost for 1.5-2 years in climates with ambient temperatures of 2-20°C (35-68°F), and for at least 1 year with ambient temperatures of 20-35°C (68-95°F). If these storage durations are achievable, the double vault composting latrine is an economical and practical solution to sanitation that also conserves water resources.
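For readers reproducing this kind of analysis, a minimal sketch (not from the report; the paired readings below are hypothetical) of how an ordinary least-squares fit of compost temperature against ambient temperature yields the reported R² statistic:

```python
import numpy as np

# Hypothetical paired readings (deg F): ambient vs. compost temperature.
ambient = np.array([55.0, 62.0, 70.0, 78.0, 85.0, 91.0, 88.0, 74.0, 66.0, 58.0])
compost = np.array([60.0, 68.0, 77.0, 88.0, 96.0, 104.0, 99.0, 83.0, 72.0, 63.0])

# Ordinary least-squares fit: compost = slope * ambient + intercept.
slope, intercept = np.polyfit(ambient, compost, 1)

# Coefficient of determination: R^2 = 1 - SS_res / SS_tot.
predicted = slope * ambient + intercept
ss_res = np.sum((compost - predicted) ** 2)
ss_tot = np.sum((compost - compost.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"slope={slope:.2f}, intercept={intercept:.1f}, R^2={r_squared:.2f}")
```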
DESIGN AND IMPLEMENTATION OF DYNAMIC PROGRAMMING BASED DISCRETE POWER LEVEL SMART HOME SCHEDULING USING FPGA
Abstract:
With the development and capabilities of the Smart Home system, people today are entering an era in which household appliances are no longer just controlled by people, but also operated by a smart system. This results in a more efficient, convenient, comfortable, and environmentally friendly living environment. A critical part of the Smart Home system is home automation, in which a Micro-Controller Unit (MCU) controls all the household appliances and schedules their operating times. This reduces electricity bills by shifting power consumption from on-peak hours to off-peak hours according to the time-varying hourly price. In this paper, we propose an algorithm for scheduling multi-user power consumption and implement it on an FPGA board, which serves as the MCU. This algorithm for scheduling tasks with discrete power levels is based on dynamic programming and finds a scheduling solution close to the optimum. We chose an FPGA as our system's controller because FPGAs offer low complexity, parallel processing capability, a large number of I/O interfaces for further development, and programmability in both software and hardware. In conclusion, the algorithm runs quickly on the FPGA board and the solution obtained is good enough for consumers.
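To illustrate the flavor of this approach, here is a minimal sketch (not the authors' FPGA implementation; prices, power levels and demand are hypothetical) of a dynamic program over the state (hour, remaining energy) that picks one discrete power level per hour to minimize cost against time-varying hourly prices:

```python
# Minimal DP sketch: schedule one appliance's energy demand over H hours.
# State: (hour, remaining energy units); choice: a discrete power level per hour.
# Cost: power drawn * hourly price. All data hypothetical.

HOURLY_PRICE = [0.30, 0.28, 0.12, 0.10, 0.11, 0.25]  # $/kWh, on- and off-peak
POWER_LEVELS = [0, 1, 2]       # kW drawn in a given hour (discrete levels)
ENERGY_DEMAND = 5              # kWh the task must consume by the last hour

def schedule(prices, levels, demand):
    H = len(prices)
    INF = float("inf")
    # cost[h][e] = min cost so far with e kWh still remaining after hour h
    cost = [[INF] * (demand + 1) for _ in range(H + 1)]
    choice = [[None] * (demand + 1) for _ in range(H)]
    cost[0][demand] = 0.0
    for h in range(H):
        for e in range(demand + 1):
            if cost[h][e] == INF:
                continue
            for p in levels:
                used = min(p, e)              # cannot consume more than remains
                c = cost[h][e] + used * prices[h]
                if c < cost[h + 1][e - used]:
                    cost[h + 1][e - used] = c
                    choice[h][e - used] = (e, p)
    if cost[H][0] == INF:
        return None                           # demand cannot be met
    # Walk back through the recorded choices to recover the hourly plan.
    plan, e = [], 0
    for h in range(H - 1, -1, -1):
        prev_e, p = choice[h][e]
        plan.append(p)
        e = prev_e
    return cost[H][0], plan[::-1]

total, plan = schedule(HOURLY_PRICE, POWER_LEVELS, ENERGY_DEMAND)
print(f"min cost ${total:.2f}, hourly power plan {plan}")
```

The sketch concentrates consumption in the cheapest hours, which is the behavior the paper attributes to its multi-user scheduler.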
Abstract:
Telecommunications have developed at an incredible speed over the last couple of decades. The decreasing size of our phones and the increasing number of ways in which we can communicate are hardly the only results of this (r)evolutionary development, which has multiple further implications. The change of paradigm for telecommunications regulation, epitomised by the processes of liberalisation and reregulation, was not sufficient to answer all regulatory questions pertinent to communications. Today, after the transition from monopoly to competition, we face perhaps an even harder regulatory puzzle, since we must figure out how to regulate a sector that is as dynamic and as unpredictable as electronic communications have proven to be, and as vital and fundamental to the economy and to society at large. The present book addresses the regulatory puzzle of contemporary electronic communications and suggests the outlines of a coherent model for their regulation. The search for such a model essentially involves deliberations on the question "Can competition law do it all?", since generic competition rules are largely seen as the appropriate regulatory tool for the communications domain. This perception was the gist of the 2002 reform of the European Community (EC) telecommunications regime, which envisages a withdrawal of sectoral regulation as communications markets become effectively competitive, and ultimately bestows the regulation of the sector upon competition law alone. The book argues that the question of whether competition law is the appropriate tool needs to be examined not in the conventional contexts of sector-specific rules versus competition rules or deregulation versus regulation, but in a broader governance context. Consequently, the reader is provided with an insight into the workings and specific characteristics of the communications sector as network-bound, converging, dynamic and endowed with a special societal role and function. A thorough evaluation of the regulatory objectives in the communications environment contributes further to the comprehensive picture of the communications industry. Upon this carefully prepared basis, the book analyses the communications regulatory toolkit. It explores the interplay between sectoral communications regulation, competition rules (in particular Article 82 of the EC Treaty) and the rules of the World Trade Organization (WTO) relevant to telecommunications services. The in-depth analysis of the multilevel construct of EC communications law is up to date and takes into account important recent developments in EC competition law in practice, in particular in the fields of refusal to supply and tying, the reform of the EC electronic communications framework, and new decisions of the WTO dispute settlement body, notably the Mexico-Telecommunications Services Panel Report. Upon these building elements, an assessment of the regulatory potential of the EC competition rules is made. The conclusions drawn reach beyond the current situation of EC electronic communications and the applicable law, and explore the possible contours of an optimal regulatory framework for modern communications. The book is of particular interest to communications and antitrust law experts, as well as policy makers, government agencies, consultancies and think-tanks active in the field.
Experts on other network industries (such as electricity or postal communications) can also profit from the substantial experience gathered in the communications sector, the most advanced network industry in terms of liberalisation and reregulation.
Abstract:
This paper reports on the FuXML project carried out at the FernUniversität Hagen. FuXML is a Learning Content Management System (LCMS) aimed at providing a practical and efficient solution for the issues associated with authoring, maintenance, production and distribution of online and offline distance learning material. The paper presents the environment for which the system was conceived and describes the technical realisation. We discuss the reasons for specific implementation decisions and also address the integration of the system within the organisational and technical infrastructure of the university.
Abstract:
Over the last couple of decades, the UK experienced a substantial increase in the incidence and geographical spread of bovine tuberculosis (TB), in particular since the epidemic of foot-and-mouth disease (FMD) in 2001. The initiation of the Randomized Badger Culling Trial (RBCT) in 1998 in south-west England provided an opportunity for an in-depth collection of questionnaire data (covering farming practices, herd management and husbandry, trading and wildlife activity) from herds having experienced a TB breakdown between 1998 and early 2006 and from randomly selected control herds, both within and outside the RBCT (the so-called TB99 and CCS2005 case-control studies). The data collated were split into four separate and comparable substudies relating to either the pre-FMD or the post-FMD period, which are brought together and discussed here for the first time. The findings suggest that the risk factors associated with TB breakdowns may have changed. Higher Mycobacterium bovis prevalence in badgers following the FMD epidemic may have contributed to the identification of the presence of badgers on a farm as a prominent TB risk factor only post-FMD. The strong emergence of contact/trading TB risk factors post-FMD suggests that the purchasing and movement of cattle, which took place to restock FMD-affected areas after 2001, may have exacerbated the TB problem. Post-FMD analyses also highlighted the potential impact of environmental factors on TB risk. Although no unique and universal solution exists to reduce the transmission of TB to and among British cattle, there is evidence to suggest that applying the broad principles of biosecurity on farms reduces the risk of infection. However, with trading remaining an important route of local and long-distance TB transmission, improvements in the detection of infected animals during pre- and post-movement testing should further reduce the geographical spread of the disease.
Abstract:
Due to their outstanding resolution and well-constrained chronologies, Greenland ice-core records provide a master record of past climatic changes throughout the Last Interglacial–Glacial cycle in the North Atlantic region. As part of the INTIMATE (INTegration of Ice-core, MArine and TErrestrial records) project, protocols have been proposed to ensure consistent and robust correlation between different records of past climate. A key element of these protocols has been the formal definition and ordinal numbering of the sequence of Greenland Stadials (GS) and Greenland Interstadials (GI) within the most recent glacial period. The GS and GI periods are the Greenland expressions of the characteristic Dansgaard–Oeschger events that represent cold and warm phases of the North Atlantic region, respectively. We present here a more detailed and extended GS/GI template for the whole of the Last Glacial period. It is based on a synchronization of the NGRIP, GRIP, and GISP2 ice-core records that allows the parallel analysis of all three records on a common time scale. The boundaries of the GS and GI periods are defined based on a combination of stable-oxygen isotope ratios of the ice (δ18O, reflecting mainly local temperature) and calcium ion concentrations (reflecting mainly atmospheric dust loading) measured in the ice. The data not only resolve the well-known sequence of Dansgaard–Oeschger events that were first defined and numbered in the ice-core records more than two decades ago, but also better resolve a number of short-lived climatic oscillations, some defined here for the first time. Using this revised scheme, we propose a consistent approach for discriminating and naming all the significant abrupt climatic events of the Last Glacial period that are represented in the Greenland ice records. The final product constitutes an extended and better resolved Greenland stratotype sequence, against which other proxy records can be compared and correlated. It also provides a more secure basis for investigating the dynamics and fundamental causes of these climatic perturbations.
Abstract:
The S0 → S1 vibronic spectrum and S1 state nonradiative relaxation of jet-cooled keto-amino 5-fluorocytosine (5FCyt) are investigated by two-color resonant two-photon ionization spectroscopy at 0.3 and 0.05 cm–1 resolution. The 0⁰₀ (origin) rotational band contour is polarized in-plane, implying that the electronic transition is 1ππ*. The electronic transition dipole moment orientation and the changes of rotational constants agree closely with the SCS-CC2 calculated values for the 1ππ* (S1) transition of 5FCyt. The spectral region from 0 to 300 cm–1 is dominated by overtone and combination bands of the out-of-plane ν1′ (boat), ν2′ (butterfly), and ν3′ (HN–C6H twist) vibrations, implying that the pyrimidinone frame is distorted out-of-plane by the 1ππ* excitation, in agreement with SCS-CC2 calculations. The number of vibronic bands rises strongly around +350 cm–1; this is attributed to the 1ππ* state barrier to planarity that corresponds to the central maximum of the double-minimum out-of-plane vibrational potentials along the ν1′, ν2′, and ν3′ coordinates, which gives rise to a high density of vibronic excitations. At +1200 cm–1, rapid nonradiative relaxation (knr ≥ 10¹² s–1) sets in, which we interpret as the height of the 1ππ* state barrier in front of the lowest S1/S0 conical intersection. This barrier in 5FCyt is 3 times higher than that in cytosine. The lifetimes of the ν′ = 0, 2ν1′, 2ν2′, 2ν1′ + 2ν2′, 4ν2′, and 2ν1′ + 4ν2′ levels are determined from Lorentzian widths fitted to the rotational band contours and are τ ≥ 75 ps for ν′ = 0, decreasing to τ ≥ 55 ps at the 2ν1′ + 4ν2′ level at +234 cm–1. These gas-phase lifetimes are twice those of S1 state cytosine and 10–100 times those of the other canonical nucleobases in the gas phase. On the other hand, the 5FCyt gas-phase lifetime is close to the 73 ps lifetime in room-temperature solvents. This lack of dependence on temperature and on the surrounding medium implies that the 5FCyt nonradiative relaxation from its S1 (1ππ*) state is essentially controlled by the same ∼1200 cm–1 barrier and conical intersection both in the gas phase and in solution.
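For reference, the lifetime follows from the fitted Lorentzian full width at half maximum via the energy-time uncertainty relation; the linewidth below is an illustrative value chosen to reproduce the ~75 ps lifetime quoted above, not a number taken from the paper:

```latex
% Lifetime from a Lorentzian FWHM \Delta\tilde{\nu} given in wavenumbers:
\[
  \tau = \frac{1}{2\pi c\,\Delta\tilde{\nu}}
\]
% Illustrative example: \Delta\tilde{\nu} \approx 0.071\ \mathrm{cm^{-1}} gives
\[
  \tau = \frac{1}{2\pi\,(2.998\times10^{10}\,\mathrm{cm\,s^{-1}})\,(0.071\,\mathrm{cm^{-1}})}
       \approx 75\ \mathrm{ps}.
\]
```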
Abstract:
The paper will focus on basic ways of communication between the actors in the house and home and their direct social environments (especially the neighbourhood) during the early modern period. Such ways of communication were established in and through work relations, sociability, social control and certain liminal rites. So far underestimated, the neighbourhood was both helpful and indispensable in keeping house and household running. A typical aspect of the practice of communication was the importance of repetitive performative events in everyday life. In order to establish and maintain social relations, the honour of the 'house' as such and fundamental roles like housefather and housemother had to be performed under the eyes of neighbours and other actors. Thus, empirical evidence reveals the house and home as a specific kind of stage. In contrast to the outdated concept of 'das ganze Haus' (the whole house) by Otto Brunner, and also to a reduced socioeconomic understanding of the household, the openness of the house proves to be a highly relevant feature of early modern society. This openness refers to accessibility, visibility and control. The paper will explain the proposed concept and analyse concrete examples from work and weddings.
Abstract:
A natural smoky quartz crystal from Shandong province, China, was characterised by laser ablation ICP-MS, electron probe microanalysis (EPMA) and solution ICP-MS to determine the concentrations of twenty-four trace and ultra-trace elements. Our main focus was on Ti quantification because of the increased use of this element in titanium-in-quartz (TitaniQ) thermobarometry. Pieces of a uniform growth zone of 9 mm thickness within the quartz crystal were analysed in four different LA-ICP-MS laboratories, three EPMA laboratories and one solution ICP-MS laboratory. The results reveal reproducible concentrations of Ti (57 ± 4 µg g–1), Al (154 ± 15 µg g–1), Li (30 ± 2 µg g–1), Fe (2.2 ± 0.3 µg g–1), Mn (0.34 ± 0.04 µg g–1), Ge (1.7 ± 0.2 µg g–1) and Ga (0.020 ± 0.002 µg g–1), and detectable, but less reproducible, concentrations of Be, B, Na, Cu, Zr, Sn and Pb. Concentrations of K, Ca, Sr, Mo, Ag, Sb, Ba and Au were below the limits of detection of all three techniques. The uncertainties on the average concentrations determined by multiple techniques and laboratories for Ti, Al, Li, Fe, Mn, Ga and Ge are low; hence, this quartz can serve as a reference material or a secondary reference material for microanalytical applications involving the quantification of trace elements in quartz.
Abstract:
According to Leo Strauss, political philosophy is nothing but the attempt to answer a fundamental question: the question of the best regime of government that makes the "good life" possible. This question admits two distinct directions of response, which are identified with what the author calls the classical solution and the modern solution to the problem of political philosophy. However, only the former can be considered an appropriate answer to the problem, even though it must confront certain difficulties and contradictions that this article analyses in detail.
Abstract:
The Asian monsoon system governs seasonality and fundamental environmental characteristics in the study area, of which two distinct peculiarities are most notable: upwelling and convective mixing in the Arabian Sea, and low surface salinity and stratification in the Bay of Bengal due to high riverine input and monsoonal precipitation. The respective oceanography sets the framework for nutrient availability and productivity. Upwelling ensures high nitrate concentrations with temporal/spatial Si limitation; freshwater-induced stratification leads to reduced nitrogen input from the subsurface but Si enrichment in surface waters. Ultimately, both environments support a high abundance of diatoms, which play a central role in the export of organic matter. It is speculated that, in addition to eddy pumping, nitrogen fixation is a source of N in stratified waters and contributes to the low δ15N signal in sinking particles formed under riverine influence. Organic carbon fluxes correlate best with opal but not with carbonate, which is explained by low foraminiferal carbonate fluxes within the river-impacted systems. This observation points to the necessity of differentiating between carbonate sources in carbon flux modeling. As evident from a compilation of previously published and new data on labile organic matter composition (amino acids and carbohydrates), organic matter fluxes are mainly driven by direct input from marine production, except at the site off Pakistan, where sedimentary input of (marine) organic matter dominates during the NE monsoon. Explaining the apparently different organic carbon export efficiencies calls for further investigation of, for example, food web structure and water column processes.
Abstract:
The selection of metrics for ecosystem restoration programs is critical for improving the quality of monitoring programs and characterizing project success. Moreover, it is often very difficult to balance the importance of multiple ecological, social, and economic metrics. The metric selection process is complex and must simultaneously take into account monitoring data, environmental models, socio-economic considerations, and stakeholder interests. We propose multicriteria decision analysis (MCDA) methods, broadly defined, for the selection of optimal sets of metrics to enhance the evaluation of ecosystem restoration alternatives. Two MCDA methods, multiattribute utility analysis (MAUT) and probabilistic multicriteria acceptability analysis (ProMAA), are applied and compared for a hypothetical case study of a river restoration involving multiple stakeholders. Overall, the MCDA results in a systematic, unbiased, and transparent solution that informs the evaluation of restoration alternatives. The two methods provide comparable results in terms of selected metrics. However, because ProMAA can consider probability distributions for the weights and utility values of metrics for each criterion, it is suggested as the best option when data uncertainty is high. Despite the added complexity of the metric selection process, MCDA improves upon the current ad hoc decision practice based on consultations with stakeholders and experts, and encourages quantitative aggregation of data and judgement, increasing the transparency of decision making in restoration projects. We believe that MCDA can enhance the overall sustainability of ecosystems by addressing both ecological and societal needs.
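To make the MAUT step concrete, here is a minimal sketch (illustrative only, not the study's implementation; alternative names, criteria weights, and scores are hypothetical): each alternative is scored per criterion, scores are normalized to 0-1 utilities across alternatives, and a weighted sum ranks the options:

```python
# Hypothetical MAUT scoring of restoration alternatives across criteria.
# Each alternative receives a weighted sum of normalized (0-1) utilities.

CRITERIA_WEIGHTS = {"ecological": 0.5, "social": 0.3, "economic": 0.2}

# Raw criterion scores per alternative (higher is better), hypothetical.
ALTERNATIVES = {
    "riparian planting":  {"ecological": 8.5, "social": 6.0, "economic": 4.0},
    "channel re-meander": {"ecological": 9.0, "social": 5.0, "economic": 2.5},
    "bank stabilization": {"ecological": 5.0, "social": 7.5, "economic": 8.0},
}

def normalize(scores):
    """Linear utility: map raw scores onto [0, 1] per criterion."""
    lo, hi = min(scores.values()), max(scores.values())
    return {k: (v - lo) / (hi - lo) if hi > lo else 1.0 for k, v in scores.items()}

def maut_rank(alternatives, weights):
    # Normalize each criterion across alternatives, then aggregate.
    utilities = {}
    for crit, w in weights.items():
        raw = {name: a[crit] for name, a in alternatives.items()}
        for name, u in normalize(raw).items():
            utilities[name] = utilities.get(name, 0.0) + w * u
    return sorted(utilities.items(), key=lambda kv: kv[1], reverse=True)

for name, score in maut_rank(ALTERNATIVES, CRITERIA_WEIGHTS):
    print(f"{name}: utility {score:.2f}")
```

ProMAA, as described above, would replace the fixed weights with probability distributions and report acceptability indices instead of a single ranking.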