967 results for Palaeomagnetism Applied to Tectonics
Abstract:
The space environment has always been one of the most challenging for communications, at both the physical and network layers. Concerning the latter, the most common challenges are the lack of continuous network connectivity, very long delays and relatively frequent losses. Because of these problems, the standard TCP/IP suite protocols are hardly applicable. Moreover, reliability is fundamental in space scenarios: it is usually not tolerable to lose important information, or to receive it with a very large delay, because of a challenging transmission channel. In terrestrial protocols such as TCP, reliability is obtained by means of an ARQ (Automatic Repeat reQuest) mechanism, which, however, performs poorly when the transmission channel has long delays. At the physical layer, Forward Error Correction (FEC) codes, based on the insertion of redundant information, are an alternative way to ensure reliability. On binary channels, where single bits are flipped by channel noise, redundancy bits can be exploited to recover the original information. On binary erasure channels, where bits are not flipped but lost, redundancy can still be used to recover the original information; FEC codes designed for this purpose are usually called Erasure Codes (ECs). It is worth noting that ECs, primarily studied for binary channels, can also be used at upper layers, i.e. applied to packets instead of bits, offering a very interesting alternative to the usual ARQ methods, especially in the presence of long delays. The Licklider Transmission Protocol (LTP) is a protocol created to add reliability to Delay-/Disruption-Tolerant Networks (DTNs) and to obtain better performance on long-delay links. The aim of this thesis is the application of ECs to LTP.
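The idea of applying an erasure code to packets instead of bits can be illustrated with the simplest possible case: a single XOR parity packet that repairs any one erasure. This is a minimal sketch of the principle, not the coding scheme of the thesis or of LTP.

```python
# Minimal packet-level erasure code sketch: one XOR parity packet
# appended to the data packets can repair any single lost packet.

def xor_packets(packets):
    """XOR a list of equal-length byte packets together."""
    result = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            result[i] ^= b
    return bytes(result)

def encode(data_packets):
    """Append one redundancy (parity) packet to the data packets."""
    return list(data_packets) + [xor_packets(data_packets)]

def decode(received):
    """received: packet list where an erased packet is None.
    Repairs at most one erasure, then strips the parity packet."""
    lost = [i for i, p in enumerate(received) if p is None]
    if len(lost) > 1:
        raise ValueError("single-parity code can repair only one erasure")
    out = list(received)
    if lost:
        # XOR of all surviving packets reconstructs the missing one
        out[lost[0]] = xor_packets([p for p in received if p is not None])
    return out[:-1]

# Usage: three data packets, one erased on the channel, recovered at decode.
data = [b"abcd", b"efgh", b"ijkl"]
coded = encode(data)
coded[1] = None            # packet lost on the channel
recovered = decode(coded)
```

With more redundancy packets (e.g. Reed-Solomon or LDPC-style codes), multiple erasures can be repaired without any retransmission round-trip, which is what makes ECs attractive on long-delay links.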
Abstract:
This thesis aims at building and discussing applications of mathematical models to energy problems, on both the thermal and electrical side. The objective is to show how mathematical programming techniques developed within Operational Research can give useful answers in the energy sector, how they can provide tools to support the decision-making processes of companies operating in energy production and distribution, and how they can be successfully used to run simulations and sensitivity analyses to better understand the state of the art and the convenience of a particular technology by comparing it with the available alternatives. The first part discusses the fundamental mathematical background, followed by a comprehensive literature review of mathematical modelling in the energy sector. The second part presents mathematical models for District Heating strategic network design and incremental network design. The objective is the selection of an optimal set of new users to be connected to an existing thermal network, maximizing revenues, minimizing infrastructure and operational costs, and taking into account the main technical requirements of the real-world application. Results on real and randomly generated benchmark networks are discussed, with particular attention to instances characterized by large network dimensions. The third part is devoted to the development of linear programming models for optimal battery operation in off-grid solar power schemes, taking battery degradation into account. The key contribution of this work is the inclusion of battery degradation costs in the optimisation models. As available data relating degradation costs to the nature of charge/discharge cycles are limited, we concentrate on investigating the sensitivity of operational patterns to the degradation cost structure. The objective is to investigate the combination of battery costs and performance at which such systems become economic.
We also investigate how the system design should change when battery degradation is taken into account.
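The core trade-off, pricing each kWh cycled through the battery against the cost of unmet demand, can be sketched in a toy dispatch problem. This is an illustrative brute-force example with hypothetical numbers, not the thesis's linear programming model.

```python
# Toy battery dispatch sketch (all numbers hypothetical): choose a
# charge/discharge schedule by brute force, pricing every kWh cycled
# through the battery at an assumed degradation cost.
from itertools import product

SOLAR  = [5.0, 3.0, 0.0, 0.0]   # kWh of solar available in each period
DEMAND = [2.0, 2.0, 3.0, 2.0]   # kWh of demand in each period
CAP = 4.0                       # battery capacity, kWh
DEG_COST   = 0.05               # assumed degradation cost per kWh cycled
SHORT_COST = 1.00               # assumed penalty per kWh of unmet demand

def schedule_cost(flows):
    """flows[t] > 0 charges the battery from solar, < 0 serves demand."""
    soc, cost = 0.0, 0.0
    for t, f in enumerate(flows):
        if f > SOLAR[t]:                 # can only charge from solar
            return float("inf")
        soc += f                         # 100% efficiency, for simplicity
        if not 0.0 <= soc <= CAP:
            return float("inf")          # infeasible state of charge
        supplied = SOLAR[t] - f          # solar minus charge, plus discharge
        cost += DEG_COST * abs(f)                            # degradation
        cost += SHORT_COST * max(0.0, DEMAND[t] - supplied)  # shortage
    return cost

steps = [-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0]
best = min(product(steps, repeat=len(SOLAR)), key=schedule_cost)
```

A linear programming formulation replaces the enumeration with continuous decision variables and scales to realistic horizons; raising `DEG_COST` makes the optimal schedule cycle the battery less, which is the kind of sensitivity the thesis investigates.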
Abstract:
This thesis proposes a novel technology in the field of swarm robotics that allows a swarm of robots to sense a virtual environment through virtual sensors. Virtual sensing is a desirable and helpful technology in swarm robotics research, because it allows researchers to perform, efficiently and quickly, experiments that would otherwise be more expensive and time consuming, or even impossible. In particular, we envision two useful applications of virtual sensing technology. On the one hand, it is possible to prototype a new sensor and foresee its effects on a robot swarm before producing it. On the other hand, this technology makes it possible to study the behaviour of robots operating in environments that are not easily reproducible inside a lab, for safety reasons or simply because they are physically infeasible. The use of virtual sensing for sensor prototyping aims to foresee the behaviour of the swarm enhanced with new or more powerful sensors, without producing the hardware. Sensor prototyping can be used to tune a new sensor or to run performance comparisons between alternative types of sensors. Such prototyping experiments can be performed through the presented tool, which allows researchers to rapidly develop and test software virtual sensors of different types and quality, emulating the behaviour of several real hardware sensors. By investigating which sensors are worth investing in, a researcher can minimize sensor production costs while achieving a given swarm performance. Through augmented reality, it is possible to test the performance of the swarm in a desired virtual environment that cannot be set up in the lab for physical, logistic or economic reasons. The virtual environment is sensed by the robots through properly designed virtual sensors. Virtual sensing thus allows a researcher to quickly carry out real-robot experiments in challenging scenarios without all the otherwise required hardware and environment.
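A virtual sensor is, at its simplest, a software object that answers a robot's sensor query from a simulated environment instead of from hardware. The class below is a hypothetical sketch of the idea (the sensor model and names are not from the thesis): a light sensor whose reading falls off with distance to a virtual light source.

```python
# Hypothetical virtual-sensor sketch: a robot reads a simulated light
# source as if it were a real onboard light sensor.
import math

class VirtualLightSensor:
    """Emulates a light sensor: intensity decays with the distance
    between the robot and a light source placed in the virtual arena."""
    def __init__(self, source_xy, strength=1.0):
        self.source_xy = source_xy
        self.strength = strength

    def read(self, robot_xy):
        """Return the simulated intensity at the robot's position."""
        dx = robot_xy[0] - self.source_xy[0]
        dy = robot_xy[1] - self.source_xy[1]
        return self.strength / (1.0 + math.hypot(dx, dy))

# A robot at (3, 4) queries the virtual sensor instead of real hardware.
sensor = VirtualLightSensor(source_xy=(0.0, 0.0))
reading = sensor.read((3.0, 4.0))
```

Swapping in a different decay law, adding noise, or degrading resolution lets a researcher compare hypothetical sensor qualities before committing to hardware, which is the prototyping use case described above.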
Abstract:
A global metabolic profiling methodology for human plasma, based on gas chromatography coupled to time-of-flight mass spectrometry (GC-TOFMS), was applied to a human exercise study focused on the effects of beverages containing glucose, galactose, or fructose taken after exercise and throughout a recovery period of 6 h 45 min. One group of 10 well-trained male cyclists performed 3 experimental sessions on separate days (randomized, single center). After performing a standardized depletion protocol on a bicycle, subjects consumed one of three different beverages: maltodextrin (MD)+glucose (2:1 ratio), MD+galactose (2:1), or MD+fructose (2:1), consumed at an average of 1.25 g of carbohydrate (CHO) ingested per minute. Blood was taken straight after exercise and every 45 min during the recovery phase. On the resulting blood plasma, insulin, free fatty acid (FFA) profile, glucose, and GC-TOFMS global metabolic profiling measurements were performed. The resulting profiling data matched the results obtained from the other clinical measurements, with the addition of being able to follow many different metabolites throughout the recovery period. Data quality was assessed: all the labelled internal standards yielded values of <15% CV over all samples (n=335), apart from labelled sucrose, which gave a value of 15.19%. Differences between recovery treatments, including the appearance of galactonic acid from the galactose-based beverage, were also highlighted.
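The data-quality check reported above, percent coefficient of variation (%CV) of each labelled internal standard across samples, is a one-line calculation. A minimal sketch with illustrative intensity values (not the study's data):

```python
# %CV of an internal standard's measured intensities across samples.
# The intensity values below are illustrative, not the study's data.
from statistics import mean, pstdev

def percent_cv(values):
    """%CV = 100 * standard deviation / mean (population SD used here)."""
    return 100.0 * pstdev(values) / mean(values)

# Hypothetical peak intensities of one labelled standard in five samples:
standard_intensities = [1000.0, 980.0, 1020.0, 1010.0, 990.0]
cv = percent_cv(standard_intensities)
```

A standard passing the study's criterion would give `cv` below 15; a value above it, as for the labelled sucrose (15.19%), flags that standard's quantification as less reliable.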
Abstract:
When estimating the effect of treatment on HIV using data from observational studies, standard methods may produce biased estimates due to the presence of time-dependent confounders. Such confounding can be present when a covariate, affected by past exposure, is a predictor of both the future exposure and the outcome. One example is the CD4 cell count, which is a marker of disease progression for HIV patients, but also a marker for treatment initiation, and is itself influenced by treatment. Fitting a marginal structural model (MSM) using inverse probability weights is one way to adjust appropriately for this type of confounding. In this paper we study a simple and intuitive approach to estimating similar treatment effects, using observational data to mimic several randomized controlled trials. Each 'trial' is constructed from the individuals starting treatment in a certain time interval. An overall effect estimate for all such trials is found using composite likelihood inference. The method offers an alternative to the use of inverse probability of treatment weights, which is unstable in certain situations. The estimated parameter is not identical to that of an MSM: it is conditioned on covariate values at the start of each mimicked trial. This allows the study of questions that are not so easily addressed by fitting an MSM. The analysis can be performed as a stratified weighted Cox analysis on the joint data set of all the constructed trials, where each trial is one stratum. The method is applied to data from the Swiss HIV Cohort Study.
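The trial-construction step can be sketched on a hypothetical data layout (patient records with an `id` and a treatment `start` time, `None` if never treated); covariate conditioning and the stratified weighted Cox fit are omitted. This is an illustration of the idea, not the paper's implementation.

```python
# Sketch of mimicked-trial construction: for each time interval, patients
# starting treatment inside it form the treated arm, and patients still
# untreated beyond it form the comparison arm. One stratum per interval.

def build_trials(patients, intervals):
    """patients: list of dicts with 'id' and 'start' (treatment start
    time, None if never treated). intervals: list of (t0, t1) pairs."""
    trials = []
    for k, (t0, t1) in enumerate(intervals):
        treated = [p["id"] for p in patients
                   if p["start"] is not None and t0 <= p["start"] < t1]
        control = [p["id"] for p in patients
                   if p["start"] is None or p["start"] >= t1]
        trials.append({"stratum": k, "treated": treated, "control": control})
    return trials

# Hypothetical cohort: two starters, one never-treated patient.
patients = [{"id": 1, "start": 0.5},
            {"id": 2, "start": 1.5},
            {"id": 3, "start": None}]
trials = build_trials(patients, [(0.0, 1.0), (1.0, 2.0)])
```

Note that patient 2 serves as a control in the first trial and as a treated subject in the second; this reuse of individuals across strata is why a composite likelihood, rather than an ordinary likelihood, is used for the overall estimate.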
Abstract:
The Gaussian-3 method developed by Pople and coworkers has been used to calculate the free energy of neutral octamer clusters of water, (H2O)8. The most energetically stable structures are in excellent agreement with those determined from experiment and those predicted by previous high-level calculations. Cubic structures are favored over noncubic structures at all temperatures studied. The D2d cubic structure has the lowest free energy and dominates the potential energy and free energy hypersurfaces from 0 K to 298 K.
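Why the lowest-free-energy isomer dominates follows from Boltzmann weighting of the relative free energies. A sketch at 298 K, where the ΔG values are illustrative placeholders rather than the G3 results:

```python
# Boltzmann populations of cluster isomers from relative free energies.
# The D2d cube is taken as the reference (0); the other two values are
# illustrative, not the computed G3 free-energy differences.
import math

R = 8.314462618e-3   # gas constant, kJ/(mol K)

def boltzmann_populations(dG_kj_mol, T=298.0):
    """Fractional populations p_i proportional to exp(-dG_i / RT)."""
    weights = [math.exp(-dG / (R * T)) for dG in dG_kj_mol]
    Z = sum(weights)
    return [w / Z for w in weights]

# [D2d cube (reference), two assumed higher-lying isomers]
pops = boltzmann_populations([0.0, 4.0, 8.0])
```

Even a few kJ/mol of free-energy separation at 298 K concentrates most of the population in the lowest structure, and as T decreases toward 0 K the lowest structure's share tends to 1.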
Abstract:
Acetylcholine (ACh) has not been tested for a role in the development of sexual exhaustion in males. However, male hamsters receiving infusions into the medial preoptic area (MPOA) of the muscarinic agonist oxotremorine (OXO) or antagonist scopolamine (SCO) show changes in the postejaculatory interval, one of the measures that changes most consistently as exhaustion approaches. In addition, central SCO treatments cause changes in the patterning of intromissions that resemble those signaling exhaustion. To extend these observations and more thoroughly test the dependence of sexual exhaustion on ACh, male hamsters received MPOA treatments of OXO, SCO or the combination of the two before mating to exhaustion. Relative to placebo, OXO infusions caused small but consistent increases in ejaculation frequency and long intromission latency, delaying the appearance of exhaustion. Scopolamine treatments did the reverse, dramatically accelerating the development of exhaustion. Consistent with and possibly responsible for these changes were effects on the quality of performance prior to exhaustion. These included differences in overall copulatory efficiency (e.g., ejaculations/intromission), which was increased by OXO and decreased by SCO. They also extended to several standard measures of copulatory behavior, including intromission frequency, ejaculation latency and the postejaculatory interval: Most of these were increased by SCO and decreased by OXO. Finally, whereas most or all effects of OXO were counteracted by SCO, most or all of the responses to SCO resisted change by added OXO. This asymmetry in the responses to combined treatment raises the possibility that the effects of these drugs on sexual exhaustion and other elements of male behavior are mediated by distinct muscarinic receptors.
Abstract:
Background The World Health Organization estimates that in sub-Saharan Africa about 4 million HIV-infected patients had started antiretroviral therapy (ART) by the end of 2008. Loss of patients to follow-up and care is an important problem for treatment programmes in this region. As mortality among patients lost to follow-up is high compared to that among patients remaining in care, ART programmes with high rates of loss to follow-up may substantially underestimate the mortality of all patients starting ART. Methods and Findings We developed a nomogram to correct mortality estimates for loss to follow-up, based on the fact that the mortality of all patients starting ART in a treatment programme is a weighted average of mortality among patients lost to follow-up and patients remaining in care. The nomogram gives a correction factor based on the percentage of patients lost to follow-up at a given point in time and the estimated ratio of mortality between patients lost and not lost to follow-up. The mortality observed among patients retained in care is then multiplied by the correction factor to obtain an estimate of programme-level mortality that takes all deaths into account. A web calculator directly computes the corrected, programme-level mortality with 95% confidence intervals (CIs). We applied the method to 11 ART programmes in sub-Saharan Africa. Mortality at 1 year among patients retained in care ranged from 1.4% to 12.0%; loss to follow-up ranged from 2.8% to 28.7%; and the correction factor from 1.2 to 8.0. The absolute difference between uncorrected and corrected mortality at 1 year ranged from 1.6% to 9.8%, and was above 5% in four programmes. The largest difference in mortality was in a programme with 28.7% of patients lost to follow-up at 1 year. Conclusions The amount of bias in mortality estimates can be large in ART programmes with substantial loss to follow-up. Programmes should routinely report mortality among patients retained in care and the proportion of patients lost.
A simple nomogram can then be used to estimate mortality among all patients who started ART, for a range of plausible mortality rates among patients lost to follow-up.
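The weighted-average logic behind the nomogram reduces to a direct calculation: with proportion lost p and mortality ratio r (lost vs. retained), overall mortality is m_retained × [(1 − p) + p·r], so the correction factor is c = (1 − p) + p·r. A sketch (the 28.7% figure is from the abstract; the ratio of 4 is an illustrative assumption):

```python
# Loss-to-follow-up mortality correction, as described in the abstract:
# overall mortality is a weighted average of mortality among patients
# lost and patients retained in care.

def correction_factor(p_lost, mortality_ratio):
    """c = (1 - p) + p * r, with p the proportion lost to follow-up and
    r the mortality ratio between lost and retained patients."""
    return (1.0 - p_lost) + p_lost * mortality_ratio

def corrected_mortality(m_retained, p_lost, mortality_ratio):
    """Programme-level mortality accounting for deaths among the lost."""
    return m_retained * correction_factor(p_lost, mortality_ratio)

# Illustrative case: 5% observed mortality among retained patients,
# 28.7% lost to follow-up, assumed lost-vs-retained mortality ratio of 4.
c = correction_factor(0.287, 4.0)
m = corrected_mortality(0.05, 0.287, 4.0)
```

When no one is lost (p = 0) the factor is 1 and no correction is applied; the factor grows with both the proportion lost and the assumed mortality ratio, matching the 1.2 to 8.0 range reported across the 11 programmes.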
Abstract:
The project aimed to use results on the contamination of city vegetation with heavy metals and sulphur compounds as the basis for analysing the integral response of trees and shrubs to contamination, through a complex method of phytoindication. The results were used to draw up recommendations on pollution reduction in the city and to develop the method of phytoindication as a means of monitoring environmental pollution in St. Petersburg and other large cities. Field investigations were carried out in August 1996, and 66 descriptions of green areas were made in order to estimate the functional state of plants in the Vasileostrovsky district. Investigations of the spectral reflecting properties of plants showed considerable variation in the albedo values of leaves under the influence of various internal and external factors. The results indicated that lime trees most closely reflect the condition of the environment. Practically all the green areas studied were in poor condition, the only exceptions being areas of ash trees, which are more resistant to environmental pollution, and one lime-tree alley in a comparatively unpolluted street. The study identified those types of trees which are more or less resistant to complex environmental pollution, and Ms. Terekhina recommends that the species in the present green areas be changed to include a higher number of the more resistant species. The turbidimetric analysis of tree bark for sulphates gave an indication of the level and spatial distribution of each pollutant, and the results also confirmed other findings that electric conductivity is a significant feature in determining the extent of sulphate pollution. In testing for various metals, the lime tree showed the highest contents of all elements except magnesium, copper, zinc, cadmium and strontium, again confirming the species' vulnerability to pollution.
Mean concentration levels in the city and environs showed that city plants concentrate 3 times as many different elements, and 10 times more chromium, copper and lead, than do those in the suburbs. The second stage of the study was based on the concept of phytoindication, which presupposes that changes in the relation of chemical elements in regional biological circulation under the influence of technogenesis provide a criterion for predicting shifts in people's health. There are certain basic factors in this concept. The first is that all living beings are related ecologically as well as by their evolutionary origin, and that the lower an organism is on the evolutionary scale, the less adaptational reserve it has. The second is that smaller concentrations of chemical elements are needed for a toxicological influence on plants than on people, so the former's reactions to geochemical factors are easier to characterise. Visual indicational features of urban plants are well defined and can form the basis of a complex "environment - public health" analysis. Specific plant reactions reflecting atmospheric pollution and other components of urbogeosystems make it possible to determine indication criteria for predicting possible disturbances in the general state of health of the population. Thirdly, the results of phytoindication investigations must be taken together with information about public health in the area. It only proved possible to analyse general indexes of public health based on statistical data from the late 1980s and early 1990s, as the data of later years were greatly influenced by social factors. These data show that the rates of illness in St. Petersburg (especially for children) are higher than in Russia as a whole for most classes of diseases, indicating that the population there is more sensitive to the ecological state of the urban environment.
The Vasileostrovsky district had the second highest sick rate for adults, while the rate of infant mortality in the first year of life was highest there. Ms. Terekhina recommends further studies to assess more precisely the effectiveness of the methods she tested, but has drawn up a proposed map of environmental hazard for the population, taking into account prevailing wind directions.
Abstract:
Atmospheric turbulence near the ground severely limits the quality of imagery acquired over long horizontal paths. In defense, surveillance, and border security applications, there is interest in deploying man-portable, embedded systems incorporating image reconstruction methods to compensate for turbulence effects. While many image reconstruction methods have been proposed, their suitability for use in man-portable embedded systems is uncertain. To be effective, these systems must operate over significant variations in turbulence conditions while subject to other variations due to operation by novice users. Systems that meet these requirements and are otherwise designed to be immune to the factors that cause variation in performance are considered robust. In addition to robustness in design, the portable nature of these systems implies a preference for systems with a minimum level of computational complexity. Speckle imaging methods have recently been proposed as well suited for use in man-portable horizontal imagers. In this work, the robustness of speckle imaging methods is established by identifying a subset of design parameters that provide immunity to the expected variations in operating conditions while minimizing the computation time necessary for image recovery. Design parameters are selected by parametric evaluation of system performance as factors external to the system are varied. The precise control necessary for such an evaluation is made possible using image sets of turbulence-degraded imagery developed using a novel technique for simulating anisoplanatic image formation over long horizontal paths. System performance is statistically evaluated over multiple reconstructions using the Mean Squared Error (MSE) to assess reconstruction quality. In addition to the more general design parameters, the relative performance of the bispectrum and Knox-Thompson phase recovery methods is also compared.
As an outcome of this work it can be concluded that speckle imaging techniques are robust to the variation in turbulence conditions and user-controlled parameters expected when operating during the day over long horizontal paths. Speckle imaging systems that incorporate 15 or more image frames and 4 estimates of the object phase per reconstruction provide up to a 45% reduction in MSE and a 68% reduction in its deviation. In addition, the Knox-Thompson phase recovery method is shown to produce images in half the time required by the bispectrum. The quality of images reconstructed using the Knox-Thompson and bispectrum methods is also found to be nearly identical. Finally, it is shown that certain blind image quality metrics can be used in place of the MSE to evaluate quality in field scenarios. Using blind metrics rather than depending on user estimates allows for reconstruction quality that differs from the minimum MSE by as little as 1%, significantly reducing the deviation in performance due to user action.
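The evaluation metric used throughout this work, MSE between a reconstruction and the reference, is straightforward to state in code. A toy sketch on flat pixel arrays rather than real imagery:

```python
# Mean Squared Error between a reference image and a reconstruction,
# on flattened pixel arrays (toy values, not turbulence imagery).

def mse(reference, reconstruction):
    """Average squared pixel difference; lower means a better match."""
    pairs = list(zip(reference, reconstruction))
    return sum((a - b) ** 2 for a, b in pairs) / len(pairs)

def percent_reduction(mse_before, mse_after):
    """e.g. the 45% MSE reduction reported above corresponds to
    mse_after = 0.55 * mse_before."""
    return 100.0 * (1.0 - mse_after / mse_before)
```

Note that MSE requires the reference image, which is unavailable in the field; that is exactly why the blind (no-reference) quality metrics mentioned above matter for deployed systems.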
Abstract:
As water quality interventions are scaled up to meet the Millennium Development Goal of halving the proportion of the population without access to safe drinking water by 2015, there has been much discussion of the merits of household- and source-level interventions. This study furthers the discussion by examining specific interventions through the lens of embodied human and material energy. Embodied energy quantifies the total energy required to produce and use an intervention, including all upstream energy transactions. This model uses material quantities and prices to calculate embodied energy using national economic input/output-based models from China, the United States and Mali. Embodied energy is a measure of the aggregate environmental impacts of an intervention. Human energy quantifies the caloric expenditure associated with the installation and operation of an intervention and is calculated using physical activity ratios (PARs) and basal metabolic rates (BMRs). Human energy is a measure of the aggregate social impacts of an intervention. A total of four household treatment interventions – biosand filtration, chlorination, ceramic filtration and boiling – and four source-level interventions – an improved well, a rope pump, a hand pump and a solar pump – are evaluated in the context of Mali, West Africa. Source-level interventions slightly out-perform household-level interventions in terms of having less total embodied energy. Human energy, typically assumed to be a negligible portion of total embodied energy, is shown to be significant in all eight interventions, contributing over half of the total embodied energy in four of them. Traditional gender roles in Mali dictate the types of work performed by men and women. When human energy is disaggregated by gender, it is seen that women perform over 99% of the work associated with seven of the eight interventions.
This has profound implications for gender equality in the context of water quality interventions, and may justify investment in interventions that reduce human energy burdens.
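The human-energy accounting described above, caloric expenditure from physical activity ratios and basal metabolic rates, amounts to PAR × BMR prorated over the task duration. A sketch with illustrative numbers (the PAR, BMR and duration below are assumptions, not the study's values):

```python
# Human energy for a water-related task: physical activity ratio (PAR)
# times basal metabolic rate (BMR), prorated over the task duration.
# All numeric inputs below are illustrative assumptions.

def human_energy_kcal(par, bmr_kcal_per_day, hours):
    """Caloric expenditure for a task lasting `hours` performed at
    intensity `par` relative to the basal rate."""
    return par * (bmr_kcal_per_day / 24.0) * hours

# e.g. drawing water with a rope pump: assumed PAR of 4.0, assumed BMR
# of 1400 kcal/day, assumed 1.5 hours of pumping per day.
daily_kcal = human_energy_kcal(4.0, 1400.0, 1.5)
```

Summed over a household and over an intervention's lifetime, and attributed to whoever performs the task, this is the quantity whose gender disaggregation yields the over-99% share of work borne by women reported above.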