896 results for Pattern-based interaction models
Abstract:
This thesis studies molecular dynamics simulations on two levels of resolution: the detailed level of atomistic simulations, where the motion of explicit atoms in a many-particle system is considered, and the coarse-grained level, where the motion of superatoms composed of up to 10 atoms is modeled. While atomistic models are capable of describing material-specific effects on small scales, the time and length scales they can cover are limited by their computational cost. Polymer systems are typically characterized by effects on a broad range of length and time scales, so it is often impossible to atomistically simulate the processes that determine macroscopic properties in polymer systems. Coarse-grained (CG) simulations extend the range of accessible time and length scales by three to four orders of magnitude. However, no standardized coarse-graining procedure has been established yet. Following the ideas of structure-based coarse-graining, a coarse-grained model for polystyrene is presented. Structure-based methods parameterize CG models to reproduce static properties of atomistic melts, such as radial distribution functions between superatoms or other probability distributions for coarse-grained degrees of freedom. Two enhancements of the coarse-graining methodology are suggested. First, correlations between local degrees of freedom are implicitly taken into account by additional potentials acting between neighboring superatoms in the polymer chain. This improves the reproduction of local chain conformations and allows the study of different tacticities of polystyrene. It also gives better control of the chain stiffness, which agrees perfectly with the atomistic model, and leads to a reproduction of experimental results for overall chain dimensions, such as the characteristic ratio, for all tacticities. The second new aspect is the computationally cheap development of nonbonded CG potentials based on the sampling of pairs of oligomers in vacuum. Static properties of polymer melts are thus obtained as predictions of the CG model, in contrast to other structure-based CG models, which are iteratively refined to reproduce reference melt structures. Finally, the dynamics of simulations at the two levels of resolution are compared. The time scales of dynamical processes in atomistic and coarse-grained simulations can be connected by a time scaling factor, which depends on several system properties, such as molecular weight, density, temperature, and the presence of other components in mixtures. In this thesis, the influence of molecular weight in systems of oligomers and the situation in two-component mixtures are studied. For a system of small additives in a melt of long polymer chains, the temperature dependence of the additive diffusion is predicted and compared to experiments.
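The time-mapping step described above lends itself to a short numerical illustration. The sketch below (hypothetical names; not code from the thesis) estimates a scaling factor s such that the atomistic mean-squared displacement evaluated at s·t overlays the CG curve at t, by scanning candidate factors on a logarithmic scale; it assumes both curves are sampled on the same strictly positive time grid.

```python
import numpy as np

def time_scaling_factor(t, msd_atom, msd_cg, candidates=np.logspace(0, 3, 301)):
    """Estimate s so that MSD_atom(s * t) matches MSD_cg(t).
    t must be strictly positive (exclude the t = 0 sample)."""
    best_s, best_err = 1.0, np.inf
    for s in candidates:
        ts = s * t                           # CG times mapped to atomistic time
        mask = (ts >= t[0]) & (ts <= t[-1])  # compare only on the overlap
        if mask.sum() < 10:                  # demand a reasonable overlap window
            continue
        ref = np.interp(ts[mask], t, msd_atom)
        err = np.mean((np.log(msd_cg[mask]) - np.log(ref)) ** 2)
        if err < best_err:
            best_s, best_err = s, err
    return best_s
```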
Abstract:
The formation of a market price for an asset can be understood as a superposition of the individual actions of market participants, which cumulatively generate supply and demand. This is comparable to the emergence of macroscopic properties in statistical physics, which are produced by microscopic interactions between the system components involved. The distribution of price changes in financial markets differs markedly from a Gaussian distribution. This leads to empirical peculiarities of the price process, which include, besides the scaling behavior, non-trivial correlation functions and temporally clustered volatility. The present work focuses on the analysis of financial market time series and the correlations they contain. A new method for quantifying pattern-based complex correlations of a time series is developed. Using this methodology, significant evidence is found that typical behavioral patterns of financial market participants manifest themselves on short time scales; that is, the reaction to a given price history is not purely random, but rather similar price histories provoke similar reactions. Starting from the study of complex correlations in financial market time series, the question is addressed of which properties change at the transition from a positive trend to a negative trend. An empirical quantification by means of rescaling yields the result that, independently of the time scale considered, new price extrema are accompanied by an increase in transaction volume and a reduction in the time intervals between transactions. These dependencies exhibit characteristics that are also found in other complex systems in nature, and in physical systems in particular. Over nine orders of magnitude in time, these properties are also independent of the analyzed market: trends that persist only for seconds show the same characteristics as trends on time scales of months. This opens up the possibility of learning more about financial market bubbles and their collapses, since trends on small time scales occur much more frequently. In addition, a Monte Carlo based simulation of the financial market is analyzed and extended in order to reproduce the empirical properties and to gain insight into their origins, which are to be sought partly in the financial market microstructure and partly in the risk aversion of trading participants. For the computationally intensive methods, a substantial reduction of computing time is achieved through parallelization on a graphics card architecture. To demonstrate the broad range of applications of graphics cards, a standard model of statistical physics, the Ising model, is also ported to the graphics card with significant runtime gains. Partial results of this work have been published in [PGPS07, PPS08, Pre11, PVPS09b, PVPS09a, PS09, PS10a, SBF+10, BVP10, Pre10, PS10b, PSS10, SBF+11, PB10].
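The claim that similar price histories provoke similar reactions can be made concrete with a toy statistic (an illustration in the spirit of the method, not the estimator developed in the thesis). The sketch below groups all length-k sign patterns of a return series and checks whether the following move deviates from a fair coin:

```python
import numpy as np
from collections import defaultdict

def pattern_conditioned_bias(returns, k=4, min_count=30):
    """Mean sign of the move following each length-k sign pattern.
    Values far from 0 for well-populated patterns hint at pattern-based
    correlations; 0 is the random-walk expectation."""
    signs = np.sign(returns)
    followers = defaultdict(list)
    for i in range(len(signs) - k):
        followers[tuple(signs[i:i + k])].append(signs[i + k])
    return {p: (np.mean(f), len(f))
            for p, f in followers.items() if len(f) >= min_count}
```

A significance test would compare these conditional means against the same statistic computed on shuffled returns.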
Abstract:
This thesis investigates a new approach to document analysis based on the idea of structural patterns in XML vocabularies. My work is founded on the belief that authors naturally converge to a reasonable use of markup languages and that extreme, yet valid, instances are rare and limited. Actual documents, therefore, may be used to derive classes of elements (patterns) that persist across documents and distill the conceptualization of the documents and their components, and may provide the grounds for automatic tools and services that rely on no background information (such as schemas) at all. The central part of my work consists in introducing, from the ground up, a formal theory of eight structural patterns (with three sub-patterns) that are able to express the logical organization of any XML document, and in verifying their identifiability in a number of different vocabularies. This model is characterized by and validated against three main dimensions: terseness (the ability to represent the structure of a document with a small number of objects and composition rules), coverage (the ability to capture any possible situation in any document) and expressiveness (the ability to make explicit the semantics of structures, relations and dependencies). An algorithm for the automatic recognition of structural patterns is then presented, together with an evaluation of the results of a test performed on a set of more than 1100 documents from eight very different vocabularies. This language-independent analysis confirms the ability of patterns to capture and summarize the guidelines used by the authors in their everyday practice. Finally, I present some systems that work directly on the pattern-based representation of documents. The ability of these tools to cover very different situations and contexts confirms the effectiveness of the model.
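As a rough illustration of recognizing structure classes over element instances (the thesis's taxonomy of eight patterns is considerably richer), the following sketch assigns each element occurrence in an XML document to one of four coarse content classes; all names are mine, not the thesis's:

```python
from xml.etree import ElementTree as ET
from collections import Counter

def content_class(elem):
    # Does the element carry non-whitespace text, child elements, or both?
    has_text = bool((elem.text or "").strip()) or any(
        (child.tail or "").strip() for child in elem)
    has_children = len(elem) > 0
    if has_text and has_children:
        return "mixed"        # text interleaved with elements
    if has_children:
        return "structured"   # element-only content
    if has_text:
        return "text-only"
    return "empty"

def classify(doc_path):
    """Count how often each tag appears in each content class."""
    counts = Counter()
    for elem in ET.parse(doc_path).getroot().iter():
        counts[(elem.tag, content_class(elem))] += 1
    return counts
```

Aggregating such counts across a corpus reveals whether a tag is used consistently, which is the kind of regularity the pattern model formalizes.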
Abstract:
Environmental variables and fishing exploitation are possible factors determining the structure of the demersal community. The study area is the Gulf of Antalya, with one area open and one closed to all fishing activity; the study period covered three seasons (spring, summer, autumn). The aim is to outline a general picture of the spatial and temporal distribution of demersal fishery resources in this area. In this thesis, PCA was used to determine the environmental variables (oxygen, salinity, temperature, pH, suspended matter) that contribute most to the differences between stations, while multivariate analysis techniques investigated possible spatial and temporal variation of the abiotic parameters. Cluster analysis performed on the abundance data delineated four main groupings, two at depths shallower than 100 m and two at greater depths (40% similarity). These results are confirmed by MDS analysis. SIMPER analysis highlighted the species that contribute most to the differences between depth strata. Biodiversity indices were calculated to investigate the diversity and the temporal and spatial variability of the demersal community. Two procedures, BIO-ENV and DistLM (distance-based linear models), were carried out to identify the abiotic variables that could be responsible for the different groupings in the structure of the demersal assemblage. The commercial species Mullus barbatus, Upeneus moluccensis and Upeneus pori were taken as subjects in the search for possible effects of fishing at the population level. For the abundance and biomass data of these species, MANOVA (multivariate analysis of variance) was performed in order to detect possible variation due to the factors depth, season and transect. The sex ratio was evaluated for each species. The Bhattacharya method made it possible to determine the age classes and their abundance. Finally, the length-weight relationship was derived separately for male and female individuals in order to determine the type of growth for each sex.
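The length-weight relationship mentioned at the end is conventionally fitted as W = a·L^b by linear regression in log-log space, with b near 3 indicating isometric growth. A minimal sketch (hypothetical variable names):

```python
import numpy as np

def length_weight_fit(length_cm, weight_g):
    """Fit W = a * L**b; returns (a, b).  b close to 3 suggests isometric
    growth; b significantly different from 3, allometric growth."""
    b, log_a = np.polyfit(np.log(length_cm), np.log(weight_g), 1)
    return np.exp(log_a), b

# Fitted separately by sex, e.g.:
# a_f, b_f = length_weight_fit(L_females, W_females)
# a_m, b_m = length_weight_fit(L_males, W_males)
```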
Abstract:
Modeling of tumor growth has been performed according to various approaches addressing different biocomplexity levels and spatiotemporal scales. Mathematical treatments range from partial differential equation based diffusion models to rule-based cellular level simulators, aiming at both improving our quantitative understanding of the underlying biological processes and, in the mid- and long term, constructing reliable multi-scale predictive platforms to support patient-individualized treatment planning and optimization. The aim of this paper is to establish a multi-scale and multi-physics approach to tumor modeling taking into account both the cellular and the macroscopic mechanical level. Therefore, an already developed biomodel of clinical tumor growth and response to treatment is self-consistently coupled with a biomechanical model. Results are presented for the free growth case of the imageable component of an initially point-like glioblastoma multiforme tumor. The composite model leads to significant tumor shape corrections that are achieved through the utilization of environmental pressure information and the application of biomechanical principles. Using the ratio of smallest to largest moment of inertia of the tumor material to quantify the effect of our coupled approach, we have found a tumor shape correction of 20% by coupling biomechanics to the cellular simulator as compared to a cellular simulation without preferred growth directions. We conclude that the integration of the two models provides additional morphological insight into realistic tumor growth behavior. Therefore, it might be used for the development of an advanced oncosimulator focusing on tumor types for which morphology plays an important role in surgical and/or radio-therapeutic treatment planning.
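The shape measure used above, the ratio of smallest to largest principal moment of inertia, can be computed directly from the voxel positions of a simulated tumor. A minimal sketch under the assumption of uniform density (names are illustrative, not from the paper):

```python
import numpy as np

def inertia_shape_ratio(coords):
    """coords: (N, 3) array of tumor voxel centers.  Returns the ratio of
    smallest to largest principal moment of inertia: 1.0 for a sphere,
    approaching 0 for strongly elongated shapes."""
    r = coords - coords.mean(axis=0)          # center-of-mass frame
    x, y, z = r[:, 0], r[:, 1], r[:, 2]
    I = np.array([
        [np.sum(y**2 + z**2), -np.sum(x * y),       -np.sum(x * z)],
        [-np.sum(x * y),       np.sum(x**2 + z**2), -np.sum(y * z)],
        [-np.sum(x * z),      -np.sum(y * z),        np.sum(x**2 + y**2)],
    ])
    moments = np.linalg.eigvalsh(I)           # eigenvalues in ascending order
    return moments[0] / moments[-1]
```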
Abstract:
Fossils of chironomid larvae (non-biting midges) preserved in lake sediments are well-established palaeotemperature indicators which, with the aid of numerical chironomid-based inference models (transfer functions), can provide quantitative estimates of past temperature change. This approach to temperature reconstruction relies on the strong relationship between air and lake surface water temperature and the distribution of individual chironomid taxa (species, species groups, genera) that has been observed in different climate regions (arctic, subarctic, temperate and tropical) in both the Northern and Southern hemispheres. A major complicating factor for the use of chironomids in palaeoclimate reconstruction, and one that increases the uncertainty associated with chironomid-based temperature estimates, is that the exact nature of the mechanism responsible for the strong relationship between temperature and chironomid assemblages in lakes remains uncertain. While a number of authors have provided state-of-the-art overviews of fossil chironomid palaeoecology and the use of chironomids for temperature reconstruction, few have focused on examining the ecological basis for this approach. Here, we review the nature of the relationship between chironomids and temperature based on the available ecological evidence. After discussing many of the surveys describing the distribution of chironomid taxa in lake surface sediments in relation to temperature, we also examine evidence from laboratory and field studies exploring the effects of temperature on chironomid physiology, life cycles and behaviour. We show that, even though a direct influence of water temperature on chironomid development, growth and survival is well described, chironomid palaeoclimatology presently faces a paradoxical situation: the relationship between chironomid distribution and temperature seems strongest in relatively deep, thermally stratified lakes in temperate and subarctic regions, in which the benthic chironomid fauna lives largely decoupled from the direct influence of air and surface water temperature. This finding suggests that indirect effects of temperature on the physical and chemical characteristics of lakes play an important role in determining the distribution of lake-living chironomid larvae. However, we also demonstrate that no single indirect mechanism has been identified that can explain the strong relationship between chironomid distribution and temperature in all regions and datasets presently available. This observation contrasts with the previously published hypothesis that climatic effects on lake nutrient status and productivity may be largely responsible for the apparent correlation between chironomid assemblage distribution and temperature. We conclude our review by summarizing the implications of our findings for chironomid-based palaeoclimatology and by pointing towards further avenues of research necessary to improve our mechanistic understanding of the chironomid-temperature relationship.
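For context, one of the simplest transfer-function methods in this field is weighted averaging (WA): each taxon's temperature optimum is estimated from a modern training set, and a fossil assemblage is translated into a temperature as the abundance-weighted mean of those optima. A stripped-down sketch, omitting the deshrinking and cross-validation steps that any real transfer function requires:

```python
import numpy as np

def wa_optima(counts, temperature):
    """counts: (n_lakes, n_taxa) taxon abundances in surface sediments;
    temperature: (n_lakes,) observed temperatures.
    Returns each taxon's abundance-weighted temperature optimum."""
    return counts.T @ temperature / counts.sum(axis=0)

def wa_reconstruct(fossil_counts, optima):
    """Inferred temperature for one fossil sample: the abundance-weighted
    mean of the optima of the taxa present in the sample."""
    return fossil_counts @ optima / fossil_counts.sum()
```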
Abstract:
In 2009, the International Commission on Radiological Protection issued a statement on radon indicating that the dose conversion factor for radon progeny would likely double, and that the calculation of risk from radon should move to a dosimetric approach rather than the longstanding epidemiological approach. Through the World Nuclear Association, whose members represent over 90% of the world's uranium production, industry has been examining this issue with the goal of offering expertise and knowledge to assist with the practical implementation of these evolutionary changes to evaluating the risk from radon progeny. Industry supports the continuing use of the most current epidemiological data as a basis for risk calculation, but believes that further examination of these results is needed to better understand the level of conservatism in the potential epidemiology-based risk models. With regard to adoption of the dosimetric approach, industry believes that further work is needed before this is a practical option. In particular, this work should include a clear demonstration of the validation of the dosimetric model, including how smoking is handled, the establishment of a practical measurement protocol, and the collection of relevant data for modern workplaces. Industry is actively working to address the latter two items.
Abstract:
11beta-Hydroxysteroid dehydrogenase (11beta-HSD) enzymes catalyze the conversion of biologically inactive 11-ketosteroids into their active 11beta-hydroxy derivatives and vice versa. Inhibition of 11beta-HSD1 has considerable therapeutic potential for glucocorticoid-associated diseases including obesity, diabetes, impaired wound healing, and muscle atrophy. Because inhibition of the related enzymes 11beta-HSD2 and 17beta-HSDs causes sodium retention and hypertension or interferes with sex steroid hormone metabolism, respectively, highly selective 11beta-HSD1 inhibitors are required for successful therapy. Here, we employed the software package Catalyst to develop ligand-based multifeature pharmacophore models for 11beta-HSD1 inhibitors. Virtual screening experiments and subsequent in vitro evaluation of promising hits revealed several selective inhibitors. Efficient inhibition of recombinant human 11beta-HSD1 in intact transfected cells, as well as of the endogenous enzyme in mouse 3T3-L1 adipocytes and C2C12 myotubes, was demonstrated for compound 27, which was able to block subsequent cortisol-dependent activation of glucocorticoid receptors with only minor direct effects on the receptor itself. Our results suggest that inhibitor-based pharmacophore models for 11beta-HSD1, in combination with suitable cell-based activity assays, including assays for related enzymes, can be used to identify selective and potent inhibitors.
Abstract:
Large power transformers, an aging and vulnerable part of our energy infrastructure, sit at choke points in the grid and are key to reliability and security. Damage or destruction due to vandalism, misoperation, or other unexpected events is of great concern, given replacement costs upward of $2M and lead times of 12 months. Transient overvoltages can cause great damage, and there is much interest in improving computer simulation models to correctly predict and avoid the consequences. EMTP (the Electromagnetic Transients Program) has been developed for computer simulation of power system transients, and component models for most equipment have been developed and benchmarked. Power transformers would appear to be simple. However, due to their nonlinear and frequency-dependent behavior, they can be among the most complex system components to model. It is imperative that the applied models be appropriate for the range of frequencies and excitation levels that the system experiences. Transformer modeling is therefore not yet a mature field, and improved models must be made available. In this work, improved topologically correct duality-based models are developed for three-phase autotransformers having five-legged, three-legged, and shell-form cores. The main problem in the implementation of detailed models is the lack of complete and reliable data, as no international standard suggests how to measure and calculate parameters. Therefore, parameter estimation methods are developed here to determine the parameters of a given model in cases where the available information is incomplete. The transformer nameplate data are required, and relative physical dimensions of the core are estimated. The models include a separate representation of each segment of the core, including core hysteresis, the λ-i saturation characteristic, capacitive effects, and the frequency dependency of winding resistance and core loss. Steady-state excitation, de-energization, and re-energization transients are simulated and compared with an earlier-developed BCTRAN-based model. Black-start energization cases are also simulated as a means of model evaluation and compared with actual event records. The simulated results using the model developed here are reasonable and more accurate than those of the BCTRAN-based model. Simulation accuracy depends on the accuracy of the equipment model and its parameters. This work is significant in that it advances existing parameter estimation methods for cases where the available data and measurements are incomplete. The accuracy of EMTP simulation for power systems including three-phase autotransformers is thus enhanced. Theoretical results obtained from this work provide a sound foundation for the development of transformer parameter estimation methods using engineering optimization. In addition, it should be possible to refine which information and measurement data are necessary for complete duality-based transformer models. To further refine and develop the models and parameter estimation methods presented here, iterative full-scale laboratory tests using high-voltage, high-power three-phase transformers would be helpful.
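To give a flavor of one ingredient, a λ-i saturation characteristic can be represented in its simplest form by two slopes: the unsaturated magnetizing inductance below the knee and the air-core inductance above it. The sketch below uses illustrative parameter values; the duality-based models in this work use measured multi-segment curves, plus hysteresis, for each core segment:

```python
import numpy as np

def flux_linkage(i, L_m=2.0, L_air=0.05, i_knee=1.0):
    """Two-slope lambda-i curve: slope L_m for |i| <= i_knee,
    slope L_air beyond the knee (all parameter values illustrative)."""
    i = np.asarray(i, dtype=float)
    lam_knee = L_m * i_knee
    return np.where(np.abs(i) <= i_knee,
                    L_m * i,
                    np.sign(i) * (lam_knee + L_air * (np.abs(i) - i_knee)))
```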
Abstract:
High-resolution and highly precise age models for recent lake sediments (last 100–150 years) are essential for quantitative paleoclimate research. These are particularly important for sedimentological and geochemical proxies, where transfer functions cannot be established and calibration must be based upon the relation of sedimentary records to instrumental data. High-precision dating for the calibration period is most critical, as it directly determines the quality of the calibration statistics. Here, as an example, we compare radionuclide age models obtained for two high-elevation glacial lakes in the Central Chilean Andes (Laguna Negra: 33°38′S/70°08′W, 2,680 m a.s.l. and Laguna El Ocho: 34°02′S/70°19′W, 3,250 m a.s.l.). We show the different numerical models that produce accurate age-depth chronologies based on 210Pb profiles, and we explain how to obtain reduced age-error bars at the bottom part of the profiles, i.e., typically around the end of the 19th century. In order to constrain the age models, we propose a method with four steps: (i) sampling at irregularly-spaced intervals for 226Ra, 210Pb and 137Cs depending on the stratigraphy and microfacies; (ii) systematic comparison of numerical models for the calculation of 210Pb-based age models: constant flux constant sedimentation (CFCS), constant initial concentration (CIC), constant rate of supply (CRS) and sediment isotope tomography (SIT); (iii) numerical constraining of the CRS and SIT models with the 137Cs chronomarker of AD 1964; and (iv) step-wise cross-validation with independent diagnostic environmental stratigraphic markers of known age (e.g., volcanic ash layers, historical floods and earthquakes). In both examples, we also use airborne pollutants such as spheroidal carbonaceous particles (reflecting the history of fossil fuel emissions), excess atmospheric Cu deposition (reflecting the production history of a large local Cu mine), and turbidites related to historical earthquakes. Our results show that the SIT model constrained with the 137Cs AD 1964 peak performs best over the entire chronological profile (last 100–150 years) and yields the smallest standard deviations for the sediment ages. Such precision is critical for the calibration statistics and, ultimately, for the quality of the quantitative paleoclimate reconstruction. The systematic comparison of the CRS and SIT models also helps to validate the robustness of the chronologies in different sections of the profile. Although surprisingly poorly known and under-explored in paleolimnological research, the SIT model has great potential for paleoclimatological reconstructions based on lake sediments.
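Of the numerical models compared, the CRS model has the most compact closed form: under a constant rate of unsupported 210Pb supply, the age at depth x is t(x) = (1/λ) ln(A(0)/A(x)), where A(x) is the cumulative unsupported 210Pb inventory below depth x and λ = ln 2 / (22.3 yr) is the 210Pb decay constant. A minimal sketch of that calculation follows (the implementations compared in the study, and the SIT model in particular, are substantially more involved):

```python
import numpy as np

PB210_LAMBDA = np.log(2) / 22.3      # 210Pb decay constant, 1/yr

def crs_ages(excess_pb210, dry_mass_per_area):
    """CRS ages (years before coring) at the bottom of each slice.

    excess_pb210: unsupported 210Pb activity per unit dry mass per slice
        (total 210Pb minus the 226Ra-supported component), top slice first.
    dry_mass_per_area: dry mass per unit area of each slice.
    The bottom-most age diverges as A(x) -> 0, the well-known CRS
    limitation that constraining with the 137Cs AD 1964 marker mitigates."""
    inventory = excess_pb210 * dry_mass_per_area
    total = inventory.sum()                   # A(0): whole-core inventory
    below = total - np.cumsum(inventory)      # A(x): inventory below each slice
    return np.log(total / below) / PB210_LAMBDA
```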
Abstract:
Today, pupils at the age of 15 have spent their entire lives surrounded by and interacting with diverse forms of computers. Computers are a routine part of their day-to-day life, and computer literacy is now common at a very early age. Over the past five years, technology for teens has become predominantly mobile and ubiquitous within every aspect of their lives. To them, being online is a given. In Germany, 88% of youth aged 12-19 years own a smartphone and about 20% use the Internet via tablets. Meanwhile, more and more young learners bring their devices into the classroom, and pupils increasingly demand innovative and motivating learning scenarios that respond closely to their habits of media use. With this development, a paradigm shift is slowly under way with regard to the use of mobile technology in education. By now, a large body of literature exists that reports concepts, use cases and practical studies for effectively using technology in education. Within this field, a steadily growing body of research examines the use of digital games as an instructional strategy. The core concern of this thesis is the design of mobile games for learning. The conditions and requirements that are vital to make mobile games suitable and effective for learning environments are investigated. The basis for this exploration is the pattern approach, an established form of template that provides solutions for recurrent problems. Building on this acknowledged form of exchanging and re-using knowledge, patterns for game design are used to classify the many existing gameplay rules and mechanisms. This research draws upon pattern descriptions to analyze learning game concepts and to abstract possible relationships between gameplay patterns and learning outcomes. The linkages that surface are the starting points for a series of game design concepts, whose implementations are subsequently evaluated with regard to learning outcomes. The findings and resulting knowledge from this research are made accessible by way of implications and recommendations for future design decisions.
Abstract:
Finite element analysis for prediction of bone strength. Philippe K Zysset, Enrico Dall'Ara, Peter Varga & Dieter H Pahr. BoneKEy Reports (2013) 2, Article number 386. doi:10.1038/bonekey.2013.120. Finite element (FE) analysis has been applied for the past 40 years to simulate the mechanical behavior of bone. Although several validation studies have been performed on specific anatomical sites and load cases, this study aims to review the predictability of human bone strength at the three major osteoporotic fracture sites, as quantified in recently completed in vitro studies at our former institute. Specifically, the performance of FE analysis based on clinical computed tomography (QCT) is compared with that of the current densitometric standards: bone mineral content, bone mineral density (BMD) and areal BMD (aBMD). Clinical fractures were produced in monotonic axial compression of the distal radii and vertebral sections and in side loading of the proximal femora. QCT-based FE models of the three bones were developed to simulate as closely as possible the boundary conditions of each experiment. For all sites, the FE methodology exhibited the lowest errors and the highest correlations in predicting experimental bone strength. Likely due to the improved CT image resolution, the quality of the FE prediction in the peripheral skeleton using high-resolution peripheral CT was superior to that in the axial skeleton with whole-body QCT. Because of its projective and scalar nature, the performance of aBMD in predicting bone strength depended on loading mode and was significantly inferior to FE in axial compression of radial or vertebral sections, but not in side loading of the femur. Considering the cumulated evidence from the published validation studies, it is concluded that FE models provide the most reliable surrogates of bone strength at any of the three fracture sites.
Abstract:
In this paper we examined whether defenders of victims of school bullying befriend similar peers, and whether this similarity is due to selection processes, influence processes, or both. We also examined whether these processes result in different degrees of similarity between peers depending on teachers' self-efficacy and the school climate. We analyzed longitudinal data from 478 Swiss school students employing actor-based stochastic models. Our analyses showed that similarity in defending behavior among friends was due to selection rather than influence. The extent to which adolescents selected peers showing similar defending behavior was related to contextual factors: lower teacher self-efficacy and a positive school climate were associated with stronger selection effects for defending behavior.
Abstract:
Species extinctions are biased towards higher trophic levels, and primary extinctions are often followed by unexpected secondary extinctions. Currently, predictions on the vulnerability of ecological communities to extinction cascades are based on models that focus on bottom-up effects, which cannot capture the effects of extinctions at higher trophic levels. We show, in experimental insect communities, that harvesting of single carnivorous parasitoid species led to a significant increase in extinction rate of other parasitoid species, separated by four trophic links. Harvesting resulted in the release of prey from top-down control, leading to increased interspecific competition at the herbivore trophic level. This resulted in increased extinction rates of non-harvested parasitoid species when their host had become rare relative to other herbivores. The results demonstrate a mechanism for horizontal extinction cascades, and illustrate that altering the relationship between a predator and its prey can cause wide-ranging ripple effects through ecosystems, including unexpected extinctions.
Abstract:
Recent findings demonstrate that trees in deserts are efficient carbon sinks. It remains unknown, however, whether the Clean Development Mechanism will accelerate the planting of trees in non-Annex I dryland countries. We estimated the price of carbon at which a farmer would be indifferent between his customary activity and the planting of trees to trade carbon credits, along an aridity gradient. Carbon yields were simulated by means of the CO2FIX v3.1 model for Pinus halepensis with its respective yield classes along the gradient (arid conditions, 100 mm, to dry sub-humid conditions, 900 mm). Wheat and pasture yields were predicted with broadly similar nitrogen-based quadratic models, using 30 years of weather data to simulate moisture stress. Stochastic production, input and output prices were then simulated in a Monte Carlo framework. Results show that, despite the high levels of carbon uptake, carbon trading through afforestation is unprofitable anywhere along the gradient. Indeed, the price of carbon would have to rise unrealistically high, and the certification costs would have to drop significantly, to make the Clean Development Mechanism worthwhile for farmers in non-Annex I dryland countries. From a government agency's point of view the Clean Development Mechanism is attractive. However, such agencies will find it difficult to demonstrate "additionality", even if the rule may be applied somewhat flexibly. Based on these findings, we further discuss why the Clean Development Mechanism, a supposedly pro-poor instrument, fails to assist farmers in non-Annex I dryland countries living at minimum subsistence level.
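The indifference-price logic can be condensed into a small Monte Carlo sketch: the farmer switches when the discounted carbon revenue covers the forgone customary income plus planting and certification costs. Everything below (distributions, cost figures, the single-payment simplification) is illustrative and not the CO2FIX or thesis setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def breakeven_carbon_price(npv_custom, carbon_tco2_ha,
                           plant_cost, cert_cost, r=0.05, horizon=30):
    """Carbon price ($/tCO2) equating the expected NPVs of the customary
    activity and of afforestation, treating the carbon credits as a single
    discounted payment at the end of the horizon (a simplification)."""
    discount = 1.0 / (1.0 + r) ** horizon
    gap = np.mean(npv_custom) + plant_cost + cert_cost   # $/ha to recover
    return gap / (np.mean(carbon_tco2_ha) * discount)

# Toy usage with made-up distributions ($/ha and tCO2/ha):
wheat_npv = rng.normal(3000, 800, 10_000)
carbon = rng.normal(150, 30, 10_000)
print(breakeven_carbon_price(wheat_npv, carbon, plant_cost=1200, cert_cost=500))
```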