974 results for Sharp-interface Model
Abstract:
Study Design. An experimental animal study. Objective. To investigate histomorphometric and radiographical changes in the BB.4S rat model after PEEK (polyetheretherketone) nonfusion interspinous device implantation. Summary of Background Data. Clinical effectiveness of the PEEK nonfusion spine implant Wallis (Abbott, Bordeaux, France; now Zimmer, Warsaw, IN) is well documented. However, there is a lack of evidence on the long-term effects of this implant on bone, in particular its influence on structural changes of bone elements of the lumbar spine. Methods. Twenty-four male BB.4S rats aged 11 weeks underwent surgery for implantation of a PEEK nonfusion interspinous device or for a sham procedure in 3 groups of 8 animals each: 1) implantation at level L4–L5; 2) implantation at level L5–L6; and 3) sham surgery. Eleven weeks postoperatively, osteolyses at the implant-bone interface were measured radiographically, bone mineral density of vertebral bodies was analyzed using osteodensitometry, and bone mineral content as well as resorption of the spinous processes were examined by histomorphometry. Results. Resorption of the spinous processes at the site of the interspinous implant was found in all treated segments. There was no significant difference in either bone density of vertebral bodies or histomorphometric structure of the spinous processes between adjacent vertebral bodies, between treated and untreated segments, or between groups. Conclusion. These findings indicate that resorption of the spinous processes, as a result of implant loosening, inhibits the targeted load redistribution through the PEEK nonfusion interspinous device in the lumbar spinal segment of the rat. This leads to reduced long-term stability of the implant in the animal model. These results suggest that PEEK nonfusion interspinous devices like the Wallis implant may have time-limited effects and should only be used for specified indications.
Abstract:
Previous studies in our laboratory have indicated that heparan sulfate proteoglycans (HSPGs) play an important role in murine embryo implantation. To investigate the potential function of HSPGs in human implantation, two human cell lines (RL95 and JAR) were selected to model uterine epithelium and embryonal trophectoderm, respectively. A heterologous cell-cell adhesion assay showed that initial binding between JAR and RL95 cells is mediated by cell surface glycosaminoglycans (GAGs) with heparin-like properties, i.e., heparan sulfate and dermatan sulfate. Furthermore, a single class of highly specific, protease-sensitive heparin/heparan sulfate binding sites exists on the surface of RL95 cells. Three heparin-binding, tryptic peptide fragments were isolated from RL95 cell surfaces and their amino termini partially sequenced. Reverse transcription-polymerase chain reaction (RT-PCR) generated 1 to 4 PCR products per tryptic peptide. Northern blot analysis of RNA from RL95 cells using one of these RT-PCR products identified a 1.2 kb mRNA species (p24). The amino acid sequence predicted from the cDNA sequence contains a putative heparin-binding domain. A synthetic peptide representing this putative heparin-binding domain was used to generate a rabbit polyclonal antibody (anti-p24). Indirect immunofluorescence studies on RL95 and JAR cells as well as binding studies of anti-p24 to intact RL95 cells demonstrate that p24 is distributed on the cell surface. Western blots of RL95 membrane preparations identify a 24 kDa protein (p24) highly enriched in the 100,000 g pellet plasma membrane-enriched fraction. p24 eluted from membranes with 0.8 M NaCl, but not 0.6 M NaCl, suggesting that it is a peripheral membrane component. Solubilized p24 binds heparin by heparin affinity chromatography and in ¹²⁵I-heparin binding assays.
Furthermore, indirect immunofluorescence studies indicate that cytotrophoblast of floating and attached villi of the human fetal-maternal interface are recognized by anti-p24. The study also indicates that the HSPG, perlecan, accumulates where chorionic villi are attached to uterine stroma and where p24-expressing cytotrophoblast penetrate the stroma. Collectively, these data indicate that p24 is a cell surface membrane-associated heparin/heparan sulfate binding protein found in cytotrophoblast, but not many other cell types of the fetal-maternal interface. Furthermore, p24 colocalizes with HSPGs in regions of cytotrophoblast invasion. These observations are consistent with a role for HSPGs and HSPG binding proteins in human trophoblast-uterine cell interactions.
Abstract:
The liquid–vapor interface is difficult to access experimentally but is of interest from a theoretical and applied point of view and has particular importance in atmospheric aerosol chemistry. Here we examine the liquid–vapor interface for mixtures of water, sodium chloride, and formic acid, an abundant chemical in the atmosphere. We compare the results of surface tension and X-ray photoelectron spectroscopy (XPS) measurements over a wide range of formic acid concentrations. Surface tension measurements provide a macroscopic characterization of solutions ranging from 0 to 3 M sodium chloride and from 0 to over 0.5 mole fraction formic acid. Sodium chloride was found to be a weak salting-out agent for formic acid, with surface excess depending only slightly on salt concentration. In situ XPS provides a complementary molecular-level description of the liquid–vapor interface. XPS measurements over an experimental probe depth of 51 Å gave the C 1s to O 1s ratio for both total oxygen and oxygen from water. XPS also provides detailed electronic structure information that is inaccessible by surface tension. Density functional theory calculations were performed to understand the observed shift in C 1s binding energies to lower values with increasing formic acid concentration. Part of the experimental −0.2 eV shift can be assigned to the solution composition changing from predominantly monomers of formic acid to a combination of monomers and dimers; however, the lack of an appropriate reference to calibrate the absolute binding energy scale at high formic acid mole fraction complicates the interpretation. Our data are consistent with surface tension yielding a significantly more surface-sensitive measurement than XPS, due to the relatively weak propensity of formic acid for the interface. A simple model allowed us to replicate the XPS results under the assumption that the surface excess was contained in the top four angstroms of solution.
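A toy version of that final model can be written down directly. The sketch below is ours, not the authors' code: concentrations are in arbitrary units, the profile shape is invented, and it uses the common assumption that the 51 Å probe depth corresponds to roughly three attenuation lengths.

```python
import math

def xps_signal(conc_profile, lam, dz=0.1, zmax=200.0):
    """Numerically integrate an exponentially attenuated XPS signal:
    S = sum over z of c(z) * exp(-z / lam) * dz, with depths z in angstroms."""
    s, z = 0.0, 0.0
    while z < zmax:
        s += conc_profile(z) * math.exp(-z / lam) * dz
        z += dz
    return s

# Illustrative concentration profile: the surface excess sits entirely in the
# top 4 angstroms, on top of a uniform bulk value (arbitrary units).
def profile(z, surface_c=5.0, bulk_c=1.0, layer=4.0):
    return surface_c if z < layer else bulk_c

lam = 51.0 / 3          # attenuation length, if the 51 A probe depth is ~3*lambda
with_excess = xps_signal(profile, lam)
bulk_only = xps_signal(lambda z: 1.0, lam)
enhancement = with_excess / bulk_only
```

Because most of the attenuated signal still comes from the bulk, even a five-fold surface enhancement confined to 4 Å changes the integrated XPS signal only modestly, which is why surface tension ends up the more surface-sensitive probe.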
Abstract:
At first sight, experimenting and modeling form two distinct modes of scientific inquiry. This spurs philosophical debates about how the distinction should be drawn (e.g. Morgan 2005, Winsberg 2009, Parker 2009). But much scientific practice casts serious doubt on the idea that the distinction makes much sense. There are two worries. First, the practices of modeling and experimenting are often intertwined in intricate ways, because much modeling involves experimenting and the interpretation of many experiments relies upon models. Second, there are borderline cases that seem to blur the distinction between experiment and model (if there is any). My talk tries to defend the philosophical project of distinguishing models from experiments and to advance the related philosophical debate. I begin by providing a minimalist framework for conceptualizing experimenting and modeling and their mutual relationships. The methods are conceptualized as different types of activities, each characterized by a primary goal. This minimalist framework, which should be uncontroversial, suffices to accommodate the first worry. I address the second worry by suggesting several ways to conceptualize the distinction more flexibly, and I make a concrete suggestion of how the distinction may be drawn. I use examples from the history of science to argue my case. The talk concentrates on models and experiments, but I will comment on simulations too.
Abstract:
Though E2F1 is deregulated in most human cancers by mutations of the p16-cyclin D-Rb pathway, it also exhibits tumor suppressive activity. A transgenic mouse model overexpressing E2F1 under the control of the bovine keratin 5 (K5) promoter exhibits epidermal hyperplasia and spontaneously develops tumors in the skin and other epithelial tissues after one year of age. In a p53-deficient background, aberrant apoptosis in K5 E2F1 transgenic epidermis is reduced and tumorigenesis is accelerated. In sharp contrast, K5 E2F1 transgenic mice are resistant to papilloma formation in the DMBA/TPA two-stage carcinogenesis protocol. K5 E2F4 and K5 DP1 transgenic mice were also characterized; both display epidermal hyperplasia but do not develop spontaneous tumors, even in cooperation with p53 deficiency. These transgenic mice do not have increased levels of apoptosis in their skin and are more susceptible to papilloma formation in the two-stage carcinogenesis model. These studies show that deregulated proliferation does not necessarily lead to tumor formation and that the ability to suppress skin carcinogenesis is unique to E2F1. E2F1 can also suppress skin carcinogenesis when okadaic acid is used as the tumor promoter and when a pre-initiated mouse model is used, demonstrating that E2F1's tumor suppressive activity is not specific to TPA and occurs at the promotion stage. E2F1 was thought to induce p53-dependent apoptosis through upregulation of the p19ARF tumor suppressor, which inhibits mdm2-mediated p53 degradation. Consistent with in vitro studies, the overexpression of E2F1 in mouse skin results in the transcriptional activation of p19ARF and the accumulation of p53. Inactivation of either p19ARF or p53 restores the sensitivity of K5 E2F1 transgenic mice to DMBA/TPA carcinogenesis, demonstrating that an intact p19ARF-p53 pathway is necessary for E2F1 to suppress carcinogenesis.
Surprisingly, while p53 is required for E2F1 to induce apoptosis in mouse skin, p19ARF is not, and inactivation of p19ARF actually enhances E2F1-induced apoptosis and proliferation in transgenic epidermis. This indicates that ARF is important for E2F1-induced tumor suppression but not apoptosis. Senescence is another potential mechanism of tumor suppression that involves p53 and p19ARF. K5 E2F1 transgenic mice initiated with DMBA and treated with TPA show an increased number of senescent cells in their epidermis. These experiments demonstrate that E2F1's unique tumor suppressive activity in two-stage skin carcinogenesis can be genetically separated from E2F1-induced apoptosis and suggest that senescence utilizing the p19ARF-p53 pathway plays a role in tumor suppression by E2F1.
Abstract:
Early diagenetic dolomite beds were sampled during Ocean Drilling Program (ODP) Leg 201 at four reoccupied ODP Leg 112 sites on the Peru continental margin (Sites 1227/684, 1228/680, 1229/681 and 1230/685) and analysed for petrography, mineralogy, δ13C, δ18O and 87Sr/86Sr values. The results are compared with the chemistry, and δ13C and 87Sr/86Sr values, of the associated porewater. Petrographic relationships indicate that dolomite forms as a primary precipitate in porous diatom ooze and siliciclastic sediment and is not replacing the small amounts of precursor carbonate. Dolomite precipitation often pre-dates the formation of framboidal pyrite. Most dolomite layers show 87Sr/86Sr ratios similar to the composition of Quaternary seawater and do not indicate a contribution from the hypersaline brine, which is present at a greater burial depth. Also, the δ13C values of the dolomite are not in equilibrium with the δ13C values of the dissolved inorganic carbon in the associated modern porewater. Both petrography and 87Sr/86Sr ratios suggest a shallow depth of dolomite formation in the uppermost sediment (<30 m below the seafloor). A significant depletion in dissolved Mg and Ca in the porewater constrains the present site of dolomite precipitation, which co-occurs with a sharp increase in alkalinity and microbial cell concentration at the sulphate-methane interface. It has been hypothesized that microbial 'hot-spots', such as the sulphate-methane interface, may act as focused sites of dolomite precipitation. Varying δ13C values from -15 per mil to +15 per mil for the dolomite are consistent with precipitation at a dynamic sulphate-methane interface, where the δ13C of the dissolved inorganic carbon would likewise be variable. A dynamic deep biosphere with upward and downward migration of the sulphate-methane interface can be simulated using a simple numerical diffusion model for sulphate concentration in a sedimentary sequence with variable input of organic matter.
Thus, the study of dolomite layers in ancient organic carbon-rich sedimentary sequences can provide a useful window into the palaeo-dynamics of the deep biosphere.
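A toy version of the numerical diffusion model described in this abstract can be sketched as follows. This is our illustration, not the authors' code: all constants (diffusivity, consumption rates, column depth, seawater sulphate) are invented, and variable organic-matter input enters only as a first-order sulphate consumption rate.

```python
def sulfate_profile(k, n=100, dz=1.0, D=1e-2, steps=15000, c_top=28.0):
    """Relax dC/dt = D*d2C/dz2 - k*C to (near) steady state with an explicit
    finite-difference scheme. Depth in metres, C in mM; constants are invented.
    k lumps sulphate reduction driven by the local organic-matter input."""
    c = [c_top] * n
    dt = 0.4 * dz * dz / D          # satisfies the explicit stability limit
    for _ in range(steps):
        new = c[:]
        for i in range(1, n - 1):
            lap = (c[i + 1] - 2.0 * c[i] + c[i - 1]) / (dz * dz)
            new[i] = c[i] + dt * (D * lap - k * c[i])
        new[0] = c_top               # fixed seawater sulphate at the seafloor
        new[-1] = new[-2]            # no-flux base of the model column
        c = new
    return c

def smi_depth(c, thresh=1.0, dz=1.0):
    """First depth at which sulphate drops below a threshold: a proxy for the
    sulphate-methane interface (SMI)."""
    for i, v in enumerate(c):
        if v < thresh:
            return i * dz
    return None

shallow = smi_depth(sulfate_profile(k=8e-4))  # high organic input: shallow SMI
deep = smi_depth(sulfate_profile(k=1e-4))     # low organic input: deeper SMI
```

Raising the consumption rate pulls the modelled sulphate-methane interface upward; cycling it up and down over time would reproduce the migrating interface invoked for the variable dolomite δ13C values.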
Abstract:
Upwelling along the western coast of Africa south of the equator may be partitioned into three major areas, each having its own dynamics and history: (1) the eastern equatorial region, comprising the Congo Fan and the area of Mid-Angola; (2) the Namibia upwelling system, extending from the Walvis Ridge to Lüderitz; and (3) the Cape Province region, where upwelling is subdued. The highest nutrient contents in thermocline waters are in the northern region, the lowest in the southern one. Wind effects are at a maximum near the southern end of the Namibia upwelling system, and maximum productivity occurs near Walvis Bay, where the product of upwelling rate and nutrient content of upwelled waters is at a maximum. In the Congo/Angola region, opal tends to follow organic carbon quite closely in the Quaternary record. However, organic carbon has a strong precessional component, while opal does not. Despite relatively low opal content, sediments off Angola show the same patterns as those off the Congo; thus, they are part of the same regime. The spectrum shows nonlinear interference patterns between high- and low-latitude forcing, presumably tied to thermocline fertility and wind. On Walvis Ridge, as in the Congo-Angola region, the organic matter record behaves normally; that is, supply is high during glacial periods. In contrast, interglacial periods are favorable for opal deposition. The pattern suggests reduction in silicate content of the thermocline during glacial periods. The reversed phase (opal abundant during interglacials) persists during the entire Pleistocene and can be demonstrated deep into the Pliocene, not just on Walvis Ridge but all the way to the Oranje River and off the Cape Province.
From comparison with other regions, it appears that silicate is diminished in the global thermocline, on average, whenever winds become strong enough to substantially shorten the residence time of silicate in upper waters (Walvis Hypothesis, solving the Walvis Paradox of reversed phase in opal deposition). The central discovery during Leg 175 was the documentation of a late Pliocene opal maximum for the entire Namibia upwelling system (early Matuyama Diatom Maximum [MDM]). The maximum is centered on the period between the end of the Gauss Chron and the beginning of the Olduvai Chron. A rather sharp increase in both organic matter deposition and opal deposition occurs near 3 Ma in the middle of the Gauss Chron, in association with a series of major cooling steps. As concerns organic matter, high production persists at least to 1 Ma, when there are large changes in variability, heralding subsequent pulsed production periods. From 3 to 2 Ma, organic matter and opal deposition run more or less parallel, but after 2 Ma opal goes out of phase with organic matter. Apparently, this is the point when silicate becomes limiting to opal production. Thus, the MDM conundrum is solved by linking planetary cooling to increased mixing and upwelling (ramping up to the MDM) and a general removal of silicate from the upper ocean through excess precipitation over global supply (ramping down from the MDM). The hypothesis concerning the origin of the Namibia opal acme or MDM is fundamentally the same as the Walvis Hypothesis, stating that glacial conditions result in removal of silicate from the thermocline (and quite likely from the ocean as a whole, given enough time). The Namibia opal acme, and other opal maxima in the latest Neogene in other regions of the ocean, marks the interval when a cooling ocean selectively removes the abundant silicate inherited from a warm ocean. When the excess silicate is removed, the process ceases. 
According to the data gathered during Leg 175, major upwelling started in the late part of the late Miocene. Presumably, this process contributed to the drawing down of carbon dioxide from the atmosphere, helping to prepare the way for Northern Hemisphere glaciation.
Abstract:
Easing of economic sanctions by Western countries in 2012 augmented the prospect that Myanmar will expand its exports. On the other hand, a sharp rise in natural resource exports during the sanctions raises concern about the "Dutch disease". This study projects Myanmar's export potential by calculating counterfactual export values with an augmented gravity model that takes into account the effects of natural resource exports on non-resource exports. Without taking these effects into account, the counterfactual predicted values of non-resource exports during 2004–2011 are more than five times larger than the actual exports. If we do take them into account, however, the predicted values are smaller than the actual exports. The empirical results imply that the "Dutch disease" is more at stake in Myanmar than in any other Southeast Asian country.
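The counterfactual exercise behind an augmented gravity model can be sketched in a few lines. Everything below is illustrative: the coefficients, the GDP, distance and resource-share numbers are invented round values, not the study's estimates; the point is only how a counterfactual is formed by switching off the resource-export term.

```python
import math

# Illustrative coefficients for an augmented gravity equation (invented, not
# the study's estimates):
#   ln X_ij = b0 + b1*ln(GDP_i * GDP_j) + b2*ln(dist_ij) + b3*res_share_i
b0, b1, b2, b3 = -10.0, 0.9, -1.1, -2.5   # b3 < 0: resource exports crowd out others

def predicted_exports(gdp_i, gdp_j, dist, res_share):
    """Fitted-value prediction from the (hypothetical) gravity equation."""
    return math.exp(b0 + b1 * math.log(gdp_i * gdp_j)
                    + b2 * math.log(dist) + b3 * res_share)

# Toy numbers: GDPs in billions USD, distance in km, resource share in [0, 1].
actual_setting = predicted_exports(60.0, 5000.0, 3000.0, res_share=0.4)
counterfactual = predicted_exports(60.0, 5000.0, 3000.0, res_share=0.0)
ratio = counterfactual / actual_setting   # exp(-b3 * 0.4) = e^1.0, about 2.7
```

With these invented coefficients, removing the Dutch-disease drag multiplies predicted non-resource exports by e^(−b3·0.4) ≈ 2.7, mirroring the paper's finding that counterfactuals without the resource term far exceed actual exports.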
Abstract:
Amundsenisen is an ice field, 80 km² in area, located in southern Spitsbergen, Svalbard. Radio-echo sounding measurements at 20 MHz show high-intensity returns from a nearly flat basal reflector in four zones, all of them with ice thickness larger than 500 m. These reflections suggest possible subglacial lakes. To determine whether basal liquid water is compatible with current pressure and temperature conditions, we aim to apply a thermomechanical model with a free boundary at the bed, defined as the solution of a Stefan problem for the ice-subglacial lake interface. The complexity of the problem suggests the use of a two-dimensional model, but this requires that well-defined flowlines across the zones with suspected subglacial lakes are available. We define these flowlines from the solution of a three-dimensional dynamical model, and this is the main goal of the present contribution. We apply a three-dimensional full-Stokes model of glacier dynamics to the Amundsenisen icefield. We are mostly interested in the plateau zone of the icefield, so we introduce artificial vertical boundaries at the heads of the main outlet glaciers draining Amundsenisen. At these boundaries we set velocity boundary conditions. Velocities near the centres of the heads of the outlets are known from experimental measurements. The velocities at depth are calculated according to an SIA velocity-depth profile, and those at the rest of the transverse section are computed following Nye's (1952) model. We select as the southeastern boundary of the model domain an ice divide, where we set boundary conditions of zero horizontal velocities and zero vertical shear stresses. The upper boundary is a traction-free boundary.
For the basal boundary conditions, on the zones of suspected subglacial lakes we set free-slip boundary conditions, while for the rest of the basal boundary we use a friction law linking the sliding velocity to the basal shear stress, in such a way that, contrary to the shallow ice approximation, the basal shear stress is not equal to the basal driving stress but is rather part of the solution.
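The SIA velocity-depth profile used at the velocity boundaries admits a compact sketch. The code below is our illustration, not the authors' implementation: it scales the standard shallow-ice shape function (Glen exponent n = 3) between a prescribed basal sliding velocity and a measured centre-line surface velocity, with invented numbers.

```python
def sia_velocity(u_surface, u_basal, H, z, n=3.0):
    """Shallow-ice-approximation profile: horizontal velocity at height z above
    the bed, scaled so that u(0) = u_basal (sliding) and u(H) = u_surface
    (the measured value at the outlet head)."""
    shape = 1.0 - (1.0 - z / H) ** (n + 1.0)   # 0 at the bed, 1 at the surface
    return u_basal + (u_surface - u_basal) * shape

# Illustrative numbers: 500 m of ice, 20 m/yr at the surface, 2 m/yr sliding.
profile = [sia_velocity(20.0, 2.0, 500.0, z)
           for z in (0.0, 125.0, 250.0, 375.0, 500.0)]
```

Most of the deformation is concentrated near the bed: with n = 3 the velocity at mid-depth is already within about 6% of the surface value, which is why prescribing such a profile at the artificial outlet boundaries is a reasonable closure.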
Abstract:
In spite of the increasing presence of Semantic Web facilities, only a limited number of the resources available on the Internet provide semantic access. Recent initiatives such as the emerging Linked Data Web are providing semantic access to available data by porting existing resources to the Semantic Web using different technologies, such as database-semantic mapping and scraping. Nevertheless, existing scraping solutions are based on ad hoc approaches complemented with graphical interfaces for speeding up scraper development. This article proposes a generic framework for web scraping based on semantic technologies. This framework is structured in three levels: scraping services, the semantic scraping model, and syntactic scraping. The first level provides an interface through which generic applications or intelligent agents can gather information from the web at a high level. The second level defines a semantic RDF model of the scraping process, in order to provide a declarative approach to the scraping task. Finally, the third level provides an implementation of the RDF scraping model for specific technologies. The work has been validated in a scenario that illustrates its application to mashup technologies.
Abstract:
Adaptive systems use feedback as a key strategy to cope with uncertainty and change in their environments. The information fed back from the sensorimotor loop into the control architecture can be used to change different elements of the controller at four different levels: parameters of the control model, the control model itself, the functional organization of the agent, and the functional components of the agent. The complexity of such a space of potential configurations is daunting. The only viable alternative for the agent, in practical, economical, and evolutionary terms, is the reduction of the dimensionality of the configuration space. This reduction is achieved both by functionalisation (or, to be more precise, by interface minimization) and by patterning, i.e. the selection among a predefined set of organisational configurations. This last analysis lets us state the central problem of how autonomy emerges from the integration of the cognitive, emotional and autonomic systems in strict functional terms: autonomy is achieved by the closure of functional dependency. In this paper we show a general model of how emotional biological systems operate following this theoretical analysis, and how this model is also applicable to a wide spectrum of artificial systems.
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
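As a concrete illustration of combining tools that annotate a common level (the mitigation suggested above for limitation (2)), the sketch below applies a simple majority vote to the outputs of several POS taggers. The tagger outputs, the tagset and the token alignment are invented for illustration, and the approach presupposes exactly the unified annotation schema that limitation (3) currently prevents.

```python
from collections import Counter

def combine_annotations(annotations):
    """Majority vote over per-token tags. `annotations` is a list of tag
    sequences, one per tagger, all aligned to the same tokens and expressed
    in a shared, unified tagset."""
    combined = []
    for token_tags in zip(*annotations):
        tag, _count = Counter(token_tags).most_common(1)[0]
        combined.append(tag)
    return combined

# Hypothetical outputs of three POS taggers for "Time flies like an arrow":
tagger_a = ["NOUN", "VERB", "ADP", "DET", "NOUN"]
tagger_b = ["NOUN", "NOUN", "ADP", "DET", "NOUN"]
tagger_c = ["NOUN", "VERB", "VERB", "DET", "NOUN"]
consensus = combine_annotations([tagger_a, tagger_b, tagger_c])
# consensus -> ["NOUN", "VERB", "ADP", "DET", "NOUN"]
```

Each tagger errs on a different token, yet the vote recovers the correct sequence; this only works because all three produce annotations for the same level in the same schema, which is precisely why ad hoc annotation schemas block such integration.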
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies will be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. Then again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
Background Most aerial plant parts are covered with a hydrophobic lipid-rich cuticle, which is the interface between the plant organs and the surrounding environment. Plant surfaces may have a high degree of hydrophobicity because of the combined effects of surface chemistry and roughness. The physical and chemical complexity of the plant cuticle limits the development of models that explain its internal structure and interactions with surface-applied agrochemicals. In this article we introduce a thermodynamic method for estimating the solubilities of model plant surface constituents and relating them to the effects of agrochemicals. Results Following the van Krevelen and Hoftyzer method, we calculated the solubility parameters of three model plant species and eight compounds that differ in hydrophobicity and polarity. In addition, intact tissues were examined by scanning electron microscopy and the surface free energy, polarity, solubility parameter and work of adhesion of each were calculated from contact angle measurements of three liquids with different polarities. By comparing the affinities between plant surface constituents and agrochemicals derived from (a) theoretical calculations and (b) contact angle measurements we were able to distinguish the physical effect of surface roughness from the effect of the chemical nature of the epicuticular waxes. A solubility parameter model for plant surfaces is proposed on the basis of an increasing gradient from the cuticular surface towards the underlying cell wall. Conclusions The procedure enabled us to predict the interactions among agrochemicals, plant surfaces, and cuticular and cell wall components, and promises to be a useful tool for improving our understanding of biological surface interactions.
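The group-contribution step of the van Krevelen and Hoftyzer method can be sketched as follows. The group values and the example molecule below are illustrative round numbers chosen to be of the right order, not the tabulated constants or the species used in the study.

```python
import math

# Illustrative group-contribution table (Fd and Fp in J^0.5 cm^1.5 mol^-1,
# Eh in J/mol, molar volume V in cm^3/mol). Round numbers for illustration
# only; a real calculation should use Van Krevelen's tabulated values.
GROUPS = {
    "CH3": {"Fd": 420.0, "Fp": 0.0,   "Eh": 0.0,     "V": 33.5},
    "CH2": {"Fd": 270.0, "Fp": 0.0,   "Eh": 0.0,     "V": 16.1},
    "OH":  {"Fd": 210.0, "Fp": 500.0, "Eh": 20000.0, "V": 10.0},
}

def hansen_parameters(formula):
    """Hoftyzer-van Krevelen estimates of the Hansen components (MPa^0.5):
    delta_d = sum(Fd)/V, delta_p = sqrt(sum(Fp^2))/V, delta_h = sqrt(sum(Eh)/V).
    `formula` maps group name to occurrence count."""
    V = sum(GROUPS[g]["V"] * n for g, n in formula.items())
    dd = sum(GROUPS[g]["Fd"] * n for g, n in formula.items()) / V
    dp = math.sqrt(sum(GROUPS[g]["Fp"] ** 2 * n for g, n in formula.items())) / V
    dh = math.sqrt(sum(GROUPS[g]["Eh"] * n for g, n in formula.items()) / V)
    total = math.sqrt(dd ** 2 + dp ** 2 + dh ** 2)
    return dd, dp, dh, total

# A 1-hexanol-like molecule: CH3-(CH2)5-OH
dd, dp, dh, total = hansen_parameters({"CH3": 1, "CH2": 5, "OH": 1})
```

Affinity between an agrochemical and a wax or cell-wall constituent is then judged by the distance between their solubility parameters: the smaller the difference, the better the predicted mutual solubility.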
Abstract:
This dissertation, whose research has been conducted at the Group of Electronic and Microelectronic Design (GDEM) within the framework of the project Power Consumption Control in Multimedia Terminals (PCCMUTE), focuses on the development of an energy estimation model for a battery-powered embedded processor board. The main objectives and contributions of the work are summarized as follows. A model is proposed to obtain accurate energy estimation results based on the linear correlation between performance monitoring counters (PMCs) and energy consumption. Considering the uniqueness of the appropriate PMCs for each system, the modeling methodology is improved to obtain stable accuracies with slight variations among multiple scenarios and to be repeatable on other systems. It includes two steps: the former, the PMC-filter, identifies the most appropriate set among the available PMCs of a system; the latter, the k-fold cross-validation method, avoids bias during the model training stage. The methodology is implemented on a commercial embedded board running the 2.6.34 Linux kernel and PAPI, a cross-platform interface to configure and access PMCs. The results show that the methodology is able to maintain good stability in different scenarios and provide robust estimation results, with the average relative error being less than 5%.
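The two-step methodology (a PMC-filter followed by k-fold cross-validation of a linear model) can be sketched on synthetic data. Everything below is an illustrative assumption, not the project's actual code: the counters, the synthetic trace and the single-counter model are invented, and a real PMC set would be larger and board-specific.

```python
import random
import statistics

random.seed(0)

# Synthetic trace: energy correlates with pmc0; pmc1 is uncorrelated noise.
n = 60
pmc0 = [random.uniform(1e6, 5e6) for _ in range(n)]       # e.g. retired instructions
pmc1 = [random.uniform(0.0, 1e6) for _ in range(n)]       # an irrelevant counter
energy = [2e-6 * x + 0.5 + random.gauss(0, 0.05) for x in pmc0]   # joules

def pearson(x, y):
    """Pearson correlation coefficient."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return cov / var

# Step 1 (PMC-filter): keep the counter best correlated with measured energy.
best = max([pmc0, pmc1], key=lambda c: abs(pearson(c, energy)))

def fit(x, y):
    """Ordinary least squares for y = a*x + b."""
    mx, my = statistics.mean(x), statistics.mean(y)
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Step 2: k-fold cross-validation of the one-counter linear model.
k, errors = 5, []
for fold in range(k):
    test_idx = set(range(fold, n, k))
    xtr = [best[i] for i in range(n) if i not in test_idx]
    ytr = [energy[i] for i in range(n) if i not in test_idx]
    a, b = fit(xtr, ytr)
    for i in test_idx:
        errors.append(abs(a * best[i] + b - energy[i]) / energy[i])
mean_rel_error = sum(errors) / len(errors)
```

Holding each fold out of training keeps the reported relative error honest, which is the role the k-fold step plays in avoiding bias during model training.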
Abstract:
Adaptive agents use feedback as a key strategy to cope with uncertainty and change in their environments. The information fed back from the sensorimotor loop into the control subsystem can be used to change four different elements of the controller: parameters associated with the control model, the control model itself, the functional organization of the agent, and the functional realization of the agent. There are many change alternatives and hence the complexity of the agent's space of potential configurations is daunting. The only viable alternative for space- and time-constrained agents, in practical, economical, and evolutionary terms, is to achieve a reduction of the dimensionality of this configuration space. Emotions play a critical role in this reduction. The reduction is achieved by functionalization, interface minimization and patterning, i.e. by selection among a predefined set of organizational configurations. This analysis lets us state how autonomy emerges from the integration of cognitive, emotional and autonomic systems in strict functional terms: autonomy is achieved by the closure of functional dependency. Emotion-based morphofunctional systems are able to exhibit complex adaptation patterns at a reduced cognitive cost. In this article we show a general model of how emotion supports functional adaptation and how emotional biological systems operate following this theoretical model. We also show how this model is applicable to the construction of a wide spectrum of artificial systems.