948 results for distributed model
Abstract:
27-channel EEG potential map series were recorded from 12 normal subjects with closed and open eyes. Intracerebral dipole model source locations in the frequency domain were computed. Eye opening (visual input) caused centralization (convergence and elevation) of the source locations of the seven frequency bands, indicative of generalized activity; in particular, there was a clear anteriorization of the α-2 (10.5–12 Hz) and β-2 (18.5–21 Hz) sources (α-2 also shifted to the left). Complexity of the map series' trajectories in state space (assessed by Global Dimensional Complexity and Global OMEGA Complexity) increased significantly with eye opening, indicative of more independent, parallel, active processes. Contrary to PET and fMRI findings, these results suggest that brain activity is more distributed and independent during visual input than after eye closing (when it is more localized and more posterior).
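Of the two complexity measures named above, Global OMEGA Complexity has a compact standard definition: the exponential of the entropy of the normalized eigenvalue spectrum of the spatial covariance matrix across channels. A minimal numpy sketch, assuming that standard definition (the function name and the exact eigenvalue threshold are illustrative, not taken from the paper):

```python
import numpy as np

def omega_complexity(eeg):
    """Omega complexity of a multichannel signal.

    eeg: array of shape (n_samples, n_channels).
    Returns a value between 1 (one global process drives all
    channels) and n_channels (fully independent processes).
    """
    centered = eeg - eeg.mean(axis=0)
    cov = centered.T @ centered / len(eeg)      # spatial covariance
    eigvals = np.linalg.eigvalsh(cov)
    eigvals = eigvals[eigvals > 1e-12]          # drop numerical zeros
    p = eigvals / eigvals.sum()                 # normalized spectrum
    return float(np.exp(-np.sum(p * np.log(p))))
```

A value near 1 indicates one dominant global process; a value near the channel count indicates many independent parallel processes, matching the interpretation of increased complexity above.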
Abstract:
The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must consider a simplified approach that is not highly parameter dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R, for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at http://www.flow-r.org), and has been successfully applied to case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model was also found suitable for assessing other natural hazards such as rockfall, snow avalanches and floods. The model allows for automatic source area delineation, given user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations of the DEM and avoids over-channelization, and thus produces more realistic extents. The choice of datasets and algorithms is open to the user, which makes the model adaptable to various applications and levels of dataset availability. Amongst the possible datasets, the DEM is the only one that is strictly needed for both the source area delineation and the propagation assessment; its quality is of major importance for the accuracy of the results.
We consider a 10 m DEM resolution a good compromise between processing time and quality of results. However, valuable results have still been obtained on the basis of lower-quality DEMs with 25 m resolution.
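Holmgren's direction algorithm, which the improved spreading algorithm builds on, distributes flow from a cell to its downslope neighbours in proportion to tan(β)^x, where β is the slope to each neighbour and the exponent x controls spreading. A minimal sketch of the original weighting on a single 3×3 DEM window (Flow-R's modifications for DEM-noise robustness and against over-channelization are not reproduced here; the function name and default x are illustrative):

```python
import numpy as np

def holmgren_weights(window, cellsize=10.0, x=4.0):
    """Holmgren multiple-flow-direction weights for the centre cell
    of a 3x3 DEM window: each downslope neighbour i receives
    p_i = tan(beta_i)^x / sum_j tan(beta_j)^x."""
    centre = window[1, 1]
    weights = np.zeros((3, 3))
    for r in range(3):
        for c in range(3):
            if r == 1 and c == 1:
                continue
            dist = cellsize * np.hypot(r - 1, c - 1)   # diagonals ~ sqrt(2)
            tan_beta = (centre - window[r, c]) / dist  # slope to neighbour
            if tan_beta > 0:                           # downslope only
                weights[r, c] = tan_beta ** x
    total = weights.sum()
    return weights / total if total > 0 else weights   # flat/pit: no flow
```

With x = 1 the flow spreads widely over all downslope cells; as x grows, the weights converge towards the single steepest-descent direction, which is the over-channelization behaviour the improved algorithm is designed to temper.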
Abstract:
We describe a system for performing SLA-driven management and orchestration of distributed infrastructures composed of services supporting mobile computing use cases. In particular, we focus on a Follow-Me Cloud scenario in which mobile users access cloud-enabled services. We combine an SLA-driven approach to infrastructure optimization with forecast-based preventive actions against performance degradation and with pattern detection for supporting mobile cloud infrastructure management. We present our system's information model and architecture, including the algorithmic support and the proposed scenarios for system evaluation.
Abstract:
Previous studies in our laboratory have indicated that heparan sulfate proteoglycans (HSPGs) play an important role in murine embryo implantation. To investigate the potential function of HSPGs in human implantation, two human cell lines (RL95 and JAR) were selected to model uterine epithelium and embryonal trophectoderm, respectively. A heterologous cell-cell adhesion assay showed that initial binding between JAR and RL95 cells is mediated by cell surface glycosaminoglycans (GAGs) with heparin-like properties, i.e., heparan sulfate and dermatan sulfate. Furthermore, a single class of highly specific, protease-sensitive heparin/heparan sulfate binding sites exists on the surface of RL95 cells. Three heparin-binding tryptic peptide fragments were isolated from RL95 cell surfaces and their amino termini partially sequenced. Reverse transcription-polymerase chain reaction (RT-PCR) generated one to four PCR products per tryptic peptide. Northern blot analysis of RNA from RL95 cells using one of these RT-PCR products identified a 1.2 kb mRNA species (p24). The amino acid sequence predicted from the cDNA sequence contains a putative heparin-binding domain. A synthetic peptide representing this putative heparin-binding domain was used to generate a rabbit polyclonal antibody (anti-p24). Indirect immunofluorescence studies on RL95 and JAR cells, as well as binding studies of anti-p24 to intact RL95 cells, demonstrate that p24 is distributed on the cell surface. Western blots of RL95 membrane preparations identify a 24 kDa protein (p24) highly enriched in the 100,000 × g pellet, plasma membrane-enriched fraction. p24 eluted from membranes with 0.8 M NaCl, but not with 0.6 M NaCl, suggesting that it is a peripheral membrane component. Solubilized p24 binds heparin, as shown by heparin affinity chromatography and ¹²⁵I-heparin binding assays.
Furthermore, indirect immunofluorescence studies indicate that cytotrophoblasts of floating and attached villi at the human fetal-maternal interface are recognized by anti-p24. The study also indicates that the HSPG perlecan accumulates where chorionic villi are attached to the uterine stroma and where p24-expressing cytotrophoblasts penetrate the stroma. Collectively, these data indicate that p24 is a cell surface membrane-associated heparin/heparan sulfate binding protein found in cytotrophoblasts, but not in many other cell types of the fetal-maternal interface. Furthermore, p24 colocalizes with HSPGs in regions of cytotrophoblast invasion. These observations are consistent with a role for HSPGs and HSPG-binding proteins in human trophoblast-uterine cell interactions.
Abstract:
Background: A recent method determines regional gas flow of the lung by electrical impedance tomography (EIT). The aim of this study is to show the applicability of this method in a porcine model of mechanical ventilation in healthy and diseased lungs. Our primary hypothesis is that global gas flow measured by EIT correlates with spirometry. Our secondary hypothesis is that regional analysis of respiratory gas flow delivers physiologically meaningful results. Methods: In two sets of experiments, n = 7 healthy pigs and n = 6 pigs before and after induction of lavage lung injury were investigated. EIT of the lung and spirometry were registered synchronously during ongoing mechanical ventilation. In-vivo aeration of the lung was analysed by EIT in four regions of interest (ROIs): 1) global, 2) ventral (non-dependent), 3) middle, and 4) dorsal (dependent). Respiratory gas flow was calculated as the first derivative of the regional aeration curve. Four phases of the respiratory cycle were discriminated, yielding peak and late inspiratory and expiratory gas flow (PIF, LIF, PEF, LEF), characterizing early or late inspiration or expiration. Results: Linear regression analysis of EIT and spirometry in healthy pigs revealed a very good correlation for peak flow and a good correlation for late flow: PIF_EIT = 0.702 · PIF_spiro + 117.4, r² = 0.809; PEF_EIT = 0.690 · PEF_spiro − 124.2, r² = 0.760; LIF_EIT = 0.909 · LIF_spiro + 27.32, r² = 0.572; and LEF_EIT = 0.858 · LEF_spiro − 10.94, r² = 0.647. EIT-derived absolute gas flow was generally smaller than that from spirometry. Regional gas flow was distributed heterogeneously during different phases of the respiratory cycle, but the regional distribution of gas flow remained stable across different ventilator settings. Moderate lung injury changed the regional pattern of gas flow.
Conclusions: We conclude that the presented method is able to determine global respiratory gas flow of the lung in different phases of the respiratory cycle. Additionally, it delivers meaningful insight into regional pulmonary characteristics, i.e., the regional ability of the lung to take up and to release air.
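The core signal-processing step described in the Methods, computing regional gas flow as the first derivative of the regional aeration curve and reading off peak flows, can be sketched as follows (a toy illustration on synthetic data; the function names and sampling rate are hypothetical, not the paper's implementation):

```python
import numpy as np

def gas_flow(aeration, fs):
    """First time-derivative of an aeration curve (arbitrary units)
    sampled at fs Hz, as an estimate of regional gas flow."""
    return np.gradient(aeration, 1.0 / fs)

def peak_flows(flow):
    """Peak inspiratory (largest positive) and peak expiratory
    (most negative) flow over the recording."""
    return float(flow.max()), float(flow.min())
```

On a sinusoidal aeration curve the derivative is a cosine, so the recovered peak inspiratory and expiratory flows should match the analytic amplitude of that cosine.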
Abstract:
Cloud Computing enables provisioning and distribution of highly scalable services in a reliable, on-demand and sustainable manner. However, the objectives of managing enterprise distributed applications in cloud environments under Service Level Agreement (SLA) constraints lead to challenges in maintaining optimal resource control. Furthermore, conflicting objectives in the management of cloud infrastructure and distributed applications might lead to violations of SLAs and inefficient use of hardware and software resources. This dissertation focuses on how SLAs can be used as an input to the cloud management system (CMS), increasing the efficiency of allocating resources as well as that of infrastructure scaling. First, we present an extended SLA semantic model for modelling complex service dependencies in distributed applications and for enabling automated cloud infrastructure management operations. Second, we describe a multi-objective VM allocation algorithm for optimised resource allocation in infrastructure clouds. Third, we describe a method for discovering relations between the performance indicators of services belonging to distributed applications, and then using these relations to build scaling rules that a CMS can use for automated management of VMs. Fourth, we introduce two novel VM-scaling algorithms, which optimally scale systems composed of VMs based on given SLA performance constraints. All presented research works were implemented and tested using enterprise distributed applications.
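As a toy illustration of the kind of SLA-constrained VM-scaling rule discussed here (not the dissertation's algorithms; the inverse-scaling performance model, the headroom factor and all names are assumptions):

```python
import math

def target_vm_count(current_vms, response_ms, sla_limit_ms,
                    headroom=0.8, min_vms=1, max_vms=20):
    """Toy SLA-driven scaling rule.  Assuming response time scales
    roughly inversely with the number of VMs, choose the smallest
    VM count whose predicted response time stays below
    headroom * SLA limit (the headroom absorbs forecast error)."""
    needed = math.ceil(current_vms * response_ms / (headroom * sla_limit_ms))
    return max(min_vms, min(max_vms, needed))
```

The same rule both scales out when the SLA is threatened and scales in when the system is over-provisioned, which is the dual objective (SLA compliance versus resource efficiency) the dissertation addresses.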
Abstract:
Domestic violence is a major public health problem, yet most physicians do not effectively identify patients at risk. Medical students and residents are not routinely educated on this topic, and little is known about the factors that influence their decisions to include screening for domestic violence in their subsequent practice. To assess the readiness of primary care residents to screen all patients for domestic violence, this study used a survey incorporating constructs from the Transtheoretical Model, including Stages of Change, Decisional Balance (Pros and Cons) and Self-Efficacy. The survey was distributed to residents at the University of Texas Health Science Center Medical School in Houston in Internal Medicine, Medicine/Pediatrics, Pediatrics, Family Medicine, and Obstetrics and Gynecology. Data from the survey were analyzed to test the hypothesis that residents in the earlier Stages of Change report more costs and fewer benefits with regard to screening for domestic violence, and that those in the later stages exhibit higher Self-Efficacy scores. The findings were consistent with the model in that benefits of screening (Pros) and Self-Efficacy were correlated with later Stages of Change; however, reporting fewer costs (Cons) was not. Very few residents were ready to screen all of their patients.
Abstract:
The problem of analyzing data with updated measurements in the time-dependent proportional hazards model arises frequently in practice. One available option is to reduce the number of intervals (or updated measurements) to be included in the Cox regression model. We empirically investigated the bias of the estimator of the time-dependent covariate while varying the failure rate, sample size, true values of the parameters and the number of intervals. We also evaluated how often a time-dependent covariate needs to be collected, and assessed the effect of sample size and failure rate on the power of testing a time-dependent effect. A time-dependent proportional hazards model with two binary covariates was considered. The time axis was partitioned into k intervals. The baseline hazard was assumed to be 1, so that the failure times were exponentially distributed in the i-th interval. A type II censoring model was adopted to characterize the failure rate. The factors of interest were sample size (500, 1000), type II censoring with failure rates of 0.05, 0.10 and 0.20, and three values for each of the non-time-dependent and time-dependent covariates (1/4, 1/2, 3/4). The mean bias of the estimator of the coefficient of the time-dependent covariate decreased as sample size and number of intervals increased, whereas it increased as failure rate and true values of the covariates increased. The mean bias was smallest when all of the updated measurements were used in the model, compared with two models that used only selected measurements of the time-dependent covariate. For the model that included all the measurements, the coverage rates of the estimator of the coefficient of the time-dependent covariate were in most cases 90% or more, except when the failure rate was high (0.20).
The power associated with testing a time-dependent effect was highest when all of the measurements of the time-dependent covariate were used. An example from the Systolic Hypertension in the Elderly Program Cooperative Research Group is presented.
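The simulation design described above, exponential failure times under a proportional hazards model with type II censoring at a fixed failure rate, can be sketched as follows (a minimal illustration, not the authors' code; the single-binary-covariate setup and all names are simplifications of the two-covariate, k-interval design):

```python
import numpy as np

def simulate_type2(n, failure_rate, beta, seed=0):
    """Simulate exponential failure times with one binary covariate
    (baseline hazard = 1, hazard ratio exp(beta)) under type II
    censoring: follow-up stops once a fixed fraction (failure_rate)
    of the n subjects has failed."""
    rng = np.random.default_rng(seed)
    z = rng.integers(0, 2, size=n)            # binary covariate
    hazard = np.exp(beta * z)                 # proportional hazards
    times = rng.exponential(1.0 / hazard)     # mean = 1 / hazard
    n_events = int(round(failure_rate * n))
    cutoff = np.sort(times)[n_events - 1]     # time of the r-th failure
    observed = np.minimum(times, cutoff)
    event = (times <= cutoff).astype(int)     # 1 = failure, 0 = censored
    return observed, event, z
```

Type II censoring fixes the number of observed failures in advance, which is how the study controls the failure rate exactly at 0.05, 0.10 or 0.20 across simulation runs.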
Abstract:
This paper assesses the impact of climate change on China's agricultural production at a cross-provincial level using the Ricardian approach, incorporating a multilevel model with farm-level group data. The farm-level group data include 13,379 farm households across 316 villages distributed over 31 provinces. The empirical results show, firstly, that the marginal effects and elasticities of net crop revenue per hectare with respect to climate factors indicate that the annual impact of temperature on net crop revenue per hectare was positive, while the effect of increased precipitation was negative at the national level; secondly, that the total impact of the simulated climate change scenarios on net crop revenue per hectare at the national level was an increase of between 79 and 207 USD per hectare for the 2050s, and of between 140 and 355 USD per hectare for the 2080s. As a result, climate change may create a potential advantage for the development of Chinese agriculture, rather than a risk, especially for agriculture in the provinces of the Northeast, Northwest and North regions. However, increased precipitation can lead to a loss of net crop revenue per hectare, especially in the provinces of the Southwest, Northwest, North and Northeast regions.
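In the Ricardian approach, marginal effects such as those reported above are typically derived from a quadratic specification of net revenue in each climate variable, R = … + b₁C + b₂C² + …, so that dR/dC = b₁ + 2b₂C. A minimal sketch of that calculation (the coefficients used in the example are hypothetical, not the paper's estimates):

```python
def marginal_effect(b1, b2, climate_value):
    """Marginal effect of a climate variable C on net revenue per
    hectare in a quadratic Ricardian specification
    R = ... + b1*C + b2*C**2 + ...:  dR/dC = b1 + 2*b2*C."""
    return b1 + 2.0 * b2 * climate_value
```

With b₂ < 0 the response is hill-shaped: the marginal effect is positive below the revenue-maximizing climate value (C* = −b₁ / 2b₂) and negative above it, which is why warming can benefit cooler northern provinces while extra precipitation harms already-wet ones.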
Abstract:
The geometries of a catchment constitute the basis for distributed, physically based numerical modelling in different geoscientific disciplines. In this paper, results from ground-penetrating radar (GPR) measurements, in the form of a 3D model of total sediment thickness and active layer thickness in a periglacial catchment in western Greenland, are presented. Using the topography, the thickness and distribution of sediments are calculated. Vegetation classification and GPR measurements are used to scale active layer thickness from local measurements to catchment-scale models. Annual maximum active layer thickness varies from 0.3 m in wetlands to 2.0 m in barren areas and areas of exposed bedrock. Maximum sediment thickness is estimated to be 12.3 m in the major valleys of the catchment. A method to correlate surface vegetation with active layer thickness is also presented. By using relatively simple methods, such as probing and vegetation classification, it is possible to upscale local point measurements to catchment-scale models in areas where the upper subsurface is relatively homogeneous. The resulting spatial model of active layer thickness can be used, in combination with the sediment model, as a geometrical input to further studies of subsurface mass transport and hydrological flow paths in the periglacial catchment through numerical modelling.
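The vegetation-based upscaling described above amounts to a lookup from mapped vegetation class to a locally measured mean active layer thickness, applied over the whole catchment grid. A minimal sketch (only the wetland 0.3 m and barren 2.0 m extremes come from the abstract; the class list and intermediate values are hypothetical placeholders):

```python
import numpy as np

# Mean maximum active-layer thickness (m) per vegetation class.
# Wetland and barren values come from the abstract; the two
# intermediate classes and their values are hypothetical.
THICKNESS_BY_CLASS = {"wetland": 0.3, "shrub": 0.8,
                      "dry_heath": 1.3, "barren": 2.0}

def upscale_active_layer(class_map):
    """Map a 2-D grid of vegetation class labels to a grid of
    active-layer thickness (m) via the lookup table."""
    lut = np.vectorize(THICKNESS_BY_CLASS.get)
    return lut(class_map).astype(float)
```

In practice the table would be fitted from co-located probing and GPR measurements per class, which is the correlation method the paper presents.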
Abstract:
Distributed real-time embedded systems are becoming increasingly important to society. More demands will be made on them and greater reliance will be placed on the delivery of their services. A relevant subset of them is high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm, or significant financial loss. Additionally, the evolution of communication networks and paradigms, as well as the demand for processing power and fault tolerance, has motivated the interconnection of electronic devices; many of these communications can transfer data at high speed. The concept of distributed systems emerged to describe systems whose parts execute on several nodes that interact with each other via a communication network. Java's popularity, facilities and platform independence have made it an interesting language for the real-time and embedded community. This was the motivation for the development of the RTSJ (Real-Time Specification for Java), a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques. However, the RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification is being developed under the Java Community Process (JSR-302); its purpose is to define the capabilities needed to create safety-critical applications with Java technology, called Safety Critical Java (SCJ). However, neither the RTSJ nor its profiles provide facilities to develop distributed real-time applications. This is an important issue, as most current and future systems will be distributed.
The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) to define appropriate abstractions to overcome this problem; currently there is no formal specification. The aim of this thesis is to develop a communication middleware suitable for the development of distributed hard real-time systems in Java, based on the integration between the RMI (Remote Method Invocation) model and the HRTJ profile. It has been designed and implemented with the main requirements in mind, such as predictability and reliability of the timing behavior and the resource usage. The design starts with the definition of a computational model which identifies, among other things, the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees suitable timing behavior. It also includes mechanisms to monitor the functional and the timing behavior. It provides independence from the network protocol by defining a network interface and modules. The JRMP protocol was modified to include the two phases, non-functional parameters, and message size optimizations. Although serialization is one of the fundamental operations for ensuring proper data transmission, current implementations are not suitable for hard real-time systems and there are no alternatives. This thesis proposes a predictable serialization that introduces a new compiler to generate optimized code according to the computational model.
The proposed solution has the advantage of allowing us to schedule the communications and to adjust the memory usage at compilation time. In order to validate the design and the implementation, a demanding validation process was carried out, with emphasis on the functional behavior, the memory usage, the processor usage (the end-to-end response time and the response time of each functional block) and the network usage (actual consumption compared with the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements.